id: 35513906 | source: pes2o/s2orc | version: v3-fos-license
Constipation in Childhood Coeliac Disease
Twelve of 112 infants and children with coeliac disease were constipated at some time before diagnosis. Of the 12 children 9 had faecal impaction when first seen, and in 3 of them coeliac disease was not suspected at the first investigation; 3 children had a history of constipation alternating with mild diarrhoea; 4 had no diarrhoea at any time, and steatorrhoea was found only in 3 of 9 cases. Over 30% of children with active coeliac disease did not have steatorrhoea at the time of diagnosis when on diets containing usual amounts of fat. Constipation was probably due to anorexia, normal or increased ileal function, and decreased intestinal motility.
Constipation in children with active coeliac disease has been mentioned in reports from this hospital by Chen et al. (1964) and McNicholl and Egan (1968), and also by Anderson (1966) and Dyer and Dawson (1968), in children and adults, respectively. In two recent large series of coeliac disease, totalling 152 cases (Hamilton, Lynch, and Reilly, 1969; Young and Pringle, 1971), constipation is not mentioned, though cases without diarrhoea are described. We have reviewed our case material to assess the incidence of this not generally recognized feature of coeliac disease. By constipation we mean the passage of stools of harder consistency than normal, or the clinical observation of impaction of abnormal amounts of hard (usually pale) faeces in colon and rectum.
Material and Methods
One hundred and twelve children were thought to have coeliac disease according to criteria described previously (McNicholl and Egan, 1968), but principally because of undernutrition and retarded growth accompanied by Grade 2/3 or Grade 3 jejunal mucosal damage according to our classification (normal mucosa is graded 0, mild non-specific change 1, and grades 2 and 3 correspond to moderate and severe villous atrophy). Growth retardation was assessed on the graphs of Tanner and Whitehouse (1959) and subsequently confirmed by catch-up growth following treatment with gluten-free diets. Faeces were collected between carmine markers given at 5-day intervals, their fatty acid content being measured by the method of van de Kamer, ten Bokkel Huinink, and Weyers (1949). Fat intake was not measured, but diets were designed to contain from 35 g fat daily under 1 year of age to over 65 g daily in the older children; the diet, containing adequate amounts of gluten, was fed for 7 days or longer before faecal collections were started.
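As an aside for readers working through the figures quoted later (a daily faecal fat excretion of 4.5 g or more being taken as steatorrhoea), the short sketch below shows the simple arithmetic implied by these collections: total fat recovered between carmine markers divided by the length of the collection period, compared with the threshold. The values are hypothetical and do not correspond to any case in Table I.

```python
# Illustrative sketch only: hypothetical 5-day marker-to-marker collection,
# judged against Anderson's (1966) steatorrhoea criterion of 4.5 g/day.
STEATORRHOEA_THRESHOLD_G_PER_DAY = 4.5

total_faecal_fat_g = 16.0   # assumed total fat recovered between carmine markers
collection_days = 5         # markers given at 5-day intervals

daily_excretion = total_faecal_fat_g / collection_days
print(f"faecal fat {daily_excretion:.1f} g/day; "
      f"steatorrhoea: {daily_excretion >= STEATORRHOEA_THRESHOLD_G_PER_DAY}")
```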
Results
Twelve children were constipated at some stage before diagnosis, details being given in Table I. Nine children presented with constipation and faecal impaction; of these, 5 had had intermittent diarrhoea and constipation, but 4 (Cases 2, 4, 5, and 11) never had diarrhoea, and Cases 2, 4, and 5, who presented at around 1 year of age with anorexia, vomiting, failure to thrive, and faecal impaction, are described elsewhere (McNicholl and Egan-Mitchell, 1972). Case 3 was admitted to a surgical ward with suspected subacute intestinal obstruction because of vomiting and faecal impaction. Three children, Cases 7, 9, and 11, had been investigated by us between 4 months and 2 years previously for constipation, growth retardation, and faecal impaction, one also having mild iron-deficiency anaemia, but were not then suspected of having coeliac disease; we consider it almost certain that these 3 children had active coeliac disease when first seen. Case 12 had been referred from another hospital 9 months previously with a provisional diagnosis of Hirschsprung's disease because of growth retardation, abdominal distension, faecal impaction, anaemia, and a large colon on x-ray; Hirschsprung's disease was discounted, but coeliac disease was not diagnosed. At subsequent investigation, grade 3 mucosal changes were found, as also in Cases 7, 9, and 11. The 3 children who did not have faecal impaction when investigated had histories of constipation alternating with mild diarrhoea, and all had been given laxatives frequently for their constipation. Though a description of the colour of the faeces was not recorded in every case, descriptions by parents and hospital staff were almost always of a pale or 'putty-like' colour. Accepting the criterion of Anderson (1966) for steatorrhoea as a daily faecal fat excretion of 4.5 g or more, only 3 of the 12 constipated children, Cases 10, 11, and 12, had steatorrhoea, and Case 7, with 4 g daily, would be regarded with suspicion. The 3 children with mild steatorrhoea were older than the remainder and their disease was of longer standing. The faecal fats were not estimated until the time of diagnosis in Cases 7, 11, and 12, and were not estimated at all in Cases 1, 6, and 9; nevertheless, we feel confident of the diagnosis in the latter 3 on the basis of the mucosal changes and the response to treatment. A full faecal analysis was done in one case only (Table II).
Discussion
Though many clinicians are familiar with the occurrence of constipation in coeliac disease, it is regarded with scepticism in some centres. Two recent series of 42 and 110 cases, Hamilton et al. (1969) and Young and Pringle (1971), while mentioning children without diarrhoea, do not describe constipation or faecal impaction. Quite apart from occasional constipation, constipation with marked faecal impaction may understandably deflect some clinicians from considering the possibility of coeliac disease. The factors causing constipation in active coeliac disease are probably anorexia, compensatory ileal hypertrophy, and reduced intestinal motility. Frazer (1960) stressed the frequency of anorexia in coeliac disease, and Gent and Creamer (1968) instanced the low fat intake resulting from anorexia as one cause of low or normal fat excretion. MacDonald et al. (1964) and Stewart et al. (1967) showed that the ileum may be relatively or almost completely unaffected in the presence of severe jejunal damage; Dowling and Booth (1967) showed the ability of the rat ileum to hypertrophy and take over jejunal function following jejunal resection. Cameron et al. (1962) were the first to record normal faecal fat excretion in coeliac disease (3 children), and we reported that 14 of 43 children with coeliac disease excreted less than 3.5 g fat daily (McNicholl and Egan, 1968); Stewart et al. (1967) and Gent and Creamer (1968) found normal fat excretion in 19% and 25%, respectively, of adults with coeliac disease.
Our continuing experience has been that over 30% of children with active coeliac disease do not have steatorrhoea, at least during a single 5-day period ingesting moderate amounts of fat, as in the diets described above.
It seems reasonable to suggest that the clinical picture in coeliac disease may be influenced by such factors as anorexia and the extent of the mucosal damage. Normal or constipated stools with normal fat excretion may be more likely when anorexia is prominent and the disease mainly affects the upper intestine, loose fatty stools being more likely when
added: 2018-04-03T02:53:18.185Z | created: 1972-04-01T00:00:00.000 | metadata:
{
"year": 1972,
"sha1": "4e34cc57f91017853b3f452bcafbb0280fe99361",
"oa_license": "CCBY",
"oa_url": "https://adc.bmj.com/content/archdischild/47/252/238.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "BMJ",
"pdf_hash": "ae65ed8a21f6ecee165e5ca692394c2a722fd492",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
id: 7732051 | source: pes2o/s2orc | version: v3-fos-license
Rapid Prion Neuroinvasion following Tongue Infection
ABSTRACT Food-borne transmission of prions can lead to infection of the gastrointestinal tract and neuroinvasion via the splanchnic and vagus nerves. Here we report that the transmission of transmissible mink encephalopathy (TME) is 100,000-fold more efficient by inoculation of prions into the tongues of hamsters than by oral ingestion. The incubation period following TME agent (hereinafter referred to as TME) inoculation into the lingual muscles was the shortest among the five nonneuronal routes of inoculation, including another intramuscular route. Deposition of the abnormal isoform of the prion protein, PrPSc, was first detected in the tongue and submandibular lymph node at 1 to 2 weeks following inoculation of the tongue with TME. PrPSc deposits in the tongue were associated with individual axons, and the initial appearance of TME in the brain stem was found in the hypoglossal nucleus at 2 weeks postinfection. At later time points, PrPSc was localized to brain cell groups that directly project to the hypoglossal nucleus, indicating the transneuronal spread of TME. TME PrPSc entry into the brain stem preceded PrPSc detection in the rostral cervical spinal cord. These results demonstrate that TME can replicate in both the tongue and regional lymph nodes but indicate that the faster route of brain invasion is via retrograde axonal transport within the hypoglossal nerve to the hypoglossal nucleus. Topical application of TME to a superficial wound on the surface of the tongue resulted in a higher incidence of disease and a shorter incubation period than with oral TME ingestion. Therefore, abrasions of the tongue in livestock and humans may predispose a host to oral prion infection of the tongue-associated cranial nerves. In a related study, PrPSc was detected in tongues following the intracerebral inoculation of six hamster-adapted prion strains, which demonstrates that prions can also travel from the brain to the tongue in the anterograde direction along the tongue-associated cranial nerves. These findings suggest that food products containing ruminant or cervid tongue may be a potential source of prion infection for humans.
Prion diseases are fatal neurodegenerative diseases of humans, livestock, and cervids. The majority of prion diseases have an infectious etiology, and food-borne infection has been linked to the transmission of transmissible mink encephalopathy (TME), bovine spongiform encephalopathy (BSE), and kuru in humans (21,23,58). Indirect evidence suggests that oral infection is involved in the transmission of other prion diseases, such as scrapie in sheep, chronic wasting disease (CWD) in deer and elk, and variant Creutzfeldt-Jakob disease in humans (2,13,26,47,52). The experimental ingestion of high doses of scrapie agent (hereinafter referred to as scrapie) has been used to determine the sites of scrapie replication in peripheral tissues and the routes by which the disease spreads to the peripheral and central nervous systems (24,33,36,38,54).
The disease-specific isoform of the prion protein, PrP Sc , is found in the enteric nervous system of the submucosal and myenteric plexus and the gut-associated lymphoid tissue following oral scrapie ingestion (6,24,36,38). Prion spread from these sites to the central nervous system can occur by axonal transport within the parasympathetic nervous system (e.g., from the vagus nerve to the dorsal motor nucleus of the vagus) and the sympathetic nervous system (e.g., from the splanchnic nerve to the intermediolateral cell column of the spinal cord) (36,38). The distribution of PrP Sc in the tissues of subclinically infected sheep with scrapie and deer with CWD is consistent with spread along these pathways (2,47,52,53). The additional spread of prions within a host can occur within the lymphoreticular system (LRS) and can result in systemic prion infection of secondary lymphoid organs. There is no consensus on the cell type(s) involved in prion replication and accumulation in the LRS. Although PrP Sc deposition is associated with follicular dendritic cells in the germinal centers of the secondary lymphoid organs (31,37), recent studies using immunodeficient mice indicated that mature follicular dendritic cells are not required for prion infection and neuroinvasion (34,40,42). Macrophage subsets located in the marginal zones of secondary lymphoid tissues appear to be necessary for prion propagation in the LRS (34,42).
The view that LRS infection must be established prior to the spread of prions to the nervous system has been challenged by several studies. Prion infectivity and PrP Sc are not detected outside of the nervous system in animals with natural BSE (11), even though early PrP Sc deposition in the brain stem has been reported to take place in the dorsal motor nucleus of the vagus and the nucleus of the solitary tract (44). Oral ingestion of high doses of mouse-adapted scrapie can also result in neuroinvasion and disease in the absence of LRS infection (43). In one study, peripheral scrapie inoculation was performed on transgenic mice (with a PrP knockout genetic background) that had restricted expression of Syrian hamster PrP C in a subset of neuronal cells (i.e., gene expression was controlled by the neuron-specific enolase promoter) and no expression of PrP C in secondary lymphoid organs. In these transgenic mice, infection with the 263K strain of scrapie was not found in the LRS due to the lack of PrP C expression, but the mice were susceptible to hamster-adapted 263K scrapie by intraperitoneal (i.p.) inoculation and oral ingestion (43). These findings indicate that peripheral prion infection and neuroinvasion can be LRS independent and suggest that direct infection of the nervous system is an alternate route of infection. This conclusion is supported by additional studies in which peripheral scrapie inoculation of immunodeficient mice (e.g., muMT and RAG-1 knockout mice, which lack functional germinal centers and are unable to replicate scrapie in the LRS) resulted in scrapie infection of the brain (20).
In the present study, we investigated the ability of the HY strain of the TME agent (hereinafter referred to as HY TME) to establish disease in hamsters following oral infection by ingestion, inoculation of the lingual muscles, or topical application to the surface of the tongue in the presence and absence of a superficial wound. We demonstrate that tongue infection is a more efficient route of prion neuroinvasion than ingestion and that HY TME can directly spread to the brain from the tongue via the hypoglossal nerve. We propose that the exposure of nerve endings in the tongue or oral cavity, possibly due to lesions or microbial infections, may increase the risk of prion infection and may serve as an alternate route of infection following oral prion exposure. In addition, we demonstrate that six hamster-adapted prion strains can spread to the tongue following intracerebral (i.c.) inoculation. This finding has implications for public health, since livestock tongue is used in food products and may be a potential source of prion infection in humans.
MATERIALS AND METHODS
Strains of hamster TME and scrapie agents. Biological clones of HY and DY TME were isolated as previously described (9) and maintained by i.c. inoculation into weanling male outbred Golden Syrian hamsters (Harlan Sprague Dawley, Indianapolis, Ind.) as described below. Scrapie strains 139H, 22AH, 22CH, and Me7H were isolated upon serial passage in hamsters and were a gift from Richard Rubenstein (New York State Institute for Basic Research in Developmental Disabilities, Staten Island, N.Y.).
Animal inoculations. All procedures involving animals were approved by the Creighton University Institutional Animal Care and Use Committee and are in compliance with the Guide for the Care and Use of Laboratory Animals (39). Hamsters were inoculated with 5 to 100 µl of a 1% (wt/vol) brain homogenate from an HY TME-infected hamster containing 10^7.5 median (50%) lethal doses (LD50) per ml. Intrasciatic nerve (i.n.), i.c., and i.p. inoculations were performed as previously described (4,9). Intratongue (i.t.) inoculations were performed by the bilateral inoculation of 20 µl of HY TME into the intrinsic muscles of the tongue. In a second study (see Fig. 5), the tongues of hamsters were unilaterally inoculated with 5 µl of HY TME. For intramuscular inoculations, hamsters received injections in the right femoral biceps. Intravenous inoculations were performed by injecting HY TME into the penile vein. For oral ingestion studies, inoculum was dried on a food pellet and subsequently fed to hamsters. To produce a superficial wound on the tongue, hamsters were anesthetized with a ketamine and xylazine mixture and the tip of a 30-gauge needle was used to cut the dorsal surface of the tongue. Each hamster received a 3-mm-long wound that penetrated through the epithelium. HY TME inoculum was directly applied to the wound before each animal regained consciousness. Following inoculation of HY TME, hamsters were observed daily for the onset of clinical symptoms. The incubation period was determined based on the initial onset and early progression of symptoms characteristic of HY TME, which included hyperactivity in response to touch and sound, a tremor of the head and body, and ataxia.
Efficiency of the i.t. route. Endpoint titration of HY TME by the i.c. and i.t. routes of inoculation was performed by injecting groups of five hamsters with consecutive, serial 10-fold dilutions of HY TME-infected brain. The titer was calculated by the method of Kärber. Differences in titer were used to determine the efficiency of i.t. inoculation relative to that of i.c. inoculation in establishing TME infection as previously described for rodent-adapted scrapie (12,29).
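The Kärber calculation itself is not reproduced in the paper. As a minimal sketch of the approach described here, the code below applies the standard Spearman-Kärber 50% endpoint estimate to hypothetical titration counts and shows how a difference in log10 titers between two routes translates into a fold-difference in efficiency. The animal counts are invented for illustration, and converting an endpoint to LD50 per gram of brain would further require the inoculum volume and homogenate concentration, which are omitted.

```python
# Sketch of the Spearman-Kärber (Kärber) 50% endpoint estimate and the
# fold-difference arithmetic used to compare routes of inoculation.
# Proportions are hypothetical, not the paper's raw titration data.

def karber_log10_endpoint(proportions, first_dilution_log10=1.0, step_log10=1.0):
    """Return -log10 of the 50% endpoint dilution.

    proportions: fraction of animals affected at each serial dilution, ordered
    from the most concentrated (expected 1.0) to the most dilute (expected 0.0).
    first_dilution_log10: -log10 of the most concentrated dilution tested.
    step_log10: log10 of the dilution factor (1.0 for 10-fold steps).
    """
    return first_dilution_log10 + step_log10 * (sum(proportions) - 0.5)

# Five hamsters per 10-fold dilution group, 10^-1 through 10^-9 (hypothetical):
ic_affected = [5, 5, 5, 5, 5, 5, 5, 4, 0]   # intracerebral route
it_affected = [5, 5, 5, 5, 5, 5, 3, 1, 0]   # intratongue route

ic_log10 = karber_log10_endpoint([n / 5 for n in ic_affected])
it_log10 = karber_log10_endpoint([n / 5 for n in it_affected])
print(f"i.c. endpoint 10^-{ic_log10:.1f}, i.t. endpoint 10^-{it_log10:.1f}")
print(f"i.t. is ~{10 ** (ic_log10 - it_log10):.0f}-fold less efficient than i.c.")
```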
Tissue collection for PrP Sc analysis.
To study the route of TME spread following i.t. inoculation of HY TME, three to five hamsters were sacrificed each week postinfection for 10 consecutive weeks. The brains, spinal cords, tongues, spleens, and submandibular and cervical lymph nodes were collected for PrP Sc analysis by Western blotting and immunohistochemistry.
Tissue preparation for PrP Sc Western blotting. PrP Sc was enriched from tissue prior to Western blotting. Briefly, 2 to 100 mg of tissue was homogenized to 20% (wt/vol) in Tris-HCl (pH 7.4) buffer containing 5 mM MgCl 2 . Benzonase nuclease (Novagen, Inc., Madison, Wis.) was added to a concentration of 100 U per ml, and the reaction mixture was incubated at 37°C for 1 h with constant shaking. An equal volume of 20% (wt/vol) N-lauroylsarcosine in 10 mM Tris-HCl (pH 7.4)-133 mM NaCl-1 mM EDTA was added, and the tissue homogenates were incubated for 30 min at room temperature with constant shaking. The tissue homogenates were further subjected to a series of ultracentrifugation procedures and a proteinase K digestion step in order to enrich for PrP Sc as previously described (4). The PrP-enriched pellet was resuspended in a sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) sample loading buffer.
Western blot analysis. SDS-PAGE and Western blot analysis were performed as previously described with monoclonal antibody 3F4 hybridoma (28) (a gift of Victoria Lawson, National Institutes of Health Rocky Mountain Laboratories, Hamilton, Mont.) (4,8). Quantification of PrP Sc bands from Western blots was performed with a Storm PhosphorImager (Molecular Dynamics, Sunnyvale, Calif.) and ImageQuant software as previously described (4).
PrP Sc immunohistochemistry. Immunostaining of brain tissue for PrP Sc was performed as previously described (8) or by using the method of Wilson and McBride (59). Briefly, tissues were immersion fixed in neutral buffered formalin or animals were perfused with McLean's paraformaldehyde-lysine-periodate (PLP) fixative, after which tissues were postfixed in PLP. Paraffin-embedded tissue sections (7 µm) were subjected to antigen retrieval by either hydrolytic autoclaving (1 to 3 mM HCl) or pretreatment with formic acid for 20 min. We found that the procedure using PLP and short fixation times produced more consistent results and was less disruptive to tissue morphology. A minimum of 2 serial sections for every 20 tissue sections were examined for PrP Sc by immunohistochemistry analysis. In the brain, the region between the first segment of the cervical spinal cord and the midbrain at the level of the inferior colliculus was analyzed each week postinfection. Tissues were incubated with monoclonal 3F4 hybridoma antibody (1:600 dilution) or ascites fluid (1:2,000 dilution) (the latter was a gift from Richard Kascsak, Institute for Basic Research in Developmental Disabilities, Staten Island, N.Y.). The ABC-HRP Elite (Vector Laboratories, Burlingame, Calif.) method was used for anti-PrP antibody signal amplification, and PrP Sc was visualized with 3-amino-9-ethylcarbazole in 50 mM sodium acetate (pH 5.0)-0.03% H2O2. For immunofluorescence, rabbit anti-mouse Alexa Fluor 488 (Molecular Probes, Portland, Oreg.) was used at a 1:200 dilution. Adjacent tissue sections were stained with cresyl violet to aid in the identification of brain and brain stem nuclei.
TME infection by neuronal and nonneuronal routes of inoculation.
The length of the incubation period following inoculation of HY TME was investigated with neuronal and nonneuronal routes of inoculation. i.c. inoculation directly established TME infection in the brain and resulted in an incubation period of 59 ± 1 day, which was 42 days shorter than the incubation period resulting from i.p. inoculation (Table 1). i.n. inoculation directly established infection of the peripheral nervous system and resulted in an incubation period 8 days longer than that associated with i.c. inoculation but 33 days shorter than that associated with i.p. inoculation (Table 1). Among the five nonneuronal peripheral routes of inoculation, i.t. inoculation into the lingual muscles resulted in the shortest incubation period, at 79 ± 5 days. The incubation period of the i.t. route of inoculation was statistically significantly longer (P < 0.01) than those of the i.c. and i.n. routes but statistically significantly shorter (P < 0.01) than those of oral ingestion and i.p., intravenous, and intramuscular inoculations of HY TME (Table 1).
The efficiency of the i.t. route of TME infection was determined by calculating the TME titer by endpoint dilution for both the i.t. and i.c. routes. All of the hamsters that were i.c. inoculated with the 10^-8 dilution (wt/vol) of brain homogenate developed TME, while none of the hamsters in the group inoculated with the 10^-9 dilution of brain homogenate developed clinical TME by 400 days postinfection (Table 2). Of the i.t.-inoculated animals, 20% of those in the 10^-7 dilution group developed TME and none developed clinical symptoms of TME at higher dilutions of brain inoculum. Based on these data, the LD50 per gram of brain were 10^9.5 and 10^8.4 for the i.c. and i.t. routes of inoculation, respectively (Table 2). These findings demonstrated that the i.t. route of inoculation was 10- to 100-fold less efficient in transmitting disease than i.c. inoculation of HY TME. In contrast, only 20% of the hamsters that received the 10^-2 dilution of brain inoculum by oral ingestion developed clinical TME (Table 1). This percentage of clinically affected animals was similar to that found for those receiving the 10^-7 dilution of brain inoculum following i.t. inoculation. Based on this comparative analysis, the estimated titer of HY TME following oral ingestion would be 10^3.4 LD50 per g of brain. These results indicate that the i.t. route of TME inoculation was 100,000-fold more efficient in transmitting disease than oral ingestion of HY TME.
Temporal accumulation of PrP Sc following i.t. inoculation of HY TME. The chronological deposition of PrP Sc was investigated in order to determine the route of HY TME neuroinvasion following i.t. inoculation. Hamsters were mock infected or i.t. inoculated with HY TME, and three animals per group were sacrificed each week postinfection. Brains, spinal cords, tongues, spleens, and submandibular and cervical lymph nodes were collected for PrP Sc analysis. PrP Sc deposition was detected in the tongue from PrP Sc-enriched preparations (25-mg tissue equivalents) beginning at 2 weeks postinfection (Fig. 1A). The amount of PrP Sc in the tongue gradually increased each week until 8 weeks postinfection. There was approximately a twofold increase in the amount of PrP Sc found in the tongue at 8 weeks postinfection compared to the levels measured at 7 weeks postinfection (Fig. 1). From 8 to 10 weeks postinfection, PrP Sc levels in the tongue reached a plateau. In the first half of the incubation period, PrP Sc was localized to nerve fascicles in the tongue and was associated with individual axons (Fig. 2A). These findings indicate that TME can replicate in the tongue at early stages of infection and that axons are potential sites for PrP Sc formation or accumulation.
PrP Sc was not detected in comparable amounts of spleen (25-mg tissue equivalents) between 1 and 10 weeks postinfection, suggesting that a systemic infection of the LRS did not occur following i.t. inoculation of HY TME (Fig. 3). However, PrP Sc was detected in the submandibular lymph node (25-mg tissue equivalents) and cervical lymph node (2-mg tissue equivalents) at 1 through 10 weeks postinfection (Fig. 3 and data not shown). The PrP Sc levels in the submandibular lymph node peaked by 3 to 4 weeks postinfection and, at this time, were present at higher levels than those found in the tongue. These findings indicated that HY TME established a regional infection of the LRS following i.t. inoculation.
PrP Sc deposition in the central nervous system following i.t. inoculation of HY TME. PrP Sc immunohistochemistry was performed on the spinal cord and brain stem at weekly intervals postinfection to determine whether HY TME enters the brain stem by rostral spread in the spinal cord following i.t. inoculation of TME. PrP Sc was initially found in the brain stem at 2 weeks postinfection but was not present in the first segment of the cervical spinal cord (C1) until 6 weeks postinfection (Table 3). At 6 weeks postinfection, PrP Sc deposits were not detected in segments below C1. These results suggested that TME spread directly from the tongue into the brain stem and that entry was not a result of rostral transport from the spinal cord. The entry of PrP Sc into C1 at 6 weeks postinfection was likely due to the caudal spread of TME from the brain stem to the spinal cord.
The sites of PrP Sc deposition in the brain stem were investigated to determine the possible route(s) of neuroinvasion following i.t. inoculation of HY TME. PrP Sc was initially found in the hypoglossal nucleus (XII nucleus) at 2 weeks postinfection, and the intensity and distribution of PrP Sc immunostaining in the XII nucleus increased between 2 and 6 weeks postinfection (Table 3 and Fig. 4A and B). PrP Sc deposits were found in additional areas of the brain beginning at 4 weeks postinfection. Prominent PrP Sc accumulation was detected in specific areas of the reticular formation and to a lesser degree in the sensory trigeminal nucleus and the nucleus of the solitary tract (Table 3). These findings indicate that HY TME can spread to the brain stem via axonal transport within the hypoglossal nerve (cranial nerve XII [CN XII]) following i.t. inoculation. Subsequent spread of HY TME in the brain is consistent with transsynaptic TME spread and axonal transport to brain cell groups that project to the XII nucleus (i.e., second-order neurons). PrP Sc deposition was also found in the dorsal motor nucleus of the vagus (X nucleus) at 6 weeks postinfection, but these deposits were not located in the somata of neurons.
Punctate cytoplasmic and perinuclear PrP Sc deposits were found in motoneurons beginning at the earliest detection of TME in the XII nucleus, at 2 weeks postinfection. The number of PrP Sc deposits varied within the cytoplasm, and initially these deposits had a small, uniform size, but at later times large heterogeneous PrP Sc deposits were also found in the cytoplasm (Fig. 4C). The intracellular accumulation of PrP Sc in the motoneurons of the XII nucleus is consistent with retrograde axonal transport from the axons located in the tongue to the cell bodies located in the brain stem.
Tongue lesion model of prion infection. To investigate the role of an injury to the tongue in establishing prion infection, we tested the hypothesis that a wound on the surface of the tongue will enhance prion entry following oral prion exposure. A 30-gauge needle was used to make a 3-mm-long superficial cut in the dorsal epithelium of the tongue, and HY TME inoculum was topically applied to the surface of the tongue. The incubation period for the tongue lesion group was 161 ± 47 days (15 affected of 15 inoculated), and the first six hamsters in this group to develop TME had an average incubation period of 110 ± 15 days, which was statistically significantly different (P < 0.001) from the incubation periods of the i.t.-inoculated group (82 ± 2 days; 15 affected of 15 inoculated) and the oral ingestion group (184 ± 23 days; 3 affected of 15 inoculated) (Fig. 5). A fourth group of hamsters received a topical application of HY TME to the dorsal surface of the tongue in the absence of a lesion. In this group, the mean incubation period and the percentage of animals that developed clinical TME (185 ± 35 days; 5 affected of 14 inoculated) were similar to those of the oral ingestion group (Fig. 5). These findings indicate that a lesion on the surface of the tongue can increase the likelihood of prion infection following oral exposure.
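The paper gives these comparisons as means ± standard deviations with P values but does not state which statistical test was used. Purely as a hedged illustration of how such a comparison of incubation periods might be run, the sketch below applies a two-sample (Welch's) t-test to invented per-animal values; it is not the authors' analysis and the numbers are not the study's data.

```python
# Hypothetical illustration of comparing incubation periods between two routes;
# the per-animal values are invented and the paper does not state its test.
from scipy import stats

tongue_lesion_days = [88, 95, 104, 110, 118, 125, 140, 155, 170, 190, 205, 215, 230, 240, 250]
intratongue_days   = [79, 80, 81, 81, 82, 82, 82, 82, 83, 83, 83, 84, 84, 85, 85]

t_stat, p_value = stats.ttest_ind(tongue_lesion_days, intratongue_days, equal_var=False)
print(f"Welch's t = {t_stat:.2f}, P = {p_value:.2g}")
```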
Prion transport from the brain to the tongue following i.c. inoculation. To investigate whether prion infection in the brain can spread to the tongue, hamsters were i.c. inoculated with HY TME and the tongue was examined for PrP Sc deposition. After the onset of clinical symptoms of HY TME, PrP Sc was found in a PrP-enriched preparation of the tongue (25-mg equivalents) upon analysis by Western blotting (Fig. 6). With the use of immunohistochemistry, PrP Sc was found to be associated with axons in the nerve fascicles of the tongue in hamsters that were i.c. inoculated with HY TME (Fig. 2B and C). To determine if the spread of PrP Sc to the tongue was a property of additional prion strains, we examined the tongue for PrP Sc deposition in hamsters that had been i.c. inoculated with DY TME and scrapie strains 139H, 22AH, 22CH, and Me7H. These five prion strains have distinct phenotypes that are defined by incubation period, clinical symptoms, and brain neuropathology, as previously reported. For each prion strain, PrP Sc was found in the tongue at the onset of clinical disease (Fig. 6), indicating that prion infection of the tongue is a common outcome following prion infection of the brain. These findings demonstrate that prion infection can spread to skeletal muscle, and specifically to the tongue, from a prion infection that originates in the brain.
DISCUSSION
Our findings indicate that the inoculation of HY TME into the lingual muscles results in TME replication in the tongue and regional lymph nodes and direct HY TME transport within CN XII to motoneurons in the XII nucleus. HY TME infection of the tongue resulted in the shortest prion incubation period reported for Golden Syrian hamsters following inoculation by a nonneuronal route. PrP Sc deposition in the XII nucleus at 2 weeks postinfection is also the shortest amount of time in which prions have been demonstrated to enter the brain following peripheral inoculation in any experimental model, including direct ocular and i.n. inoculations. There was a 6-week delay in the entry of PrP Sc into the brain stem following i.n. inoculation of HY TME (4), while infectivity was not found in the brain for 7 weeks following the intraocular inoculation of hamsters with scrapie strain 263K (a strain with properties similar to those of HY TME) (30). An 8- to 10-week delay in the increase in titer in the brain was found following intraocular inoculation of murine scrapie (19,45), but other studies report a minimum of 2 weeks for scrapie to reach the brain via the optic nerve even though no scrapie infectivity was detected at this time point (46). HY TME infectivity should be found in the XII nucleus by the prion animal bioassay in less than 2 weeks following i.t. inoculation, since bioassay is a more sensitive method for measuring prions than PrP Sc immunohistochemistry (5). Our findings suggest that the inoculation of TME into peripheral tissues that are innervated by cranial nerves that project to the brain stem, such as the skeletal muscle of the tongue, can result in rapid direct prion neuroinvasion of the brain.
The present study indicates that prion infection of the tongue may be an alternate route of prion neuroinvasion following oral exposure. The transport of TME to the XII nucleus following i.t. inoculation was rapid, and the efficiency of the i.t. route of inoculation was 100,000-fold greater than that of oral ingestion of HY TME. A previous study also reported a low efficiency of infection in hamsters following oral ingestion of 263K scrapie (16). Our findings indicate that a low dose of prions, which is more likely to exemplify a natural infection, is unable to cause disease when the prions are orally ingested but may cause disease when they are inoculated into the tongue. We also found a higher incidence of TME infection (100%) following topical application of HY TME to a superficial wound on the tongue than that resulting from a similar dose delivered by oral TME ingestion (20%), suggesting that TME neuroinvasion proceeds by different pathways in these groups. In the hamster tongue lesion group, the incubation periods of three animals ranged from 88 to 104 days postinfection, which may have been due to prion infection of the tongue, but these incubation times are not consistent with neuroinvasion via the splanchnic and vagus nerves following oral ingestion (6). Reduced access to prion replication sites in the tongue or lower prion doses delivered to these sites may account for the long and highly variable incubation periods in the tongue lesion group compared to those in the i.t. inoculation group. Prion infection via a lesion on the tongue is a more representative route of natural infection than i.t. inoculation, especially in grazing and foraging animal species such as ruminants and cervids. Prior studies report that scrapie inoculation into the tooth pulp of hamsters (25) and scrapie exposure via gingival scarification (14) in mice can cause disease, but i.t. inoculation results in a significantly shorter incubation period than intradental inoculation (82 ± 2 versus 156 ± 16 days). We propose that prion infection by an alternate route can occur when a host has an infection or minor wound on the tongue and that, under these conditions, greater access to the tongue-associated cranial nerves may result in prion infection and direct prion transport to the brain stem.
FIG. 5. Incubation period of HY TME following oral infection. Syrian hamsters were exposed to HY TME by four different oral routes of inoculation. The percentage of unaffected animals in each group versus the incubation periods of individual affected hamsters following inoculation of 10^5.2 LD50 of HY TME was plotted for each route.
FIG. 6. PrP Sc accumulation in the tongue following i.c. inoculation of TME or scrapie prions. Hamsters were i.c. inoculated with distinct TME or scrapie strains, and animals were sacrificed during the early stages of clinical disease. PrP Sc was purified from the tongue and analyzed by Western blotting as described in the legend to Fig. 1. Each lane contains 25-mg tissue equivalents.
The detection of PrP Sc within axons in the tongue following the inoculation of HY TME into the lingual muscles and the localization of PrP Sc to the XII nucleus indicate that movement of HY TME to the brain stem was via retrograde axonal transport within CN XII. Although HY TME was also detected in the submandibular and cervical lymph nodes at 1 week postinfection, the absence of PrP Sc in the spinal cord at 5 weeks postinfection is inconsistent with TME neuroinvasion of the brain stem via the sympathetic nervous system. HY TME entry into CN XII may occur at the neuromuscular junction since PrP C is localized to subsynaptic areas of the postsynaptic and presynaptic cells (3,22). It is possible that PrP Sc can bind to PrP C at the neuromuscular junction and allow PrP Sc entry into the nerve terminal. In the present study, PrP Sc was localized to individual axons in nerve fascicles but we were unable to determine the spatial location of PrP Sc in the neuromuscular junction. The detection of PrP Sc in the tongue at 2 weeks postinfection and the subsequent increase in PrP Sc levels indicate that HY TME can replicate in the tongue. Previous studies described TME replication in the skeletal muscles of mink (35) and scrapie replication in the muscles of transgenic mice that express elevated levels of PrP C in myocytes (10). The lingual tonsils located at the root of the tongue may also serve as a site for HY TME replication.
Our findings on the location of PrP Sc in the brain stem following i.t. inoculation are consistent with those of previous studies that used viral transneuronal tracers to identify brain cell groups involved in higher-order afferent control of the lingual muscles (18,32,49,51). The distribution of PrP Sc in the brain stem outside of the XII nucleus at 4 through 6 weeks postinfection is consistent with the transsynaptic spread and retrograde axonal transport of HY TME to second-order brain stem neurons. Each of the brain cell groups with PrP Sc deposits has been reported to project its axons to the XII nucleus (18,32,49,50,51). PrP Sc deposition was also found in the X nucleus in hamsters at earlier times postinfection than has been previously reported following experimental prion ingestion (38), but immunostaining was not localized to neuronal cell bodies in our study. The dorsal motor nucleus of the vagus is the primary site of prion entry into the brain stem following oral prion ingestion and in natural cases of scrapie and CWD (2,7,38,47,52). In the present study, PrP Sc localization to the X nucleus may have been due to the accumulation of PrP Sc in the dendrites of motoneurons of the XII nucleus that extend into the X nucleus, since a previous study reported extranuclear dendritic projections from the XII nucleus (1). Several brain cell groups project to both the X and XII nuclei, including the areas of the reticular formation and the nucleus of the solitary tract, both of which contain PrP Sc deposits during the early stages of brain neuroinvasion following oral scrapie ingestion (7) and i.t. inoculation of HY TME. These patterns of PrP Sc overlap indicate that examining the brain stem pathology and the distribution of PrP Sc in the brains of ruminants and cervids naturally infected with prion diseases is an unreliable approach for determining the route of neuroinvasion. One study reports that intraneuronal PrP Sc deposition in the XII nucleus was greater than that in the X nucleus in sheep after experimental oral BSE agent (hereinafter referred to as BSE) ingestion even though stronger PrP Sc deposition is expected in the X nucleus if neuroinvasion is via the vagus nerve (27).
A second potential route of spread for HY TME from the tongue to the brain stem is by the sensory pathways, which include the general somatic afferents (CN V and IX) and specialized afferents (CN VII and IX). In this scenario, retrograde axonal transport within these cranial nerves followed by transsynaptic spread to brain stem nuclei would result in initial PrP Sc deposition in the spinal trigeminal nucleus and the nucleus of the solitary tract. Since PrP Sc deposition in these locations occurred after PrP Sc was found in the XII nucleus following i.t. inoculation, and the spinal trigeminal and solitary tract nuclei are known to project to the XII nucleus, axonal transport of HY TME via these sensory nerves does not appear to be the primary route of neuroinvasion following HY TME inoculation of the lingual muscles. The accumulation of PrP Sc in the hamster tongue following i.c. inoculation of two hamster-adapted TME strains and four hamster-adapted scrapie strains indicates that the spread of prions to the tongue may be a common event in prion diseases. The detection of PrP Sc in axons of the tongue after i.c. inoculation suggests that one possible route involved in the establishment of tongue infection is axonal transport of HY TME from the brain to the tongue. In this case, TME transport may be via the motor efferent or sensory afferent pathways of the tongue. Prion infection of the XII nucleus, the spinal trigeminal nucleus, or the nucleus of the solitary tract would be necessary for prions to have access to and be transported within the tongue-associated cranial nerves. In cases of both natural and experimental oral infection of ruminants with scrapie and BSE, as well as in infection of deer with the CWD agent, there is evidence for the infection of the tongue-associated brain stem nuclei (27,48,56,57). Experimental oral ingestion of scrapie and BSE in sheep results in PrP Sc deposition in the XII nucleus, and the relative amount of PrP Sc deposition was greater in the XII nucleus than in the X nucleus in BSE-infected sheep but not in scrapie-infected sheep (27). The latter observation may indicate that CN XII has a more prominent role than the vagus nerve in neuroinvasion following oral BSE infection of sheep. In natural cases of infection with BSE, PrP Sc deposition has also been reported to occur in the XII nucleus and spongiform lesions are found in the nucleus of the solitary tract (56,57). The spongiform lesion distribution in the brain stems of animals with BSE is remarkably uniform, and a high lesion score is found for the spinal trigeminal nucleus (56,57). Furthermore, BSE infectivity is present in the trigeminal ganglia of cattle (11), which suggests that BSE can spread within axons from the spinal trigeminal nucleus to the trigeminal ganglion or, perhaps, in the reverse direction. In either case, BSE transport to the tongue may proceed via the general somatic afferents of the trigeminal ganglion. There are no reports describing the presence or absence of PrP Sc deposition in the tongues of cattle with BSE. Inoculation of mice with a tongue homogenate from cattle with BSE did not result in detectable prion infectivity, but the mouse prion bioassay cannot detect levels of BSE below 10^4.1 LD50 per g of tissue (17,55).
The findings of the present study, and the ability of BSE to target brain stem regions that are synaptically connected to the tongue, indicate that the Specified Risk Material Regulations (15), which do not completely exclude tongue from human consumption, need to be reevaluated in order to minimize human exposure to BSE and other prion diseases through ingestion of food products containing tongue.
added: 2017-04-01T10:17:23.996Z | created: 2003-01-01T00:00:00.000 | metadata:
{
"year": 2003,
"sha1": "49884724e2fb53c39a4eacd79fb1a8c2cb160895",
"oa_license": "CCBY",
"oa_url": "https://jvi.asm.org/content/77/1/583.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "ASMUSA",
"pdf_hash": "3efbae68a9d94874af866a3266963c53383b0804",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
id: 54920654 | source: pes2o/s2orc | version: v3-fos-license
Sexual Webs Model for the Examination of Unsafe Sexual Behaviors and the Spread of Sexually Transmitted Diseases Including HIV/AIDS
Unsafe sex is the second most important risk factor for disability and deaths in the poorest countries and the ninth most important in developed countries. Globally, 30.8 million adults are living with HIV/AIDS and 340 million people are infected annually with sexually transmitted diseases. Unwanted pregnancies and sexually transmitted diseases including HIV/AIDS have been inexorably linked to sex, yet there is no health behavior model focusing squarely on sexual attributes to provide an analytical framework for the examination of unsafe sexual behaviors and the spread of sexually transmitted diseases including HIV/AIDS. This hinders the understanding of the roles of sexual attributes and contextual factors in influencing unsafe sex and the spread of related infections. The 'Sexual Webs model' has been constructed from individuals' sexual attributes and their levels of entanglement in the "sexual networks" known as "sexual webs", for the examination of contextual issues influencing unsafe sexual behavior and the spread of sexually transmitted diseases including HIV/AIDS. Published qualitative research articles on sexual behaviors and health behavior models were selected from the internet using Google and Google Scholar searches. The research findings were synthesized using meta-ethnographic analysis. Research endeavors using the postulates of this model would provide better insight into the contextual issues influencing unsafe sexual behavior for policy formulation and program interventions to promote safe sexual practices.
Introduction
Unsafe sex is the second most important risk factor for disability and deaths in the poorest countries and the ninth most important in developed countries (Ezzati et al. 2002). Globally, 30.8 million adults are living with HIV/AIDS (WHO 2009) and 340 million people are infected annually with sexually transmitted diseases (WHO 2001). The yearly number of women with unwanted or unintended pregnancies is estimated at 80 million; 45 million of these end in abortion, of which 19 million are unsafe, and about 68,000 women die from complications of unsafe abortion (WHO 2004a & b). Unwanted pregnancies and sexually transmitted diseases including HIV/AIDS have been inexorably linked to sex (Cares and Stones 1992), yet there is no health behavior model focusing squarely on sexual attributes to provide an analytical framework for the examination of unsafe sexual behaviors. This hinders the understanding of the roles of sexual attributes and contextual factors in influencing unsafe sex and the spread of related infections. The 'Sexual Webs model' has been constructed from individuals' sexual attributes and their levels of entanglement in the "sexual networks" known as "sexual webs", for the examination of contextual issues influencing unsafe sexual behavior and the spread of sexually transmitted diseases including HIV/AIDS. This model provides postulates that would enhance the quality of research findings on sexual behaviors for informed social policies to stem unwanted pregnancies and the spread of sexually transmitted diseases including HIV/AIDS. It is better than the previous models for the examination of sexual behaviors.
The models used to examine health risk behavior, and on which most interventions are based, have focused on psychosocial and environmental factors to describe objective factual happenings and behaviors under volitional control (Bandura 1986; Becker and Maiman 1975; Carael et al. 1997; Fisher and Fisher 1992; Howard and McCabe 1990; Proschaska and Velicer 1997; Rogers 1975; Sutton 1997; Fishbein et al. 1991). They have ignored socio-cultural and other contextual factors (sexual attributes and 'sexual webs') that influence sexual behavior. Consequently, almost all the studies focusing on the impact of socio-cultural, economic and demographic factors on sexual behavior did not examine the interaction of these factors with sexual attributes and sexual webs to understand contextual issues surrounding sexual behaviors (Dunkle et al. 2004; Gregson et al. 1998; Pulerwitz et al. 2000; Soler et al. 2000; Simon and Paxton 2004). A health behavior model that overcomes the limitations of the previous ones is required for research and program interventions to reduce unsafe sex and the spread of sexually transmitted infections including HIV/AIDS.
A theory can be defined as a systematic way of understanding events or situations. It consists of a set of concepts, definitions and propositions that explain or predict these events or situations by illustrating the relationships between them (US National Cancer Institute 2005). Models themselves are not the facts but miniature representations of facts which illuminate the path of the researcher in search of these realities. A model is broader than a theory: it consists of several theories brought together to explain a phenomenon or group of phenomena. Glantz et al. (1997), in their review of articles published between 1992 and 1994 in health education, medicine, and behavioral science that used health behavior change models as an analytical framework, observed that the most commonly utilized models were the Health Belief Model (Becker and Maiman 1975; Janz and Becker 1984; Rosenstock 1974), the Theory of Reasoned Action/Planned Behavior (Montano et al. 1997; Ajzen and Fishbein 1980), Social Cognitive Theory (Bandura 1986), and the Transtheoretical Model (Proschaska et al. 1994; Proschaska and Velicer 1997, cited in Redding et al. 2000). Oluwale (2005) proposes the convergence of Social Learning, Diffusion of Innovation and Social Network models for AIDS risk reduction in Sub-Saharan Africa, while Carael et al. (1997) and Sweat and Denison (1995) provide a Social Ecological model for health promotion.
Strengths and Limitations of Health Behavior Models
A simple overview of the models shows that the Health Belief Model, the Theory of Reasoned Action/Planned Behavior and the Transtheoretical Model dwell more on psycho-social factors at the individual level to predict health risk behavior, behavior change and maintenance of safe behavior. Prominent concepts in the Health Belief Model are perceived susceptibility, perceived severity, perceived benefits, perceived barriers, cues to action and self-efficacy. The Theory of Reasoned Action/Planned Behavior emphasizes behavioral intention, attitude, subjective norms and perceived behavioral control. The Transtheoretical Model provides the stages of intentional behavior change, which form a process from initiation of change to the point where change has occurred. Concepts associated with this theory are pre-contemplation, contemplation, preparation, action, maintenance, pros, cons, confidence and temptation. Other concepts are consciousness raising, dramatic relief, self-liberation, helping relationships, counter-conditioning, reinforcement management, stimulus control and social liberation (Redding 2000). The Social Cognitive Theory, the Convergence of behavior change models and the Ecological model for health promotion recognize the active role of environmental factors in the behavior of individuals. The key postulate of Social Cognitive Theory is reciprocal determinism, which is the interaction between the individual, his or her action and the environment. The Convergence model links Social Learning, Diffusion of Innovation and Social Network theories. It emphasizes that social norms are best understood and influenced at the social network level within the existing chains of communication and natural flow of information. The Social Ecological model for health promotion identifies intrapersonal and interpersonal factors, institutional factors, community factors and public policy at the local, state and national levels as influences on the behavior of the individual. Almost all program interventions to stem unsafe sexual behaviors are explicitly or implicitly driven by theory. A review of theory-driven interventions across the globe indicated that interventions emphasizing intrapersonal and interpersonal factors, providing skills-acquisition training and attempting to modify social norms are more effective at reducing risk behavior among participants (Diclemente and Wingood 1995). However, effectiveness differed between target populations and types of intervention. Interventions targeting sex workers were the most likely to show increased condom use and reduced incidence of STDs and unprotected sex (9 out of 10 studies). The effectiveness for other groups at risk was more varied: 13 out of 18 studies among women of African-American or Latino descent were effective; 3 out of 10 among injecting drug users; 1 out of 3 among partners of injecting drug users; 2 out of 3 among STD clinic patients; 4 out of 7 among US college students; and 6 out of 14 among mixed-gender community groups (Ickovics et al. 1998).
Despite the reported success of intervention programs among some groups at risk, other groups show little compliance with behavior change initiatives. Auerbach et al. (1994) state that most of the models are based on behaviors that are under intentional and volitional control, ignoring the fact that sexual behavior involves two people. It involves impulse and is influenced by socio-cultural, contextual, personal and subconscious factors that may be difficult to change. The influence of alcohol and drugs on sexual behavior underlines the importance of understanding the contextual issues surrounding it. Some intervention programs to change risky sexual behavior produced a null effect, which again points to the importance of understanding the relationships between context, population, approach and theoretical background. Branson et al. (1996) observed no impact among inner-city African American men and STD clinic patients in the USA. A randomized controlled trial among STD patients in the UK also produced a null effect; the intervention was guided by Social Cognitive Theory and the results showed only a mild difference in self-reported behavior change. James et al. (1996; 1998) suggested that community and individual interventions should address the environment in which risk behavior occurs. It is clear that program interventions aiming at reducing unsafe sexual behavior should address the individual, social, cultural and economic differences which the previous health models have ignored. A theoretical framework with postulates that measure these differences is required for research into contextual issues influencing sexual behaviors.
Changes in Sexual Behavior
Studies have shown that sexual behaviors have changed due to secular and non-secular factors in many countries across the globe (Wellings et al. 2006). Attitudes to sexual behavior have changed in response to socio-economic factors (poverty, education, and employment); demographic factors (age structure of the population, timing of marriage, mobility and migration, seasonal labor, rural-urban movement); and social disruption due to war and political instability (Mufune 2003; Zhen et al. 2001). The phenomenon of transporting pornographic images from more sexually liberal societies to conservative ones through the internet and other means of communication has impacted greatly on the social norms of those societies (Cameron et al. 2005; Simon et al. 2004). Policies and legislation governing health care systems and public health strategies have also wrought changes in attitudes to sex in many countries (Parker et al. 2000). The median age at first intercourse for women has fallen to about 15 years in countries of West Africa, East Africa, Central Africa and South Asia, with increased levels of premarital sex (Wellings et al. 2006). Early initiation into sex is less likely to be protected against unplanned pregnancy and infection and is associated with a larger number of sexual partners (Genuis and Genuis 2004; Giesecke et al. 1992; Harrison et al. 2005).
Where contraception is practiced by sexual partners, it can be an antidote to both unwanted pregnancies and sexually transmitted diseases including HIV/AIDS. Parker (2001) opined that the increase in unwanted pregnancies and sexually transmitted diseases indicates the gap between efforts to improve safe sexual practices and a reality shaped by structural factors. WHO (2010) reports that in the year 2008 there were 2.7 million incident cases of HIV/AIDS and 2 million HIV/AIDS-related deaths worldwide. The global annual estimates of 80 million unwanted pregnancies and 68,000 maternal deaths from complications of unsafe abortion (WHO 2004a & b), and the low contraceptive prevalence in the African (23.7%), Eastern Mediterranean (42.8%) and other regions of the world (WHO 2010), call for broad-based renewed efforts to further understand the contextual issues surrounding unsafe sexual behaviors. To achieve this, we have constructed the Sexual Webs model for the examination of unsafe sexual behaviors and the spread of STDs including HIV/AIDS.
Theoretical Conception
The theoretical conception of this research is that sexual behavior, especially unsafe sex, results in unwanted pregnancies and sexually transmitted diseases, including HIV. Although there are contending opinions about which sexual acts constitute safe or unsafe sexual behavior, we take the position that sex is unsafe once its outcomes (pregnancy or sexually transmitted diseases, including HIV/AIDS) run against the initial motives of the participants. Unwanted pregnancies, STDs and HIV/AIDS are linked to sexual behaviors; therefore, a model that focuses on sexual attributes and sexual webs (a form of sexual network with shared beliefs and peculiar sexual practices) would provide better insight into the contextual issues surrounding unsafe sexual behaviors.
Methods
A general search for articles was conducted through the Internet using Google Search and Google Scholar. The search used phrases such as "theories of behavior change"; "theories of sexual behavior"; "perception of AIDS and condom use"; "unsafe sex practices"; "HIV prevention"; "commercial sex workers"; "sexual behavior and sexually transmitted diseases"; "risk health behavior and HIV/AIDS"; "contraception and sexually transmitted diseases"; "programs for risk sexual behavior change"; and "determinants of contraceptive method choice". Scientific articles that met our research interest were selected from journals in public health, the social sciences and health education, published between 1974 and 2010. The articles on models of health behavior were reviewed and some of their concepts were incorporated into the synthesis of other research findings using meta-ethnographic analysis (Atkin et al. 2008; Barnett-Page and Thomas 2009; Britten et al. 2002; Campbell et al. 2003; Noblit and Hare 1988). This synthesis led to the construction of the Sexual Webs Model. In all, 117 articles were obtained; articles that did not meet our criteria and research interest were eliminated. Qualitative papers with clear research question(s), methods and findings drawn logically from the data were selected. In selecting the papers, the guidelines for assessing qualitative research suggested by Atkin et al. (2008) and Campbell et al. (2003) were followed; however, the fallacy of allowing the tail to wag the dog (Barbour 2001) was avoided. The articles selected had study sites in Australia, Africa, Europe and America. Finally, only ten were synthesized for this work.
Results
The data are extracted from qualitative research findings on sexual behaviors. These articles were published in public health, social science and health education research journals. The findings of the various authors can be construed as the exhibition of the sexual attributes of individuals. These attributes are sexual capacity, sexual motivation and sexual performance (Kinsey et al. 1948; Kinsey et al. 1953). The act of engaging in sex brings individuals into sexual relationships. The different sexual relationships or sexual networks are conceptualized in this work as 'sexual webs'. The sexual attributes are defined below.
Sexual Capacity
Sexual capacity refers to the full range of demographic, family, socioeconomic, community and global factors that influence an individual's ability to negotiate and perform sex.
Sexual Motivation
Sexual motivation refers to the expected benefits, or anything else, that encourages individuals to engage in sex. The ways individuals intend to perform sex and obtain the expected benefits are part of motivation.
Sexual Performance
Sexual performance refers to what individuals actually do to enhance sex or do during sexual encounters.
Sexual Webs
Sexual webs refer to the different types of sexual relations or sexual networks. The terms of agreement and beliefs about sex, together with the characteristics and sexual activities of the partners, may define a sexual web. Terms of agreement are expressed implicitly or overtly and constitute rituals before or during sex (beliefs, gifts, drug and/or alcohol use, romance or foreplay, etc.). Intergenerational sexual relations; sexual relations among drug and/or alcohol users; sexual relations involving private and brothel sex workers; secret sexual relations involving married individuals, widows and widowers; sexual relations involving unemployed or employed single individuals; and sexual relations among adolescents and youths may define different sexual webs. Instances where one sexual partner is put off by the other's sexual debut or recent second encounter may be an indication that they belong to different sexual webs.
Tables 2a and 2b. Synthesis of data

The following concepts are important for the description of the characteristics of, and relationships among, sexual webs:
Open or Infinite Sexual Web
This is a sexual web with so many individuals that it is impossible for all of them to know one another. Sexual relations involving commercial sex workers are a good example of this type of web. A migrant who starts new sexual relations at his or her destination may extend this web to the new location.
Closed or Finite Sexual Web
This refers to a web with few individuals, all of whom know each other. An example may be a rich man with his wives and concubine.
Positive Sexual Web
This is a web in which at least one member is infected with HIV/AIDS and/or a sexually transmitted disease, so that others will soon be infected as well. A community with many positive sexual webs will experience a rapid spread of HIV/AIDS and/or sexually transmitted diseases.
Negative Sexual Web
This is a web in which none of the members is infected with HIV/AIDS or a sexually transmitted disease.
Mixed Sexual Web
This is a sexual web whose members exhibit characteristics and sexual activities consonant with two or more of the other sexual webs identified in the community.
Exclusive Sexual Partners
This refers to a sexual relationship between two or three individuals who stay together and monitor each other carefully to avoid the admission of another partner.
Transitivity Sexual Partner
By analogy with the law of transitivity (if A relates to B, B to C, and C to D, then A relates to D), there need be no direct sex between partners A and D, yet A can infect D with HIV/AIDS and other sexually transmitted diseases (see Figure 2). If female A1 is infected with HIV/AIDS, she will infect male A, and male A will infect female A2. Female A2 will infect male B, and male B will then infect females B1 and B2. Female A1 is thus a transitivity sexual partner to male B (female A1 to male A to female A2 to male B), while females B1 and B2 are transitivity sexual partners to male A (female B1 or B2 to male B to female A2 to male A). The same reasoning applies to the spread of other sexually transmitted diseases, and it also holds among same-sex partners (man to man, or woman to woman).
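The transmission chain described above is, in effect, a reachability question on a partner network: anyone linked to an infected individual through a chain of partnerships is a transitivity partner at risk of infection. The sketch below illustrates that idea in Python; the partner graph and the names in it (male_A, female_A1, and so on) are illustrative assumptions mirroring Figure 2, not data from the studies synthesized here.

```python
from collections import deque

# Hypothetical partner network mirroring Figure 2: an undirected graph in which
# an edge means that two individuals have had sex with each other.
partners = {
    "female_A1": {"male_A"},
    "male_A": {"female_A1", "female_A2"},
    "female_A2": {"male_A", "male_B"},
    "male_B": {"female_A2", "female_B1", "female_B2"},
    "female_B1": {"male_B"},
    "female_B2": {"male_B"},
}

def transitivity_partners(graph, source):
    """Return everyone reachable from `source` through a chain of partnerships
    (breadth-first search), i.e. all direct and transitivity sexual partners."""
    seen, queue = {source}, deque([source])
    while queue:
        person = queue.popleft()
        for partner in graph.get(person, ()):
            if partner not in seen:
                seen.add(partner)
                queue.append(partner)
    seen.discard(source)
    return seen

# If female A1 is infected, every member of her web is eventually at risk,
# including male B and females B1 and B2, whom she never meets directly.
print(transitivity_partners(partners, "female_A1"))
```

In this toy web the function returns all five other members, which is the point of the transitivity concept: risk propagates well beyond an individual's direct partners.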
Conclusion
The sexual webs model provides a better analytical framework for the examination of unsafe sexual behaviors than previous health behavior models. Unwanted pregnancies, STDs and HIV/AIDS are linked to unsafe sexual behavior; therefore, a model focusing on sexual attributes and sexual webs provides better insight into the contextual issues (gender, masculinity, sexual pleasure, procreation, etc.) surrounding unsafe sexual behavior.
About 30% to 60% of married men and 20% to 50% of married women engage in at least one extramarital sexual encounter (Sponaugle 1989; Vangelishi and Gertenberger 2004); how do love, sexual satisfaction, sexual pleasure, procreation and other sexual attributes interact with types of sexual webs to influence unsafe sexual behavior in those clandestine sexual relations? Understanding the dynamics of unsafe sexual practices within the context of sexual attributes and sexual webs would be rewarding for both policy and program interventions.
The increase in unwanted pregnancies and sexually transmitted diseases, including HIV/AIDS, in some parts of the world indicates the gap between efforts to improve safe sexual practices and a reality shaped by structural factors (Parker 2001). Attempts to further understand the influence of these structural factors on unsafe sexual behavior using the sexual webs model would be of immense benefit.
Limitations
This model may not incorporate all the variables required for understanding unsafe sexual behavior; however, it is an improvement over previous models, which considered sexual behaviors only partially. Researchers may identify other variants based on culture or location and include them in the model as best suits their research interest. Some variables may not be relevant in certain contexts; such variables can also be dropped.
Figure 1. Sexual webs analytical framework for the examination of unsafe sexual behaviors and the spread of STDs and HIV/AIDS.

Four constructs, sexual capacity, sexual motivation, sexual performance and sexual webs (Figure 1), are critical in the analysis of the spread of HIV/AIDS and sexually transmitted diseases. Masculinity, poverty, government policies, age, support from parents, the environment (schools and home), gender issues, friends, religious beliefs, and perceptions of a place being free from AIDS all influence sexual capacity; sexual capacity in turn affects sexual motivation and sexual performance. Sexual motivation influences sexual performance and vice versa: the achievements obtained through sexual performance (for example money, food, drugs, pleasure or intimacy) motivate the individual to engage in further sex to reach yet unattained goals (for example, material possessions for future subsistence, or the prospect of marriage). Sexual motivation and sexual performance entangle individuals in sexual webs; thus the entire set of sexual attributes of an individual links him or her to sexual webs. The things individuals actually do that constitute 'good' sexual performance and produce better results (achievements) are difficult to discard if the individuals still desire similar positive results. If unprotected sex or prolonged drug-induced sex constitutes good performance and better results, it will be difficult to discard unless the specific needs behind such performance are addressed. Some sex workers in need of love, romance and intimacy became infected with HIV and sexually transmitted diseases through unprotected sex with their private partners (Warr and Pyett 1999). Limited penetration before condom use, and condom failure due to breakage and spillage, also facilitate the spread of HIV/AIDS and other sexually transmitted diseases (Quirk et al. 1999). Understanding the contextual issues that influence unsafe sexual behavior among different sexual webs is therefore critical for health risk behavior change initiatives.
Figure 2. Illustration of a transitivity sexual partner in a heterosexual relation.

In Figure 2, male B has a sexual relation outside his two partners with female A2, as shown by the thin black arrow; the resulting transmission chain (from female A1 through male A and female A2 to male B, and back from females B1 and B2 to male A) is described in the Transitivity Sexual Partner section above and applies equally to other sexually transmitted diseases and to same-sex partnerships.
Table 1. Articles from which the data were obtained.
Compressive Strength of Compacted Clay-Sand Mixes
The use of sand to improve the strength of natural clays provides a viable alternative for civil infrastructure construction involving earthwork. The main objective of this note was to investigate the compressive strength of compacted clay-sand mixes. A natural clay of high plasticity was mixed with 20% and 40% sand (SP), and the compaction and strength properties of the mixes were determined. Results indicated that the investigated materials exhibited a brittle behaviour on the dry side of optimum and a ductile behaviour on the wet side of optimum. For each material, the compressive strength increased with an increase in density following a power law function. Conversely, the compressive strength increased with decreasing water content of the material following a similar function. Finally, the compressive strength decreased with an increase in sand content because of increased material heterogeneity and loss of sand grains from the sides during shearing.
Introduction
Civil infrastructure involving earthworks, such as pavements, pipelines, and buildings, is severely distressed in Regina, Saskatchewan, due to the expansive nature of the native soil [1]. Chemical admixtures such as lime [2] and engineering techniques such as nailing [3] have been attempted in various projects within the city. The low success rate of these methods is attributed to the harsh local climate, characterized by aridity and freezing temperatures, and to the interaction of the active clay with the additives. The use of inert materials provides an environmentally friendly option for improving the shear strength of indigenous soils while still being cost-effective to the consumer.
The behaviour of compacted clay-sand mixes depends on the relative amounts of the constituents, the compaction characteristics, and the test conditions. Shafiee et al. [4] reported that the undrained shear strength increases with increasing sand content. Likewise, Vallejo and Mawby [5] demonstrated that the shear strength is governed by the granular phase when the sand content is greater than 75% and by the cohesive phase when the clay content is greater than 40%. The predominance of the clay matrix when the clay content is more than 40% was confirmed by Wood and Kumar [6]. Likewise, Prakasha and Chandrasekaran [7] concluded that the inclusion of sand grains in a clay matrix leads to an increase in pore pressure, resulting in a decrease in undrained shear strength. Research on sand-bentonite mixes concludes that the shear strength of these materials increases with decreasing water content [8], increasing dry density (Blatz et al. [9]), and increasing confining pressure [10].
The triaxial shear test is frequently used because of the laterally restrained soil conditions in most geotechnical applications. Consequently, comparable data on the unconfined compressive strength of clay-sand mixes are sparsely reported in the literature. Nonetheless, the unconfined compression test is useful for laterally exposed conditions and applicable to fine-grained soils under undrained loading [11]; it requires a short testing time, allows straightforward data analysis, and is used in the design of road embankments, shallow footings, and retaining walls.
The main objective of this note was to investigate the compressive strength of compacted clay-sand mixes. The clay was mixed with 20% and 40% sand, and the compaction and unconfined compressive strength characteristics of the mixes were determined.
Materials and Methods
The natural clay (NC) and river sand (RS) were retrieved from local test pits in and around Regina. The materials were obtained using the ASTM Standard Practice for Soil Investigation and Sampling by Auger Borings (D1452-09) and were transported to the Geotechnical Testing Laboratory at the University of Regina as per the ASTM Standard Practice for Preserving and Transporting Soil Samples (D4220-95(07)). Clay-sand mixes were prepared based on dry weights of the materials: CS-I (80% NC and 20% RS) and CS-II (60% NC and 40% RS).
The geotechnical index properties were determined for preliminary soil assessment according to standard ASTM test methods as follows: (i) specific gravity by the Standard Test Method for Specific Gravity of Soil Solids by Water Pycnometer (D854-10); (ii) particle-size analysis by the Standard Test Method for Particle-Size Analysis of Soils (D6913-04(2009)); and (iii) liquid limit, plastic limit, and plasticity index by the Standard Test Method for Liquid Limit, Plastic Limit, and Plasticity Index of Soils (D4318-10). The clay and the sand were classified according to the Standard Practice for Classification of Soils for Engineering Purposes (Unified Soil Classification System (USCS)) (D2487-11).
The clay-sand mixes were prepared on a dry mass basis and predetermined amounts of tap water were added. The samples were put in sealed plastic bags and left overnight to ensure uniform moisture distribution. Standard Proctor tests were carried out as per the Standard Test Method for Laboratory Compaction Characteristics of Soil Using Standard Effort (ASTM D698-12) by compacting the samples in a mould in three layers with 25 blows per layer. A 2.5 kg hammer was dropped from a height of 300 mm onto each layer. The water content was determined by the Standard Test Method for Laboratory Determination of Water (Moisture) Content of Soil and Rock by Mass (D2216-10).
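For reference, compaction points such as those plotted later are conventionally reduced from the measured quantities as follows: the water content is the mass of water lost on oven drying divided by the dry mass, and the dry density is the bulk density divided by (1 + w). The short sketch below shows this standard reduction; the numerical values are purely illustrative and are not measurements from this study.

```python
def water_content(mass_wet: float, mass_dry: float) -> float:
    """Gravimetric water content w = (m_wet - m_dry) / m_dry, as a decimal."""
    return (mass_wet - mass_dry) / mass_dry

def dry_density(mass_wet: float, volume_cm3: float, w: float) -> float:
    """Dry density (g/cm3) = bulk density / (1 + w)."""
    bulk_density = mass_wet / volume_cm3
    return bulk_density / (1.0 + w)

# Illustrative values only: compacted soil mass in a standard Proctor mould
# (volume roughly 944 cm3) and the corresponding oven-dried mass.
w = water_content(mass_wet=1950.0, mass_dry=1560.0)           # about 0.25
rho_d = dry_density(mass_wet=1950.0, volume_cm3=944.0, w=w)   # about 1.65 g/cm3
print(f"w = {w:.2f}, dry density = {rho_d:.2f} g/cm3")
```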
The unconfined compressive strength was determined according to the ASTM Standard Test Method for Unconfined Compressive Strength of Cohesive Soil (D2166-13). The compaction specimens were extracted from the moulds using a hollow steel tube and trimmed to 50 mm diameter and 110 mm height. The height-to-diameter ratio was 2.2, which is within the range (2.0 to 2.5) specified by ASTM. The specimen dimensions were measured with a Vernier caliper at three different locations. Strain was applied at a rate of 0.5 mm/min and the test was stopped when the load decreased with increasing strain or when 15% strain was reached. The data were digitally recorded and stored on a portable computer.
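The axial stress discussed in the next section is obtained from the recorded load-displacement data. The reduction sketched below uses the common corrected-area assumption A = A0/(1 - strain); the note does not spell out its own reduction, so this, like the sample numbers, is an illustrative assumption rather than the authors' exact procedure.

```python
import numpy as np

def ucs_reduction(load_kN, displacement_mm, diameter_mm=50.0, height_mm=110.0):
    """Axial strain and axial stress (kPa) from an unconfined compression test,
    using the corrected area A = A0 / (1 - strain) to account for bulging."""
    load = np.asarray(load_kN, dtype=float)
    disp = np.asarray(displacement_mm, dtype=float)
    area0 = np.pi * (diameter_mm / 1000.0) ** 2 / 4.0   # initial area, m^2
    strain = disp / height_mm                            # axial strain (decimal)
    corrected_area = area0 / (1.0 - strain)
    stress_kpa = load / corrected_area                   # kN/m^2 = kPa
    return strain, stress_kpa

# Illustrative load-displacement record (not measurements from this study):
strain, stress = ucs_reduction(load_kN=[0.0, 2.0, 4.0, 5.5, 5.0],
                               displacement_mm=[0.0, 1.0, 2.0, 3.5, 5.0])
print(f"peak axial stress = {stress.max():.0f} kPa at strain {strain[stress.argmax()]:.3f}")
```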
Results and Discussions
Table 1 gives the geotechnical index properties of the investigated materials. The specific gravity of NC was found to be 2.75, which is typical of sedimentary clays (ranging between 2.4 and 2.95 [12]). The grain size distribution (Figure 1) showed 98% of the material finer than 0.075 mm and 64% finer than 0.002 mm. The liquid limit was 63% and the plastic limit was 28%, thereby indicating a moderate water adsorption capacity. The clay was classified as high plasticity clay (CH). In contrast, the specific gravity of RS was 2.65, which is typical for materials primarily composed of quartz. About 1% of the material was found to be finer than 0.075 mm (Figure 1). The coefficient of curvature (Cc) was 1.2 and the coefficient of uniformity (Cu) was 5.3. Overall, RS was classified as poorly graded sand (SP). Figure 2 presents the compaction curves for the investigated materials. The NC showed a maximum dry density of 1.52 g/cm3 at an optimum water content of 27%, which is close to the plastic limit (28%). Marinho and Oliveira [13] reported that for cohesive soils the optimum water content is within ±5% of the plastic limit. These data are similar to those reported by Azam and Chowdhury [14] for the same material. The corresponding values of maximum dry density and optimum water content were 1.60 g/cm3 at 22% for CS-I and 1.65 g/cm3 at 20% for CS-II. The upward and leftward shift of the curves with increased sand content (reduced clay content) is attributed to a decreased void ratio along with a lower water requirement to lubricate the large specific surface area of the clay particles [11]. Conversely, the decrease in maximum dry density with an increase in NC content is due to the increased void ratio of the clay phase and the high water demand for lubrication of the clay particles.
Figure 3 shows the stress-strain plots for the investigated materials. Up to the optimum water content (27%), NC exhibited distinct peak stresses (7900 kPa, 7200 kPa, 7150 kPa) at axial deformations ranging from 2 mm to 5 mm. All of these curves dropped sharply, exhibiting a brittle material behaviour devoid of any residual strength. On the wet side of optimum, NC exhibited ductile behaviour with peak stresses of 1300 kPa and 800 kPa at axial deformations ranging from 4 mm to 7 mm. On the dry side of optimum, large air-filled macropeds, which are aggregates of particles, can exhibit high strength, whereas on the wet side of optimum these macropeds soften [15]. Compacted soils on the dry side of optimum have two families of pores, micropores (intra-aggregate pores) and macropores (inter-aggregate pores), with a continuous air phase and a discontinuous water phase. On the wet side, the soil has a single family of pores, micropores; air is occluded and the water phase is continuous [16]. The brittleness on the dry side of optimum is mainly due to the flocculated clay structure (aggregates of particles), which makes it difficult for multiparticle assemblages to slide past each other, and to the relatively low amount of water (discontinuous water phase) available for lubrication. The ductile behaviour on the wet side of optimum is due to a dispersed clay structure (single particles or particle groups acting independently [17]) and to the greater lubrication offered by the continuous water phase, which allows individual particles to slide past each other and thereby generate strain before failure.
Samples CS-I and CS-II also exhibited a brittle response on the dry side of optimum and a ductile response on the wet side of optimum. This was attributed to the dominance of the clay fraction (>40% in both CS-I and CS-II) over the sand fraction, resulting in behaviour similar to NC but with reduced peak stresses owing to the reduced clay content. On the dry side of optimum, the peak stress and deformation for CS-I were 5600 kPa and 3.7 mm; the corresponding values on the wet side of optimum ranged from 2000 kPa to 300 kPa at deformations of 7 mm to 8 mm. Likewise, the peak stress and deformation for CS-II on the dry side of optimum were 4500 kPa and 3 mm, and the corresponding values on the wet side of optimum ranged from 2600 kPa to 50 kPa at deformations of 2 mm to 8 mm. Figure 3 further indicates that, at high water contents, the axial stress is virtually independent of axial strain for the CS-I and CS-II samples. This is attributed to the combined effect of the following: (i) strain softening associated with particle lubrication due to a continuous water phase at high sand contents; (ii) enhanced sample heterogeneity due to increased sand content; and (iii) loss of sand grains from the sample sides during shearing, which resulted in higher strains.
Figure 4 plots the compressive strength (half the peak axial stress) with respect to dry density (Figure 4(a)) and water content (Figure 4(b)). The compressive strength increased with an increase in dry density for each of the materials. NC exhibited the highest increase in compressive strength, followed by CS-I and then by CS-II. For NC, the compressive strength increased from 220 kPa at a dry density of 1.2 g/cm3 to 4000 kPa at 1.5 g/cm3. The compressive strength for CS-I increased from 150 kPa to 2800 kPa when the dry density increased from 1.3 g/cm3 to 1.6 g/cm3. Similarly, CS-II exhibited an increase in compressive strength from 30 kPa to 2200 kPa when the dry density increased from 1.3 g/cm3 to 1.6 g/cm3. In contrast, increasing the water content showed the reverse trend: NC exhibited the highest increase in compressive strength with a decrease in water content, followed by CS-I and then by CS-II. Overall, the data points exhibited scatter, especially for NC and CS-I at high compressive strength (dry side of the optimum water content). This is mainly due to the nonuniform moisture distribution associated with the low unsaturated hydraulic conductivity of the clay as well as dead ends and high tortuosity within these samples. Despite having the lowest maximum dry density, the clay exhibited the highest compressive strength. The higher degree of heterogeneity in the clay-sand mixes compared to the clay led to lower strength values because the failure plane had to pass through the weakest zone in the sample [18]. Furthermore, during shearing, sand grains fell out from the sides of the clay-sand test samples. This reduced the cross-sectional area of the samples available for carrying the applied load and, as such, reduced the strength.
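The power law relations noted above (between compressive strength and dry density, and between strength and water content) can be recovered from data points such as those in Figure 4 by a linear fit in log-log space. The snippet below is a generic sketch of that fitting step; the sample arrays are placeholders rather than the digitized values from this study.

```python
import numpy as np

def fit_power_law(x, y):
    """Fit y = a * x**b by linear least squares on log-transformed data."""
    slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(intercept), slope   # returns (a, b)

# Placeholder points: dry density (g/cm3) versus compressive strength (kPa).
density = np.array([1.2, 1.3, 1.4, 1.5])
strength = np.array([220.0, 700.0, 1800.0, 4000.0])

a, b = fit_power_law(density, strength)
print(f"strength ~ {a:.2f} * density^{b:.1f}")
```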
Summary and Conclusions
The unconfined compressive strength was determined for compacted samples of a natural clay of high plasticity and clay-sand mixes containing 20% and 40% sand (SP). All materials exhibited a brittle behaviour on the dry side of optimum and a ductile behaviour on the wet side of optimum. For each material, the compressive strength increased with an increase in density following a power law function. Conversely, the compressive strength increased with decreasing water content of the material following a similar function. Finally, the compressive strength decreased with an increase in sand content because of increased material heterogeneity and loss of sand grains from the sides during shearing.
Figure 1: Grain size distribution of the investigated materials.
Figure 3: Stress-strain behaviour of the investigated materials.
Figure 4: Compressive strength versus dry density and water content.
Beyond undocumented: Differences in the mental health of Latinx undocumented college students
Undocumented college students face several threats to their well-being and mental health. Different social locations, including whether students have Deferred Action for Childhood Arrival (DACA) status, students’ gender, and family factors may shape students’ ability to be well. How these factors work together to shape mental health outcomes among undocumented Latinx college students is not well understood. This study examines several factors (demographic, familial, immigration, and socioeconomic factors) associated with anxiety scores of undocumented Latinx college students who participated in the UndocuScholars Project national online survey in 2014. We observe three notable findings: (1) DACA recipients report heightened levels of anxiety, (2) women with DACA status report higher levels of anxiety compared to non-DACAmented undocumented college students and men with DACA, and (3) students whose families motivate them report lower levels of anxiety. Latinx undocumented college students are not a monolith; demographic, family, and socioeconomic factors matter.
Keywords: Latino · Undocumented immigrants · DACA · Mental health
Ever since September of 2017, when the Trump administration announced their intention to terminate the Deferred Action for Childhood Arrivals (DACA) program, DACA has been threatened time and time again. In July 2020, the US Supreme Court ruled against the Trump administration's decision to suspend DACA. After numerous legal battles, the DACA program has been fully reinstated (and is accepting new applications as well as renewals as of 7 December 2020). The legal battle for DACA, despite its reinstatement, continues (e.g., Texas and several other states filed a lawsuit against the federal government over the legality of DACA; this issue was heard by the Texas federal court in December 2020, but there was no immediate ruling). This series of political moments displays, with much clarity, the uncertain nature of DACA. It reminded recipients and their supporters that DACA was never a permanent way to protect undocumented youth and young adults. The wellbeing of DACA recipients was likely dampened by these political moments (Venkataramani et al. 2017;Enriquez et al. 2018;Hamilton et al. 2020), but to date, we lack a clear understanding of both individual-and family-level factors that protect or worsen the mental health of DACA and non-DACA recipients. In the present study, we examine this issue in a sample of Latinx 1 undocumented undergraduates using survey data collected in 2014. Our study examines mental health in a period during which DACA was arguably more "stable" than it is today. Our findings reveal that, even during 2014, anxiety levels of undocumented Latinx undergraduates with and without DACA were troubling and varied on the basis of demographic, familial, and socioeconomic factors.
Across the United States, states have passed laws that reduce financial barriers to attend college by implementing in-state tuition for, and/or providing financial aid to undocumented students (NCLS 2019). Each year, 65,000 undocumented youth graduate from high school nationwide, and about 5% to 10% pursue higher education (Campaign for College Opportunity 2018). More than 450,000 undocumented individuals are enrolled in higher education in the United States (New American Economy 2020). Latinx individuals constitute 46% of the undocumented undergraduate population, and among DACA-eligible undergraduates, 65% were Latinx in 2018 (New American Economy 2020).
Despite state policies that facilitate college access, several other factors influence whether Latinx undocumented college students thrive. Though there is limited national-level data about the anxiety levels of Latinx youth and undocumented college students, previous studies in smaller regions find troubling results. Immigrant Latinx youth in North Carolina are at higher risk of having anxiety (28%) compared with US-born youth (13-20%) (Potochnick and Perreira 2010). A report from the UndocuScholars Project survey, the online survey of undocumented college students used in this study, found that about 29% of men and 37% of women in the Undo-cuScholars Project survey reported anxiety levels above the clinical cutoff (Teranishi et al. 2015). Within undocumented individuals, DACA-eligible adults tend to report lower levels of psychological distress compared to non-DACA eligible immigrants (Venkataramani et al. 2017).
A combination of risk and protective factors shape the mental health profiles of Latinx youth (Potochnick and Perreira 2010). DACA is an especially intriguing factor. Patler and Laster Pirtle (2018) found that DACA recipients and non-DACA recipients worry about family deportation at comparable levels and that DACA recipients have slightly fewer worries about their own deportation. Enriquez and colleagues (2019) found that undocumented undergraduates attending colleges in the University of California system are extremely worried about DACA's future. DACA's uncertainty, the continued risk of family members' deportation, and the effects of anti-immigrant sentiment may undermine the mental well-being of all undocumented individuals (Dreby and Stutz 2012;Gonzales et al. 2013), as well as undermine the promise that DACA was previously thought to have had (Hamilton et al. 2020).
It is important to understand the dimensions that shape the mental health of Latinx undocumented students for several reasons. 60% of the approximately 125,000 undocumented students who graduate high school every year are Latinx (Zong and Batalova 2019), and one in twenty US-born children have an undocumented parent (Passel et al. 2018). A substantial number of young Latinx persons have ties to the undocumented community either through direct family and/or community ties (Vargas et al. 2017). Regardless of documentation status, Latinx identity is associated with stereotypes and tropes about the undocumented population. Because of this racialized illegality, membership in the Latinx and undocumented populations becomes conflated in public discourse and by the media (Menjívar 2021;Enriquez et al. 2019;García 2017). This homogenizes the Latinx experience. Thus, our study aims to challenge monolithic portrayals of the Latinx experience and of the undocumented experience (Enriquez et al. 2018(Enriquez et al. , 2019Valdez and Golash-Boza 2018). Lastly, mental health has implications for retention in higher education. One study examined Latinx college students' cognitive disruption pre-and post-Trump's election and found that when students were prompted to think about their familial obligations post-Trump, they displayed attentional disruption, indicating that their ability to focus on school was compromised (Vasquez-Salgado et al. 2018). Latinx mental health is a matter of retention in higher education, which influences social mobility (Wyatt et al. 2017).
No study to date has examined how demographic, family, immigration, and socioeconomic factors work together to create distinct mental health experiences among undocumented Latinx undergraduates with and without DACA. Our findings add unique support for the argument that DACA does not eliminate angst related to immigration (Hamilton et al. 2020). We rely on unique data from a national online survey with measures on worries about family deportation and DACA status. We argue that the mental health of undocumented undergraduates is complex and that demographic, family, immigration, and socioeconomic factors create diverging experiences.
Mental health and DACAmented status
The Obama administration implemented the DACA program in 2012 as an executive order. This program provided eligible undocumented individuals permission to legally work in the United States and deemed them low priority for deportation. Eligible individuals received a work authorization card and a social security number, and gained access to resources that could improve their social and economic incorporation: they could apply for driver's licenses, gain health insurance, develop credit, and apply for Advanced Parole (Zhou and Gonzales 2019;Gonzales et al. 2014). If granted Advanced Parole, DACA recipients may have the opportunity to deepen relationships with family in their home countries if traveling for humanitarian, emergency, or educational reasons (Ruth et al. 2019). In states such as Connecticut and Maryland, having DACA means having access to financial aid (NCLS 2019). More than 800,000 young adults have benefited from DACA (Zong and Batalova 2019). Individuals who did not meet all the criteria, however, could not benefit from the program. Because of the Trump administration's temporary suspension of DACA in September 2017 and other changes during 2020, the DACA program shortly changed from a two-year renewal work authorization to a one-year program. Since DACA's reinstatement in its original form on 7 December 2020, DACA-eligible individuals can now apply for two-year employment authorizations and Advanced Parole again.
DACA once held the promise of a brighter future for young undocumented adults, but it has had positive effects (Lee 2018;Lim 2018) as well as unintended effects (Hsin and Ortega 2018). DACA relieved individuals of stressors associated with the lack of a social security number, reduced feelings of shame as well as isolation (Patler and Laster Pirtle 2018), and increased individuals' sense of belonging (Siemons et al. 2017). But, it also prompted undergraduates in four-year colleges to make difficult choices between engaging in the labor market full time or leaving college (Hsin and Ortega 2018). DACA, thus, may have competing influences on mental health.
Stress process theory
Stress process theory is useful for understanding risk and protective factors influencing mental health (Pearlin 1989). This theory posits that social status influences exposure to stressors, defined as any event that "challenges the adaptive capabilities of people" (Pearlin 2010, p. 208). Stressors may be acute or long-term/chronic. One stressor may lead to others (i.e., low-income status may lead to having more family responsibilities). Protective factors represent the resources individuals have at their disposal to mitigate the harmful effects of stressors; these include social supports, coping mechanisms, and beliefs. This study focuses on one of the multiple dimensions of mental health that stressors impinge upon, self-reported levels of anxiety (Pearlin 2010).
The stress process framework is useful for understanding how long-term stressors, such as the threat of deportation, influence undocumented Latinx college students (Vargas et al. 2019;Dreby 2012). Some individuals may have more resources at their disposal to mitigate the impact of stressors on mental health. In this study we examine a unique combination of factors that protect or dampen mental health.
Family deportation worries
The risk of family deportation may increase anxiety among DACAmented and non-DACAmented college students. Although DACA beneficiaries are less concerned about their own deportation, they still worry about their family's deportation (Patler and Laster Pirtle 2018; Childs 2018; Castañeda and Melo 2014; Dreby 2015). As stated by Golash-Boza and Valdez, "the fact that their family members are not safe is never far from their minds" (2018, p. 546). Some family relationships and ties may worsen the mental health of undocumented individuals (Del Real 2018; Vargas et al. 2017).
The extent to which one worries about their family's deportation may influence whether DACAmented individuals have better mental health compared to individuals without DACA. If DACAmented students continue to heavily worry about their family's deportation, positive effects of DACA on mental health may disappear, making mental health levels between DACA and non-DACA recipients more similar.
Family as motivation and support
Latinx college students' perceptions about their families may matter for mental health. Emotional connection to one's family is associated with well-being among Latinx students (Gándara 1995;Hurtado et al. 1996;Rodriguez et al. 2003;Sánchez et al. 2005;Solberg and Villarreal 1997;Sy and Romero 2008).
Latinx undergraduates who are the first in their family to attend college are often proud about being a first-generation college student. One's family-based motivation to continue school promotes psychological well-being (Mount 2015). In promoting positive mental health outcomes, family-based motivation or ganas (a strong desire to overcome) may mitigate the effect of stressors on anxiety, especially stressors related to being undocumented (Allen et al. 2020;Punti 2018). An undocumented college student who is deeply motivated by her family to go to college may have better mental health than a student whose family does not represent a high motivational purpose, because family may be a positive force in the educational journeys of students of color (Yosso 2005). At the same time, strong family-driven desires to succeed may coexist with family demands that curtail educational journeys (Punti 2018).
Family has been relatively understudied in analyses of undocumented college students' anxiety. Previous studies on young Latinx undocumented undergraduates have focused primarily on social network supports outside of the home (such as peers in school and institutional gatekeepers) (Kam et al. 2020(Kam et al. , 2019Patler 2018). One study of Latina undocumented undergraduates in a rural town found that, in response to negative encounters with college personnel, undocumented college students feel isolated and do not feel supported by their college campus (Muñoz 2013). As a coping mechanism, students isolate themselves and keep silent because they are afraid to speak with counselors who may not understand them (Muñoz 2013). Undocumented students who want to minimize risk and keep their family's undocumented status a secret may choose to conceal their status to friends or teachers (Kam et al. 2019;Patler 2018). Using isolation and silence as coping mechanisms may make undocumented undergraduates rely on their families more if they find college campuses alienating (Muñoz 2013). If students are closer to their families because they feel they are around people who understand them, family may be an important source of support for students (Cobb et al. 2016). However, we know relatively less about how family-level sources of support may mitigate negative mental health outcomes.
Having positive perceptions about one's family may be protective for student mental health. Family members raise the academic aspirations of immigrant youth and are a source of motivation to continue in their educational journeys (Tienda 1995, 1998; Katsiaficas 2015; O'Neil et al. 2016; Portes 2010). Pérez and colleagues (2010) found that undocumented undergraduates make sense of their educational journeys by referring to how their parents encouraged them to pursue college. Latinx immigrant students are often motivated by their immigrant families because they want to "repay" them for all their sacrifices (Alcántara 2018; Jabbar et al. 2019).
In addition to motivation, family may provide tangible resources that promote Latinx undergraduates' mental health. Family may provide support ranging from financial help for school, rent, and bills to educational advice. One study found that first-generation Latinx college students in Texas received financial assistance for tuition, rent, or bills from their families (Jabbar et al. 2019).
Gender and mental health
Gender, which we define as a binary variable because of the data limitations, may matter for how one internalizes their immigration status because gender shapes how individuals report well-being. Previous regional studies have found that Latinx college students may be at heightened risk of anxiety symptoms compared with non-Latinx young adults in college (Zvolensky et al. 2019), and that there is variation by gender. For instance, one study found that the percent of Latinx undocumented college students in California with anxiety levels above the clinical cutoff levels is 25% for men and 35% for women (Suárez-Orozco and López Hernández 2020). To place these numbers in context, recent estimates suggest this figure is about 31% among the general college student population (American College Health Association 2020). In the broader population, the prevalence of any severity of anxiety is about 19% among women and 11.9% among men (Terlizzi and Villarroel 2020).
Beyond gender differences in reporting mental health outcomes, Longest and Thoits (2012) found that the number of risk factors that men need to experience to express high levels of distress are very high compared with those of women. At the same time, the deportation regime is gendered, as deportations disproportionately affect men of color (Golash-Boza 2015). If undocumented Latinx men are aware of this surveillance, they may experience heightened anxiety levels. Whether they report this on surveys is difficult to detect, however. We examine one aspect of mental health using self-reported anxiety, and we keep in mind that gender ideologies, expectations, and scripts influence reports of anxiety (Longest and Thoits 2012;Hill and Needham 2013).
Gender, family responsibilities, and mental health
Gender may shape Latinx college students' mental health through familial responsibilities. Gender ideology shapes the social experiences of undocumented migrants (Donato et al. 2017;Enriquez 2017), and familial responsibilities of children are gendered from an early age (Orellana 2001;Quiroz-Becerra 2013, p. 147). In addition to schoolwork, Latina immigrant children disproportionately help with cleaning, cooking, caretaking, and translating for kin (Valenzuela 1999;Orellana et al. 2003;Estrada and Hondagneu-Sotelo 2013, p. 145). Socioeconomic status shapes immigrant children's participation in the work of their parents (Estrada and Hondagneu-Sotelo 2013, p. 145). Among Latinx undocumented college students, gendered norms may create differences in the amount and type of familial responsibilities women and men engage in.
According to Estrada, children's "own labor contributions are what make it possible for their families to survive the structural economic and employment barriers they face in the lower sector of the economy" (Estrada 2019, p. 16). These contributions may persist in undocumented college students, but we know little about the mental health consequences of familial responsibilities among DACAmented and non-DACAmented Latinx undergraduates. Supporting family has varied consequences. Vallejo found that Mexican middle-class individuals in Southern California who grew up economically disadvantaged provided financial support to kin during college and through adulthood, but these responsibilities at times hindered social mobility (Vallejo 2012).
Limited studies focus on gender and family factors among undocumented young adults. Enriquez (2017) found that undocumented men stall their family formation because they feel they cannot fulfill expectations to be caretakers of their own family. Pressures and expectations remain gendered within the undocumented community, and social life is not gender-neutral for undocumented individuals.
If women follow hegemonic gender roles, then Latinas in the study might carry heavy burdens of worry and familial responsibilities in addition to college-related stressors. Women with DACA may provide for their families in ways they may not have been pressured to before having DACA. If women take on more labor for their families, they might have heightened anxiety scores. Undocumented Latino men may have gendered expectations to provide for their families regardless of DACA status; therefore, their expressions of mental health may not be as affected by DACA status.
Research questions of the present study
This study examines protective and risk factors that influence mental health, measured by self-reported anxiety, among Latinx undocumented undergraduates who participated in the UndocuScholars Project online survey. Our research questions are:

1. What is the relationship between protective/coping factors (DACA, family motivation, family support), risk factors (family deportation worries, gender, and familial responsibilities), and mental health?

a. Do DACA recipients have better mental health compared with non-DACA recipients? Do worries of family deportation reduce the mental health of DACA recipients?

b. Given the large literature on gender and family factors, do women with DACA face similar mental health profiles as men?
Data and variables in the analysis
This study relies on data from the UndocuScholars Project, a study created in response to the need for research on undocumented college students (Teranishi et al. 2015). Because of their stigmatization, invisibility, and lack of available data, undocumented undergraduates are a "hard-to-reach" population (Marpsat and Razafindratsima 2010). Thus, novel strategies were employed to reach the sample. The primary method of recruitment of participants for this project was through a web portal and a strong, multi-platform social media campaign (e.g., Facebook, Twitter, Instagram) that focused on providing information about undocumented issues and provided the opportunity to have student voices heard by participating in the study. Additionally, participants were recruited by partnering with organizations that worked with undocumented students. Lastly, participants were recruited through flyers, announcements at college events, and word of mouth.
To take the survey, participants must have met these eligibility criteria: (1) Be between eighteen and thirty years of age; (2) identify as an undocumented, DREAMer, or DACAmented college student; and (3) have been enrolled as an undergraduate student in college or university in the past year. Because of the nature of the topic, participants were assured anonymity. Consent for participation was a simple checkmark before starting the survey. For the online survey, we did not store any identifiable information such as IP addresses. A data control protocol was implemented to reduce the number of mischievous survey data-for example, we flagged responses: when only a limited amount of time (< 10 min) was spent completing the survey, when there was a mismatch between language spoken at home and country of origin, and cases in which qualitative responses were in verbatim to others. When any of these occurred, the flagged surveys were checked by a group of research team members. Once the survey was deemed valid, each participant received a $20 gift card in return for their participation. The survey was made in Qualtrics within the project's website. To ensure anonymity, once a survey response was deemed legitimate, participants were sent a link with their gift card, and their email, the only information linking their responses to a personal information item, was deleted from the server (Teranishi et al. 2015). The total survey sample was 909 students, of which 807 identified as Latinx. Once missingness of key variables was taken into account, 660 Latinx college students remained in the analytic sample.
This study brought together experts from academia and the community. The UndocuScholars Project team included a student advisory board and a community advisory board. The second author of this article was a part of the UndocuScholars Project team. The first author was not part of the data collection team. Both authors are Latinx women, and the first author is an immigrant with a similar background as that of the study participants. Both authors have worked to improve access to college among undocumented youth. Importantly, the UndocuScholars Project is a product of a larger collaborative effort led by Suárez-Orozco and Teranishi (see Teranishi et al. 2015).
Anxiety measure
The seven-item generalized anxiety scale (GAD-7) (Spitzer et al. 2006) was used to assess clinical levels of generalized anxiety. The scale is valid for Latinx individuals in the United States (Mills et al. 2014). Participants responded to this prompt: "Over the last two weeks, how often have you been bothered by the following problems?" with sample response items including, "Trouble relaxing" or "Not being able to stop or control worrying," among others. Answer choices were rated on a four-point Likert-style scale (zero corresponded with not at all and three with nearly every day). We summed the raw scores on the seven items. This sum ranged from 0 to 21. However, one tricky aspect of this variable is that, because of an error in the survey, a small subset of respondents were not prompted to answer one of the items in the anxiety scale. Thus, we created a mean anxiety score (sum of one's anxiety score divided by the number of items answered).
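A minimal sketch of this scoring step is shown below, assuming the seven item responses are stored as columns gad1 through gad7 of a pandas DataFrame with unanswered items left as missing; the column names and values are illustrative, not the survey's actual variable names.

```python
import numpy as np
import pandas as pd

# Illustrative responses for three students; NaN marks the item that a small
# subset of respondents was never prompted to answer.
df = pd.DataFrame({
    "gad1": [2, 0, 3], "gad2": [1, 0, 3], "gad3": [2, 1, np.nan],
    "gad4": [0, 0, 2], "gad5": [1, 0, 3], "gad6": [2, 1, 3], "gad7": [3, 0, 2],
})
items = [f"gad{i}" for i in range(1, 8)]

# Raw sum (0-21) only for respondents who answered all seven items, and the
# mean score used in the models: sum of answered items / number answered.
df["gad_sum"] = df[items].sum(axis=1, min_count=7)
df["gad_mean"] = df[items].mean(axis=1)
print(df[["gad_sum", "gad_mean"]])
```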
Risk factors and covariates
Family responsibilities. Respondents were asked, "In a typical month, which of the following kinds of help do you PROVIDE to your family members (i.e., parents, siblings, grandparents, aunts/uncles, cousins)? (select all that apply)." There were six drop-down items: helping pay family's bills or expenses, helping family with errands/ household chores (child or elder health), tutoring or helping family members with homework or classes, translating, giving advice, or other. Each of these six items was made into a dichotomous variable, in which one meant that the student engaged in the activity and zero meant they did not. We summed each of these binary variables to create a continuous measure of the number of family responsibilities students reported.
Family motivation
This variable was based on the prompt, "My family responsibilities motivate me to continue with my college studies." Students were asked to indicate the degree to which they agreed. They could answer one (strongly disagree), two (disagree), three (neither agree nor disagree), four (agree), or five (strongly agree). If students answered four or five, we coded family motivation as one. Otherwise, it was coded as zero.
Family support
The family support variable was measured by the question, "In a typical month, which of the following kinds of help do you receive from your family members (i.e., parents, siblings, grandparents, aunt/uncles, cousins, select all that apply)?" Respondents were presented a checklist with these answers: (1) paying my expenses (e.g., housing, health or care insurance, credit card, phone bills, etc.), (2) paying for tuition, (3) helping with errands or practical tasks (i.e., rides to school or childcare, if applicable), (4) tutoring or helping me with homework and classes, and (5) helping me to solve problems/give advice. We created a sum based on whether individuals said yes (coded as one) or no (coded as zero) on each of these five items. Then, we created a categorical variable describing these groups: students who reported no support from family, students who reported receiving one or two of the supports on the list, and students who reported receiving three or more types of help from family.
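A minimal sketch of this sum-and-bin coding in pandas is shown below; the column names (support_expenses and so on) are illustrative rather than the survey's actual variable names.

```python
import pandas as pd

# Illustrative binary indicators for the five listed kinds of family support.
df = pd.DataFrame({
    "support_expenses": [0, 1, 1],
    "support_tuition":  [0, 1, 0],
    "support_errands":  [0, 1, 1],
    "support_tutoring": [0, 0, 0],
    "support_advice":   [0, 1, 1],
})

support_items = list(df.columns)
df["support_count"] = df[support_items].sum(axis=1)

# Categorical grouping used in the analysis: none, one or two, three or more.
df["support_group"] = pd.cut(
    df["support_count"],
    bins=[-1, 0, 2, 5],
    labels=["none", "1-2 types", "3+ types"],
)
print(df[["support_count", "support_group"]])
```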
Deportation worries. Students were asked, "How often are you worried that family members or friends might be detained or deported?" Answers were on a Likert scale ranging from one to four. Number one corresponded to "never" and number four to "most of the time." If students answered some or most of the time for the item mentioned above, they have a value of one for the deportation worries variable. If they did not, the value was zero.
Covariates
Other variables include a binary measure of gender, a continuous measure of age, a dummy variable indicating previous work experience, and a categorical variable of relative household income. Work experience was based on a survey question that asked respondents, "Have you had paid work experience thus far?" Available responses were yes or no, coded as a binary variable (1, 0, respectively). Relative household income was measured by a categorical variable of household income quartile representing whether respondents belonged above the 25th, 50th (median), or 75th percentile of household income. Parental education is included in the descriptive statistics to show class background. This item was based on a survey question that asked respondents about the highest level of education achieved by their mother and their father. Based on this, we created a binary measure indicating whether at least one parent had attended college.
Analysis plan
We use multivariate ordinary least squares (OLS) regression to analyze the association between DACA status and self-reported anxiety score. We add our independent variables of interest iteratively in order to examine any changes in coefficients once explanatory variables are added to the model. In addition to our successive regression models (Models 1-5 in Table 2), we add interaction terms. Interaction terms test for the joint effect of two independent variables on an outcome variable. We pursue two interaction terms because of our interest in the intersections of social locations. The first interaction term is between gender and DACA status, shown as "DACA X Men" in Table 3. This term indicates whether the association between DACA and mental health differs by gender. The second interaction term of interest is "DACA X Family deportation worries." This interaction term indicates whether family deportation worries influence anxiety differences between DACAmented and non-DACAmented students.

Table 1 provides summary statistics of the key variables in the Latinx analytic sample (n = 660). Table 1 shows that over half of the analytic sample identify as women. The average age is twenty-one years old (SD: 2.65). The oldest respondent is twenty-nine years old. Twenty-seven percent of respondents report having at least one parent who is college educated. In terms of family demands and support, individuals reported having on average three (SD: 1.33) different types of family responsibilities. In terms of receiving support from family members, 7% reported not receiving any of the listed family supports (e.g., paying for expenses, paying for tuition, helping with errands/practical tasks, tutoring/helping with homework and classes, helping solve problems/give advice). Seventy-one percent reported receiving one or two types of support from their families. Twenty-two percent reported receiving three or more types of familial support. In terms of family motivation, 82% of Latinx undocumented undergraduates reported being highly motivated by family. Importantly, but not shown in Table 1, a majority of the sample are Mexican, South American, and/or Central American. In addition, over half of respondents live in California and are eligible for the California Dream Act. Seventy percent of the analytic sample are DACA recipients, and 58% reported worrying about their family's deportation. Over half reported having ever worked. Although a substantial number of students are from low-income households, there is variation. To capture the heterogeneity of income within Latinx undocumented undergraduates, we use a relative measure of income: income quartiles.

Table 1. Summary of key variables, UndocuScholars Project data set 2014, Latinx undocumented undergraduates. Note: Income quartiles had relatively high missingness (3%) in the Latinx subsample. We did a simple imputation of this variable for the regression models that follow, and the tabulations in this table show the original non-imputed percentages. Income quartiles are as follows: individuals are in the top (fourth) quartile if their household income was between $40,000 and $150,000, in the third quartile if between $30,000 and $39,000, in the second quartile if between $20,000 and $29,000, and in the first and lowest quartile if below $20,000. The mean anxiety score is the measure used in subsequent models.
The composition of students' household incomes in terms of quartiles is as follows: 18% are in the fourth quartile (household income between $40,000 and $150,000), 20% are in the third quartile (household income between $30,000 and $39,000), 28% are in the second income quartile (household income between $20,000 and $29,000), and 34% are in the first and lowest income quartile (household income below $20,000).

Table 2 shows the regression models that use anxiety score as the outcome variable with different predictor variables added iteratively. Model 1 includes demographic characteristics including gender, age, and household size. Model 2 adds family protective and risk factors (family responsibilities, family motivation, and family support). Model 3 adds one of the key immigration variables, DACA status, and Model 4 adds family deportation worries. Model 5 adds the socioeconomic variables ever worked and household income quartile.
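To make this model-building strategy concrete, the sketch below shows how the successive models and the later interaction terms could be fit in Python. It is illustrative only: the file name and column names (anxiety, men, age, hh_size, fam_resp, fam_motiv, fam_support, daca, deport_worry, ever_worked, income_q) are hypothetical stand-ins for the survey variables, not the authors' code or data.

    # Hypothetical sketch of the iterative OLS models described above (not the authors' code).
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("undocuscholars_latinx.csv")  # assumed analytic sample file, n = 660

    formulas = {
        "Model 1": "anxiety ~ men + age + hh_size",
        "Model 2": "anxiety ~ men + age + hh_size + C(fam_resp) + fam_motiv + C(fam_support)",
        "Model 3": "anxiety ~ men + age + hh_size + C(fam_resp) + fam_motiv + C(fam_support) + daca",
        "Model 4": ("anxiety ~ men + age + hh_size + C(fam_resp) + fam_motiv + C(fam_support)"
                    " + daca + deport_worry"),
        "Model 5": ("anxiety ~ men + age + hh_size + C(fam_resp) + fam_motiv + C(fam_support)"
                    " + daca + deport_worry + ever_worked + C(income_q)"),
    }
    fits = {name: smf.ols(f, data=df).fit() for name, f in formulas.items()}

    # Interaction models corresponding to Table 3: DACA x Men and DACA x family deportation worries
    inter1 = smf.ols(formulas["Model 5"] + " + daca:men", data=df).fit()
    inter2 = smf.ols(formulas["Model 5"] + " + daca:deport_worry", data=df).fit()
    print(fits["Model 5"].summary())

Each successive fit simply extends the previous formula, mirroring the way predictors are added in Models 1-5 of Table 2.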
Factors that predict anxiety levels
Model 1 in Table 2 shows that, consistent with previous research, students who identified as men reported lower anxiety levels compared to women. Age was associated with lower anxiety score, but this coefficient is small and not statistically significant. Household size had a positive association with anxiety score.
Model 2 in Table 2 adds family variables. Having a high number of family responsibilities does not seem to be associated with anxiety score at a statistically significant level, though students in the second quartile of family responsibilities have lower anxiety scores compared with students in the first quartile. Family motivation shows a strong negative association with anxiety score. This coefficient indicates that, compared with students who are not as motivated by their families, students who are highly motivated by family have an anxiety score that is 0.151 points lower (equivalent to a change of one-fifth of a standard deviation in anxiety score). We do not find a statistically significant association between receiving different levels of support from family and anxiety score. We were surprised by this finding.
Models 3 and 4 include important immigration variables. Model 3 shows that compared with individuals without DACA, DACAmented individuals have anxiety scores that are 0.198 points higher, equivalent to about one-fourth of a standard deviation increase in anxiety score. Model 4 includes family deportation worries. The coefficient for this variable is statistically significant. Worrying frequently about family deportation is associated with an increase in anxiety score equivalent to 30% of a standard deviation increase in anxiety score. Notably, adding family deportation worries slightly attenuates the DACA coefficient, which reduces from 0.198 to 0.154. Despite attenuation, the DACA coefficient remains statistically significant. This indicates that the heightened anxiety exhibited by DACA recipients when compared with non-DACA recipients is partly (but not completely) explained by family deportation worries. An important feature of DACA is that it provided recipients a work permit and a social security number, increasing access to previously unavailable jobs and internships. It is possible that one source of stress among DACAmented students is pressure to work. When we added the work experience variable (in Model 5), we find that having ever worked is associated with higher anxiety. Adding the work experience variable further attenuates the DACA coefficient (reducing it to 0.0767 and rendering it insignificant at conventional statistical levels). This means that DACA recipients' heightened anxiety relative to undocumented Latinx undergraduates without DACA was explained by socioeconomic factors (work demands and family income) and family deportation worries.

[Table 2 Regression coefficients with anxiety score as the outcome, UndocuScholars Project data set 2014, Latinx undocumented undergraduates. Coefficients are shown and standard errors are in parentheses; *** p < 0.01, ** p < 0.05, * p < 0.1.]
The last variable we consider in the main models is relative income, measured by income quartile. Relative income may shape students' experiences as undergraduates because limited financial resources may cause stress and may limit the time students have to focus solely on school. Model 5 shows that, relative to students with families in the first/lowest income quartile, students from households in the highest income quartile had lower anxiety scores. The size of this coefficient is notable, as it is larger than the coefficient of DACA status in Model 3.
In sum, the regression models in Table 2 show tremendous complexities in the mental health of Latinx undocumented undergraduates. The main factors that seem to matter across the board are gender, family motivation, and family deportation worries. These factors retain their substantively significant associations with anxiety score even when controlling for family responsibilities, work experience, and income.
The DACA coefficient tells a complex and understudied story. Although DACA may have relieved students of some aspects of the undocumented experience such as the inability to legally work, DACAmented Latinx undergraduates have elevated anxiety levels (as seen in Models 3 and 4). We found that while DACA recipients may have relatively high levels of anxiety compared with individuals without DACA, accounting for family deportation worries and socioeconomic factors fully explains this trend (Models 4 and 5).
The role of gender
Given the extensive literature about gender differences in the expression of mental health outcomes, we examine whether the relationship between DACA and mental health varies by gender identity. We do this by adding an interaction term, "DACA X Men," to our previous model with demographic, family, immigration, and socioeconomic status control variables (Model 5 in Table 2). Results are shown in the first column (Model 1) in Table 3. For brevity, we show only the main independent variables of interest. Model 1 in Table 3 shows that the interaction term "DACA X Men" is negative and statistically significant. The negative sign shows that, for men, the relationship between DACA and self-reported anxiety is weaker compared with that of women. In other words, among DACA recipients, women have higher anxiety levels than men, but among students without DACA, levels of anxiety are more even across gender identity. Figure 1 shows the interaction term visually to ease interpretation.
The y-axis of Fig. 1 shows the linear prediction of anxiety score based on the regression in Model 1 of Table 3. The x-axis shows gender identity, and the first two bars show the predicted anxiety scores of undocumented Latinx students without DACA. The bars on the right of Fig. 1 show the predicted anxiety scores of DACA recipients. In sum, Fig. 1 suggests that women with DACA seem to have elevated levels of anxiety even when controlling for the age, household size, family responsibilities, family support, family deportation worries, work experience, and household income.
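As an illustration of how Fig. 1-style predictions could be generated from the interaction model, the following sketch continues the hypothetical code above, holding the remaining covariates at their means (or at modal categories for categorical controls); it is a sketch under those naming assumptions, not the authors' plotting code.

    # Predicted anxiety by gender identity and DACA status, from the DACA x Men model (inter1).
    import itertools
    import pandas as pd

    grid = pd.DataFrame([{"daca": d, "men": m} for d, m in itertools.product([0, 1], [0, 1])])
    for col in ["age", "hh_size", "fam_motiv", "deport_worry", "ever_worked"]:
        grid[col] = df[col].mean()                 # continuous/binary controls held at their mean
    for col in ["fam_resp", "fam_support", "income_q"]:
        grid[col] = df[col].mode()[0]              # categorical controls held at the modal category

    grid["predicted_anxiety"] = inter1.predict(grid)
    print(grid[["daca", "men", "predicted_anxiety"]])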
Indeed, women with DACA have higher anxiety levels compared with women without DACA and compared with men with DACA. One possible explanation for this trend is that women with DACA may face unique family pressures and family responsibilities, and may provide more emotional labor to their families because of their ability to legally work. However, even when we control for family responsibilities, this interaction persists. In light of these findings, it is important to interpret these results with caution: the outcome variable may reflect expressions of mental health. Men with and without DACA may not be as open to expressing their mental health in online surveys, which is what this study uses for the analysis.

[Table 3 OLS regression predicting mean self-reported anxiety score (same variables as Model 5 in Table 2) with added interaction terms. Standard errors in parentheses; ***p < 0.01, **p < 0.05, *p < 0.1. Both models include the controls in Model 5 in Table 2.]
The role of worrying for family
The second interaction term of interest is "DACA X Family deportation worries." The coefficients of interest in this model are included in Model 2 in Table 3. The coefficients in this model are the same as those in Model 1 in Table 3, with the exception of the interaction term. Among those with DACA, those who worry about their family's deportation frequently report high levels of anxiety. The interaction coefficient is positive and is nearing statistical significance. For illustrative purposes, we also graph this interaction in Fig. 2. The y-axis is the same as that in Fig. 1, representing the linear prediction of anxiety score. The most important takeaway from Fig. 2 is that worries about family deportation do not seem to be associated with the anxiety levels of Latinx undocumented undergraduates without DACA. On the other hand, family deportation worries seem to elevate the anxiety of DACA recipients. This is important because it indicates that DACA recipients with high levels of worries about family deportation may have concerns that their families are at risk of surveillance from the state and of potential deportation. We think this is related to having DACA because non-DACAmented students with high levels of family deportation worries showed less anxiety compared with their DACAmented counterparts who were also highly worried about their family's deportation. This finding shows that, despite the promise of DACA, family-level deportation risk remains a threat to mental health. DACA in itself does not provide the tangible resources to thrive and be well.
There is a possibility that gender and worrying patterns are related. We looked more into descriptive patterns on this. Figure 3 shows that both women and men worry about their families. It does not seem to be the case that worries about family are exclusively present among women with DACA. Both men and women with DACA express high levels of worry about their family's deportation. It is men without DACA that seem to report relatively low levels of worry about their family's deportation compared both to women without DACA, and to men as well as women with DACA. Relating this to our previous finding, men with DACA report slightly lower levels of anxiety, and women with DACA have significantly higher levels of anxiety. We posit that this interaction cannot be explained away by family worries. Future researchers may wish to explore this nexus further.

[Figure note: predictions are based on the models in Table 3. All continuous covariates are held at their mean. The "No DACA" label corresponds to undocumented students not protected by the DACA program at the time of the survey (in 2014). Lines show 95% confidence intervals.]
Heterogeneity in the undocumented experience
The contributions of this paper are multifold. First, our most novel finding is that DACA does not protect the mental health of undocumented Latinx college students. Before our analysis, we expected that DACA recipients might have reduced anxiety scores thanks to their gained access to a range of rights, a decrease in stigma, and an increase in sense of belonging. After all, their protection from deportation in 2014 (the time of the survey data collection) seemed promising. Yet, our results show that DACA recipients had heightened anxiety due to a combination of demographic factors, family deportation worries, and work experience. Women with DACA appear to have elevated anxiety scores. Why might this be, and what does it mean?
First, undocumented Latinx college students with and without DACA have undocumented family members who may be subject to all the stressors associated with their family's immigration status. Family members of Latinx undocumented college students may be exposed to extreme inequalities in accessing education as well as occupational mobility and may face excessive barriers to health care. Non-DACA recipients also have these worries, but having high levels of worries about family deportation was more consequential for DACAmented Latinx college students compared with undocumented Latinx undergraduates without DACA.
The collective experience of undocumented family members and DACA recipients may increase exposure to surveillance, as DACA recipients provide the government their information such as address and history of addresses, potentially placing their family members without DACA protections at risk. In addition, DACA has always been a temporary measure (Gonzales and Ruszczyk 2021). DACA recipients may experience added stress about DACA renewals and about the future prospects of this program. Such worries, as can be seen by the recent threats to end the DACA program, would be well placed. This collective risk and worry dampens mental health (Gurrola and Ayón 2018).
It is possible that DACA recipients encounter unique stressors that add up and cumulatively make for a more stressful college experience. For instance, they may face pressures to work during college (Hsin and Ortega 2018). As our results show, having work experience helped explain the relationship between DACA and anxiety score. Importantly, DACAmented students in general may encounter acute anxiety-producing moments in their colleges because they may have to explain their liminal status as DACA recipients. For instance, one study in Colorado found that DACAmented students encountered issues during hiring processes because university personnel did not know how to handle the work permit documentation that accompanies DACA (Muñoz 2013). Moreover, all students in this study, regardless of DACA, reported rather high levels of anxiety, reflecting the overall possibility that colleges may not be prepared to provide undocumented undergraduates the resources to thrive and be well. Despite the passage of policies that facilitate access to college for undocumented students, professors may not understand the undocumented experience, campuses may have police on campus, and undocumented students may have the added stress of educating others about their undocumented experience (Muñoz 2013).
This study finds important heterogeneity in the undocumented student experience. As previously found, undocumented students differ in how they navigate their undocumented experience (Patler 2018). Factors that shape how undocumented students navigate their college experience include age of arrival and co-ethnic networks, among other factors (Patler 2018). We add to this literature by highlighting previously understudied factors that also matter. These include gender, socioeconomic status, and previous work experience. One previous study found that students with DACA may face the decision to choose work or attend school if they attend four-year universities, but that community college students with DACA adjust the units they take to accommodate their work schedules (Hsin and Ortega 2018). Interestingly, one study found that DACA recipients framed their receipt of DACA as a way to help support their families, which they reported motivated them to continue having hope that in the future they could repay their families (Luna and Montoya 2019). We suspect that the combined pressure to work during college may add stressors to the lives of Latinx undocumented college students who may already be facing institutional contexts not conducive to seeing them thrive. Given previous work that shows family contributions and responsibilities are gendered, women with DACA may have high levels of emotional and financial responsibilities to their families. This may be one reason for their elevated anxiety scores. The findings on anxiety scores of Latinx DACAmented women underline the need to support college student mental health using a gender-sensitive approach (Ai et al. 2015).
This study contains limitations. First, our data may not be representative of the Latinx undocumented college student population at the national level. It is possible that students with high levels of anxiety about their immigration status may be less likely to partake in surveys in general. If this is the case, our findings would underestimate anxiety scores. It is important to note that this study is not an assessment of clinical diagnoses. In addition, we did not capture potentially important differences in campus cultures and institutional resources by campus type. Recent research in higher education indicates these factors matter for student outcomes and resources that may shape student well-being (Garcia 2019;Reyes 2018). Last, we did not capture the mental health and gender relationship beyond the gender binary, and we encourage future researchers to include expansive gender identities thoughtfully in their surveys of undocumented individuals, as more recent surveys have done (Enriquez et al. 2020).
Despite limitations, this study holds important implications for data collection on Latinx college students. Research about Latinx college students that does not include information about family's deportation risk may miss important family-level factors that shape student well-being. One in twenty US-born children live in mixed-status families (Passel et al. 2018). Thus, the present study on undocumented Latinx college students demonstrates the possibility that US-born Latinx students' mental well-being may be threatened if they have undocumented family members.
Our research suggests that DACA alone does not provide young adults with tangible resources that protect against anxiety, such as authorization to reside in the United States, and it does not protect them from worrying about others' risk of deportation. These findings help immigrant advocates argue that it is not enough to maintain the DACA program; undocumented immigrants are facing significant stress and angst, which must be relieved through other pathways of citizenship and/or liberation for immigrants, as well as interventions to halt deportations.
The DACA program has been a target of the Trump administration. Although the data used in this study was collected in 2014, it may reflect a conservative estimate of the mental health of undocumented Latinx college students. Arguably, the political climate and the uncertainty about the future of DACA may heighten the anxiety of undocumented Latinx college students. In addition, the overall anti-immigrant presidential administration from 2016 to 2020 may have caused extreme stress to the broader Latinx as well as immigrant community with family ties to undocumented individuals.
We also find some promising results. Students who reported being motivated by their family have more positive mental health outcomes. This suggests that family may be a positive source of meaning-making and may represent a form of community cultural wealth (Yosso 2005) for undocumented Latinx college students. It also shows that, in response to volatile political contexts, Latinx college students forge ways of coping and resisting (Castrellón et al. 2017). In sum, the mental health of undocumented Latinx students is complex. There is heterogeneity emerging from multiple dimensions including gender, family motivation, DACA status, family deportation worries, and socioeconomic status. Given the current political moment and recent uncertainty with DACA, the wider public may need to be reminded that the promise of DACA may have been short-lived, and that permanent paths to legal status for undocumented young adults and their families may be necessary to promote their mental health, which is important for the retention of Latinx undergraduates in higher education.
|
2021-09-01T15:16:16.000Z
|
2021-06-17T00:00:00.000
|
{
"year": 2021,
"sha1": "e2cdb3f1bb19c23c2dc3804c9eb986b57cb9c0de",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1057/s41276-021-00325-4.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "247a1826343e3d9e3abcc52b1369ba263c636633",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
}
|
232228735
|
pes2o/s2orc
|
v3-fos-license
|
Knockdown of SLC39A4 Expression Inhibits the Proliferation and Motility of Gallbladder Cancer Cells and Tumor Formation in Nude Mice
Purpose Gallbladder cancer (GBC) is a common malignancy of the biliary tract and is characterized by rapid progression and early metastasis. Elucidating the molecular mechanisms of GBC could help to develop better treatment strategies. Materials and Methods Human GBC cell lines (GBC-SD and NOZ) were used to assess cell proliferation and migration with the MTT assay, colony formation, the wound-healing assay, and the Transwell™ assay. A xenograft model in nude mice was used to evaluate tumor growth in vivo. Results Using the two GBC cell lines, we found that absence of solute carrier family (SLC) 39A4 (which encodes the zinc transporter ZRT/IRT-like protein [ZIP]4) could suppress the proliferation and migration of cells. Additionally, absence of ZIP4 could impair growth of xenografts in nude mice. In contrast, overexpression of SLC39A4 could promote GBC cell proliferation and migration, and inhibit apoptosis. We revealed that SLC39A4 might affect GBC progression by modulating the signaling pathways responsible for the survival, energy supply and metastasis of cells, and indicated that SLC39A4 could serve as a novel therapeutic target for GBC. Conclusion SLC39A4 promoted the viability and motility of GBC cells, and tumor formation in nude mice. We demonstrated an oncogenic potential for SLC39A4.
Introduction
Gallbladder cancer (GBC) is aggressive, is more common in females, and is characterized by rapid progression and early metastasis. Treatment options are surgery, chemotherapy and radiotherapy. 1,2 However, GBC is usually diagnosed late due to a lack of early signs and clinical symptoms, which limits therapy choices and undermines a better prognosis. Thus, identification of specific diagnostic markers and elucidation of the underlying molecular mechanisms of GBC are very important. Indeed, the molecular pathology of GBC is poorly understood despite extensive research efforts. 3,4 Zinc is a vital element in the human body. It is not only a catalytic cofactor for several enzymes, it also has key roles in the signal transduction involved in cell differentiation, tissue development, and metabolism. 5,6 Zinc deficiency is associated with several diseases, 7-10 but zinc levels in most tumor cells are increased due to abnormal overexpression of zinc importers, which allow them to survive. 11 Solute carrier family (SLC) 39 (also known as ZRT/IRT-like protein [ZIP]) is responsible for transferring zinc from the extracellular space and organelles into the cytosol. 12 Among them, ZIP4 (which is encoded by SLC39A4) is the major transporter for zinc uptake. Its aberrant expression has been found in different types of cancers. In hepatocellular carcinoma (HCC), SLC39A4 suppresses the apoptosis and promotes the migration of cells. In addition, SLC39A4 mediates drug resistance in non-small-cell lung cancer (NSCLC). Moreover, it serves as a prognostic marker in multiple cancer types. [13][14][15][16][17] Nevertheless, the role of SLC39A4 in GBC is not clear.
We wished to assess the impact of SLC39A4 on the proliferation and migration of GBC cells. We also wished to clarify the changes of signaling pathways in response to SLC39A4 deficiency.
Cell Culture and Construction of a Stable Cell Line
Two human GBC cell lines, GBC-SD and NOZ, were obtained from the Japanese Collection of Research Bioresources Cell Bank (Tokyo, Japan). Both cell types were cultured in RPMI-1640 medium supplemented with 10% fetal bovine serum (FBS) at 37°C in an atmosphere of 5% CO2. HEK293T cells were maintained in Dulbecco's modified Eagle's medium supplemented with 10% FBS at 37°C in an atmosphere of 5% CO2.
GBC-SD cells and NOZ cells were infected with lentivirus encoding SLC39A4 short hairpin (sh) RNA or control lentivirus co-expressed with green fluorescent protein (GFP) according to manufacturer instructions. Briefly, HEK293T cells were used for lentivirus production. Stable cell lines expressing shSLC39A4 or control shRNA were sorted against GFP fluorescence 48 h after infection using fluorescence activated cell sorting and maintained in growth medium supplemented with puromycin (2 µg/mL) for one week.
Colony Formation
Cells were seeded in six-well plates at 1×10 3 /well and then incubated for 13 days (GBC-SD) or 8 days (NOZ). Incubation was followed by staining with 0.1% Crystal Violet for 20 min at room temperature. Colonies containing ≥50 cells were counted manually under a microscope. Each assay was undertaken in triplicate.
Wound Healing
Cells were plated in 96-well plates and cultured overnight. Wounds were made in confluent monolayer cells using the 96 Wounding Replicator (V&P Scientific, San Diego, CA, USA). Cells were cultured in medium supplemented with 0.5% FBS. Wound healing was detected at 0 h and 48 h within scraped lines. Representative fields at different time points were photographed (×100 magnification). Migration area was analyzed by Celigo™ (Nexcelom, Lawrence, MA, USA).
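If one wanted to quantify closure from such area measurements, a simple percent-closure calculation could look like the sketch below; the formula and the numbers are illustrative assumptions rather than the study's actual Celigo analysis pipeline.

    # Illustrative only: percent of the initial wound area closed after 48 h (placeholder values).
    def percent_closure(area_0h: float, area_48h: float) -> float:
        """Return the fraction of the 0 h wound area that has closed by 48 h, as a percentage."""
        return 100.0 * (area_0h - area_48h) / area_0h

    print(percent_closure(area_0h=1.50, area_48h=0.60))  # -> 60.0 (% of the wound closed)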
Transwell™ Migration
Cells were plated in the upper chamber of the apparatus with serum-free medium in the Transwell with inserts of pore size 8 μm (3422; Millipore, Billerica, MA, USA). The lower chambers were filled with complete culture medium. After incubation for 24 h, cells on the upper-membrane surface were removed with cotton tips. Then, membranes were washed with phosphate-buffered saline (PBS), fixed with 4% paraformaldehyde, washed twice with PBS, and stained with 0.2% Crystal Violet. The migrated cells from nine fields were counted under a light microscope (×200 magnification). All experiments were done in triplicate.

Flow Cytometry

For flow cytometry, cells were stained with reagents (Beyotime Institute of Technology, Beijing, China) for 15 min in the dark at room temperature. Samples were detected using a BD Accuri™ C6 Plus flow cytometer (BD Biosciences). Data were quantified and analyzed using a Guava easyCyte™ flow cytometer (EMD Millipore, Burlington, MA, USA).
Xenograft Model in Nude Mice
Animal studies were conducted in compliance with the regulations on management of animal welfare set by the Chinese Association for Laboratory Animal Sciences (Beijing, China). The protocol for animal experiments was approved by the Ethics Committee of Zhongshan Hospital (Shanghai, China).
RNA Extraction and Real-Time Reverse Transcription-Quantitative Polymerase Chain Reaction (RT-qPCR)
RNA was extracted using TRIzol™ Reagent (Invitrogen, Carlsbad, CA, USA) and used for complementary (c) DNA synthesis with PrimeScript™ RT Reagent Kit with gDNA Eraser (TaKaRa Biotechnology, Shiga, Japan). RT-qPCR was conducted with SYBR™ Green PCR Master Mix (Applied Biosystems, Foster City, CA, USA) using the ABI Prism 7500 Real-time PCR System (Applied Biosystems).
Western Blotting
Cells were washed with cold PBS. Proteins were extracted with lysis buffer supplemented with a proteinase inhibitor. Cell lysates were centrifuged at 12,000 × g for 10 min at room temperature. Supernatants were collected and protein concentrations were quantified using a bicinchoninic acid kit (P0010S; Beyotime Institute of Technology, Beijing, China).
Statistical Analysis
Each experiment was carried out at least three times independently. Data are the mean ± SD. Comparisons between two groups were carried out using two-tailed unpaired t-tests. P < 0.05 was considered significant.
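As an illustration of the comparison scheme described here, the sketch below computes mean ± SD for two groups and a two-tailed unpaired t-test; the values are made-up placeholders, not data from this study.

    # Minimal sketch of a mean ± SD summary and a two-tailed unpaired t-test (placeholder values).
    import numpy as np
    from scipy import stats

    sh_slc39a4 = np.array([0.42, 0.38, 0.45])   # e.g., relative colony counts, knockdown group
    sh_ctrl    = np.array([1.00, 0.95, 1.08])   # control group

    print(f"shSLC39A4: {sh_slc39a4.mean():.2f} ± {sh_slc39a4.std(ddof=1):.2f}")
    print(f"shCtrl:    {sh_ctrl.mean():.2f} ± {sh_ctrl.std(ddof=1):.2f}")

    t, p = stats.ttest_ind(sh_slc39a4, sh_ctrl)  # two-tailed unpaired t-test
    print(f"t = {t:.2f}, p = {p:.3g}  (significant if p < 0.05)")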
Establishment of SLC39A4 Stably Knocked-Down GBC Cells
Studies have uncovered a tumor-promoting role of ZIP4 (SLC39A4) in various cancer types. 14,15,17,18 We wished to explore its potential relationship to GBC. First, we knocked down SLC39A4 expression in a GBC cell line (GBC-SD) with lentivirus expressing three SLC39A4-specific shRNA sequences. shSLC39A4-2 had relatively higher efficiency in downregulating SLC39A4 mRNA and protein expression (Figure 1A and B). Thus, lentivirus harboring shSLC39A4-2 was chosen to infect another GBC cell line: NOZ. RT-qPCR and Western blotting showed that expression of ZIP4 (SLC39A4) had been inhibited significantly (Figure 1C and D).
Knockdown of SLC39A4 Expression Suppressed Cell Proliferation and Colony Formation
ZIP4 has been shown to repress apoptosis and promote the cell cycle in pancreatic cancer cells and HCC cells. 13,17 We wondered if this was also the case in GBC cells. Cell proliferation was inhibited significantly by silencing of SLC39A4 expression in NOZ cells ( Figure 2A) and in GBC-SD cells ( Figure 2B). The ability to form colonies was also decreased dramatically in SLC39A4 knocked-down cells ( Figure 2C and D), which suggested that ZIP4 was required for the proliferation of GBC cells.
Knockdown of SLC39A4 Expression Impeded Cell Migration
It has been reported that ZIP4 is overexpressed in NSCLC, and is associated with enhanced migration of cells. 14 In HCC, ZIP4 expression is also activated and cell migration increased. 17 Next, we sought to discover what would happen after SLC39A4 expression was silenced. In the wound-healing assay, NOZ/shSLC39A4 cells and GBC-SD/shSLC39A4 cells migrated much more slowly than control cells (Figure 3A and B). Also, in the Transwell migration assay, silencing of SLC39A4 expression inhibited transmigration of NOZ cells considerably. An identical result was obtained in GBC-SD/shSLC39A4 cells (Figure 3C and D). Taken together, these data indicated that ZIP4 facilitated migration of GBC cells.
Knockdown of SLC39A4 Expression Induced the Apoptosis and Cycle Arrest of Cells
To reveal the mechanisms underlying repression of cell proliferation, we analyzed the cell-cycle distribution of GBC cells after knockdown of SLC39A4 expression. After knockdown of SLC39A4 expression, the number of apoptotic GBC-SD cells and NOZ cells increased (Figure 4A-D). We observed accumulation of GBC-SD/shSLC39A4-2 cells in the G0/G1 phase compared with that in control cells, and an obvious reduction in the number of GBC-SD/shSLC39A4-2 cells in the S phase (Figure 4E and F). Knockdown of SLC39A4 expression also led to increased proportions of NOZ cells in the G0/G1 phase and decreased proportions in the S phase (Figure 4G and H), suggesting that cell-cycle arrest and apoptosis contributed considerably to inhibition of cell proliferation.
Overexpression of SLC39A4 Promoted the Proliferation, Cycle Progression, and Migration of Cells
To exclude off-target effects of shRNA knockdown, we overexpressed SLC39A4 in GBC-SD cells and NOZ cells. SLC39A4 was efficiently overexpressed in GBC-SD and NOZ cells (Figure 5A). As expected, GBC-SD cells and NOZ cells transfected with exogenous SLC39A4 exhibited enhanced viabilities (Figure 5B). The proportion of GBC-SD cells in the G0/G1 phase showed an obvious decline, but an increase in the proportion of GBC-SD cells in the S and G2/M phases after SLC39A4 overexpression was documented (Figure 5C and D). In SLC39A4-overexpressing NOZ cells, the number of G0/G1-phase cells also showed a decline but their number in the G2/M phase increased (Figure 5E and F). These results indicated that the division activity of GBC-SD and NOZ cells was stimulated considerably by increased expression of SLC39A4. Besides, more GBC-SD cells and NOZ cells with SLC39A4 overexpression transmigrated than cells with the control vector (Figure 5G-J).
Knockdown of SLC39A4 Expression Repressed Tumor Formation in Nude Mice
SLC39A4 expression was related positively to the proliferation and colony formation of GBC cells in vitro. Hence, we proceeded to determine whether SLC39A4 expression could affect tumor formation in vivo. After subcutaneous injection of control cells or NOZ/shSLC39A4 cells, xenografts were allowed to grow freely for up to 18 days. As expected, growth of tumors derived from NOZ/shSLC39A4 cells was restrained obviously compared with that in control groups (Figure 6A and B), and the average weight of tumors was also lower (Figure 6C). In accordance with these findings, expression of c-Met and Ki-67 was downregulated in tumor tissues of mice with knockdown of SLC39A4 expression (Figure 6D). These results suggested that SLC39A4 might serve to potentiate tumorigenesis.
Downregulation of SLC39A4 Expression Influenced the Signaling Pathways Involved in Tumor Progression
Based on the results detailed above, we tried to identify the signaling pathways that were affected by SLC39A4 insufficiency. As expected, expression of genes that regulate cell proliferation, such as EGFR, CDK4 and c-MET, was decreased (Figure 7A and B). 19,20 Besides, expression of genes that promote metastasis and vascular formation, such as GJA1 and SYK, was repressed (Figure 7A and B). [21][22][23] In particular, expression of VEGFC, a positive regulator of lymph-node metastasis and lymphangiogenesis, 24,25 was downregulated in response to SLC39A4 absence (Figure 7B). Inflammation and modulation of the immune microenvironment have key roles in influencing tumor progression. 26,27 RUNX3 aids in the control of immunity and inflammation. 28 After silencing of SLC39A4 expression, RUNX3 expression was reduced (Figure 7A). Also, RUNX3 has been found to correlate with anchorage-independent growth in pancreatic cancer cells. 29 Moreover, expression of BMP4 and SGK1 was downregulated (Figure 7A and B), both of which have been found to favor tumor-cell survival and whose inhibition could lead to apoptosis. [30][31][32] All the above data illustrated that ZIP4 deficiency could affect the different signaling pathways involved in tumor progression.
Discussion
The gallbladder stores bile and is located under the liver. GBC is the most common malignancy of the biliary tract, and often occurs in women. There are limited treatment options for GBC because it is often detected late and specific targets for therapy have not been identified. 2 Therefore, obtaining specific diagnostic markers for GBC is very important to better understand and treat this disease. However, little is known about the mechanisms of GBC. 4 Zinc is an essential mineral for life. It is indispensable for most enzymes to carry out catalytic activities and for nucleic acids to be synthesized. Zinc inadequacy leads to growth retardation, impaired immune function, and delayed healing of wounds. [5][6][7]9 Conversely, excess zinc can cause disorders. For instance, a large intake of zinc results in low copper status and reductions in the levels of high-density lipoproteins. 8,10 Moreover, the zinc level has been found to be increased in some tumor cells. 33 As a result, zinc concentration must be controlled tightly.
In humans, intracellular zinc homeostasis is regulated by two major zinc transporter families: the SLC30 (ZnT) family and SLC39 (ZIP) family. 12 The latter is responsible for zinc influx and could play a crucial part in malignancies in humans. 33,34 For instance, SLC39A6 promotes the proliferation and invasion, and inhibits apoptosis, in esophageal squamous cell carcinoma (ESCC) cells. 35 Suppression of SLC39A7 can abrogate survival of colorectal cancer cells. 36 Also, in metastatic breast cancer, SLC39A10 expression is correlated positively with lymph-node metastasis. 37 Expression of SLC39A4 (which encodes ZIP4) has been reported to be correlated positively with progression of pancreatic cancer. 13,38 Also, activated ZIP4 inhibits apoptosis of HCC cells and enhances their cell-cycle progression and migration. 17 In NSCLC, SLC39A4 expression has been shown to be associated with strengthened cell migration, cisplatin resistance, and poor survival. 14 SLC39A4 could even serve as a prognostic marker in ESCC. 15 All the observations mentioned above indicate that SLC39A4 could be used as a therapeutic target for human cancers. We wished to determine the effects of SLC39A4 on proliferation of GBC cells in vitro and in vivo, as well as the molecular signaling mechanisms involved. We discovered that downregulation of SLC39A4 expression could repress the growth and migration of cells significantly, as well as tumor formation in nude mice. Taken together, these results implied an oncogenic character for SLC39A4 in GBC cells.
Gene expression analysis showed that SLC39A4 regulated the expression of VEGFC and SYK. The latter is downstream of SRC kinases, and VEGFC is the ligand of VEGFR3. SYK is involved in vascular development, and VEGFC has been shown to promote lymphangiogenesis and lymph-node metastasis. 24,25,39 These phenomena are consistent with the effects of SLC39A10 mentioned above. 37 c-MET and CDK4 are important oncogenes which regulate GBC cell survival, metastasis and cell cycle. [40][41][42] Combinations of CDK4/6 and c-MET inhibitors have been applied for the treatment of glioblastoma. 43 Here, we showed that CDK4 and c-MET were regulated by SLC39A4. Both mRNA and protein abundance of CDK4 and c-MET were reduced in GBC cells with silenced SLC39A4. Thus, we proposed that SLC39A4 knockdown suppressed GBC growth and migration partly through downregulation of CDK4 and c-MET.
Owing to inhibition of ZIP4 expression, zinc concentrations may change in the cytosol of tumor cells. We found that various aspects of the behaviors and signaling pathways of GBC cells were influenced if SLC39A4 expression was downregulated. Considering zinc as a vital element for numerous enzymes to function normally, we wondered if these outcomes were caused by changes in the zinc concentration, or if ZIP4 participated in these pathways directly. Additional investigations are needed to clarify the exact roles of ZIP4 in GBC.
Conclusions
We demonstrated an oncogenic potential for SLC39A4. Suppression of SLC39A4 expression in GBC cells weakened their viability and motility, and tumor formation in nude mice. Conversely, high expression of SLC39A4 could promote the growth and migration of tumor cells. Based on these findings, we speculate that SLC39A4 is very likely to be a novel target for developing new strategies for treating GBC. Importantly, this study may help us gain some novel insights into the molecular events triggering GBC progression.
|
2021-03-16T05:32:21.225Z
|
2021-03-08T00:00:00.000
|
{
"year": 2021,
"sha1": "f6774539cc2220e6e3b3162ba74eede73ffbd793",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=67412",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f6774539cc2220e6e3b3162ba74eede73ffbd793",
"s2fieldsofstudy": [
"Biology",
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
211205247
|
pes2o/s2orc
|
v3-fos-license
|
Randomized Exploration for Non-Stationary Stochastic Linear Bandits
We investigate two perturbation approaches to overcome the conservatism that optimism based algorithms chronically suffer from in practice. The first approach replaces optimism with a simple randomization when using confidence sets. The second one adds random perturbations to its current estimate before maximizing the expected reward. For non-stationary linear bandits, where each action is associated with a $d$-dimensional feature and the unknown parameter is time-varying with total variation $B_T$, we propose two randomized algorithms, Discounted Randomized LinUCB (D-RandLinUCB) and Discounted Linear Thompson Sampling (D-LinTS), via the two perturbation approaches. We highlight the statistical optimality versus computational efficiency trade-off between them in that the former asymptotically achieves the optimal dynamic regret $\tilde{O}(d^{7/8} B_T^{1/4}T^{3/4})$, but the latter is oracle-efficient with an extra logarithmic factor in the number of arms compared to minimax-optimal dynamic regret. In a simulation study, both algorithms show outstanding performance in tackling the conservatism issue that Discounted LinUCB struggles with.
INTRODUCTION
A multi-armed bandit is the simplest model of decision making that involves the exploration versus exploitation trade-off [20]. Linear bandits are an extension of multi-armed bandits where the reward has linear structure with a finite-dimensional feature associated with each arm [2,13]. Two standard exploration strategies in stochastic linear bandits are the Upper Confidence Bound algorithm (LinUCB) [1] and Linear Thompson Sampling (LinTS) [8]. The former relies on optimism in the face of uncertainty and is a deterministic algorithm built upon the construction of a high-probability confidence ellipsoid for the unknown parameter vector. The latter is a Bayesian solution that maximizes the expected rewards according to a parameter sampled from the posterior distribution. Chapelle and Li [10] showed that Linear Thompson Sampling empirically performs better and is more robust to corrupted or delayed feedback than LinUCB. From a theoretical perspective, it enjoys a regret bound that is a factor of $\sqrt{d}$ worse than the minimax-optimal regret bound $\Theta(d\sqrt{T})$ that LinUCB enjoys. However, the minimax optimality of optimism comes at a cost: implementing UCB-type algorithms can lead to NP-hard optimization problems even for convex action sets [7].
Random perturbation methods were originally proposed in the 1950s by Hannan [15] in the full information setting where losses of all actions are observed. Kalai and Vempala [16] showed Hannan's perturbation approach leads to efficient algorithms by making repeated calls to an offline optimization oracle. They also gave a new name to this family of randomized algorithms: Follow the Perturbed Leader (FTPL). Recent works [4,5,17] have studied the relationship between FTPL and Follow the Regularized Leader (FTRL) algorithms and also investigated whether FTPL algorithms achieve minimax-optimal regret in full and partial information settings.
Abeille et al. [3] viewed Linear Thompson Sampling as a perturbation based algorithm, characterized a family of perturbations whose regrets can be analyzed, and raised an open problem to find a minimax-optimal perturbation. In addition to its significant role in smartly balancing exploration with exploitation, a perturbation based approach to linear bandits also reduces the problem to one call to the offline optimization oracle in each round. Recent works [18,19] have proposed randomized algorithms that use perturbation as a means to achieve oracle-efficient computation as well as better theoretical guarantees than LinTS, but there is still a gap between their regret bounds and the lower bound of $\Omega(d\sqrt{T})$. This gap is logarithmic in the number of actions, which can introduce extra dependence on dimension for large action spaces.
A new randomized exploration scheme was proposed in the recent work of Vaswani et al. [23]. In contrast to Hannan's perturbation approach that injects perturbation directly into an estimate, they replace optimism with random perturbation when using confidence sets for action selection in optimism based algorithms. This approach can be broadly applied to multi-armed bandit and structured bandit problems, and the resulting algorithms are theoretically optimal and empirically perform well since overall conservatism of optimism based algorithms can be tackled by randomizing the confidence level.
Linear bandit problems were originally motivated by applications such as online ad placement with features extracted from the ads and website users. However, users' preferences often evolve with time, which leads to interest in the non-stationary variant of linear bandits. Accordingly, adaptive algorithms that accommodate time-variation of environments have been studied in a rich line of works in both multi-armed bandits [9] and linear bandits. With prior information of the total variation budget, SW-LinUCB [12], D-LinUCB [22], and Restart-LinUCB [25] were constructed on the basis of the optimism in the face of uncertainty principle via a sliding window, exponential discounting weights, and restarting, respectively. Recently, Zhao and Zhang [24] discovered a technical mistake shared in the three prior works and presented a fix which deteriorates their dynamic regret bounds from $\tilde{O}(d^{2/3}B_T^{1/3}T^{2/3})$ to $\tilde{O}(d^{7/8}B_T^{1/4}T^{3/4})$. In addition, Luo et al. [21] and Chen et al. [11] studied fully adaptive and oracle-efficient algorithms assuming access to an optimization oracle when the total variation is unknown to the learner. It is still an open problem to design a practically simple, oracle-efficient and statistically optimal algorithm for non-stationary linear bandits.
CONTRIBUTION
In Section 2, we explicate, in the simpler stationary setting, the role of two perturbation approaches in overcoming conservatism that UCB-type algorithms chronically suffer from in practice. In one approach, we replace optimism with a simple randomization when using confidence sets. In the other, we add random perturbations to the current estimate before maximizing the expected reward. These two approaches result in Randomized LinUCB and Gaussian Linear Thompson Sampling for stationary linear bandits. We highlight the statistical optimality versus oracle efficiency trade-off between them.
In Section 3, we study the non-stationary environment and present two randomized algorithms with exponential discounting weights, Discounted Randomized LinUCB (D-RandLinUCB) and Discounted Linear Thompson Sampling (D-LinTS), to gracefully adjust to the time-variation in the true parameter. We explain the trade-off between statistical optimality and oracle efficiency in that the former asymptotically achieves the optimal dynamic regret $\tilde{O}(d^{7/8} B_T^{1/4} T^{3/4})$, but the latter enjoys computational efficiency due to sole reliance on an offline optimization oracle for large or infinite action sets. However, it incurs an extra $(\log K)^{3/8}$ gap in its dynamic regret bound, where $K$ is the number of actions.
In Section 4, we run multiple simulation studies based on Criteo live traffic data [14] to evaluate the empirical performances of D-RandLinUCB and D-LinTS. We observe that when high dimension and a large set of actions are considered, the two show outstanding performance in tackling the conservatism issue that the non-randomized D-LinUCB struggles with.
PRELIMINARIES
In stationary stochastic linear bandits, a learner chooses an action $X_t$ from a given action set $\mathcal{X}_t \subset \mathbb{R}^d$ in every round $t$, and subsequently observes a reward $Y_t = \langle X_t, \theta \rangle + \eta_t$, where $\theta \in \mathbb{R}^d$ is an unknown parameter and $\eta_t$ is a conditionally 1-subGaussian random variable. For simplicity, assume that $\|\theta\|_2 \le 1$ and, for all $x \in \mathcal{X}_t$, $\|x\|_2 \le 1$, and thus $|\langle x, \theta \rangle| \le 1$.
As a measure of evaluating a learner, the regret is defined as the difference between the rewards the learner would have received had it played the best action in hindsight, and the rewards actually received. Therefore, minimizing the regret is equivalent to maximizing the expected cumulative reward. Denote the best action in round $t$ as $x^{\star}_t = \arg\max_{x \in \mathcal{X}_t} \langle x, \theta \rangle$ and the expected regret as $R(T) = \mathbb{E}\big[\sum_{t=1}^{T} \langle x^{\star}_t - X_t, \theta \rangle\big]$. To learn about the unknown parameter $\theta$ from the history up to time $t-1$, $\mathcal{H}_{t-1} = \{(X_l, Y_l)\}_{1 \le l \le t-1}$, algorithms rely on the $\ell_2$-regularized least-squares estimate of $\theta$, $\hat{\theta}^{ls}_t$, and a confidence ellipsoid centered at $\hat{\theta}^{ls}_t$. We define $\hat{\theta}^{ls}_t = V_{t,\lambda}^{-1} \sum_{l=1}^{t-1} X_l Y_l$, where $V_{t,\lambda} = \lambda I_d + \sum_{l=1}^{t-1} X_l X_l^{\top}$ and $\lambda$ is a positive regularization parameter.
RANDOMIZED EXPLORATION
The standard solutions in stationary stochastic linear bandits are the optimism based algorithm (LinUCB, Abbasi-Yadkori et al. [1]) and Linear Thompson Sampling (LinTS, Agrawal and Goyal [8]). While the former obtains the theoretically optimal regret bound $\tilde{O}(d\sqrt{T})$, the latter empirically performs better in spite of its regret bound being $\sqrt{d}$ worse than that of LinUCB [10]. In the finite-arm setting, the regret bound of Gaussian Linear Thompson Sampling (Gaussian-LinTS) is improved by (log K)/d as a special case of Follow-the-Perturbed-Leader-GLM (FPL-GLM, Kveton et al. [19]). Also, a series of randomized algorithms for linear bandits were proposed in recent works: Linear Perturbed History Exploration (LinPHE, Kveton et al. [18]) and Randomized Linear UCB (RandLinUCB, Vaswani et al. [23]). They are categorized in terms of regret bounds, randomness, and oracle access in Table 1, where we denote $K = \max_{t \in [T]} |\mathcal{X}_t|$ in the finite-arm setting.
There are two families of randomized algorithms according to the way perturbations are used. The first algorithm family is designed to choose an action by maximizing the expected reward after adding a random perturbation to the estimate. Gaussian-LinTS, LinPHE, and FPL-GLM are in this family. But they are limited in that their regret bounds, $\tilde{O}(d\sqrt{T \log K})$, depend on the number of arms, and lead to $\tilde{O}(d^{3/2}\sqrt{T})$ regret bounds when the action set is infinite. The other family, including RandLinUCB, is constructed by replacing the optimism with simple randomization when choosing a confidence level, to handle the chronic issue that UCB-type algorithms are too conservative. This randomized version of LinUCB matches the optimal regret bound of LinUCB as well as the empirical performance of LinTS.
Oracle point of view : We assume that the learner has access to an algorithm that returns a near-optimal solution to the offline problem, called an offline optimization oracle. It returns the optimal action that maximizes the expected reward from a given action space X ⊂ R d when a parameter θ ∈ R d is given as input.
Definition 1 (Offline Optimization Oracle). There exists an algorithm, A.M.O., which, when given a pair of an action set $\mathcal{X} \subset \mathbb{R}^d$ and a parameter $\theta \in \mathbb{R}^d$ as input, returns $x^{\star} = \arg\max_{x \in \mathcal{X}} \langle x, \theta \rangle$.

Both the non-randomized LinUCB and RandLinUCB are required to compute the spectral norms $\|x\|_{V_{t,\lambda}^{-1}}$ of all actions in every round, so that they cannot be efficiently implemented with an infinite set of arms. The main advantage of the algorithms in the first family, such as Gaussian-LinTS, LinPHE, and FPL-GLM, is that they rely on an offline optimization oracle in every round $t$, so that the optimal action can be efficiently obtained within polynomial time from a large or even infinite action set.
Improved regret bound of Gaussian-LinTS: In FPL-GLM, it is required to generate perturbations and save the $d$-dimensional feature vectors $\{X_l\}_{l=1}^{t-1}$ in order to obtain the perturbed estimate in every round $t$, which causes a computational burden and a memory issue for storage. However, once perturbations are Gaussian in the linear model, adding univariate Gaussian perturbations to historical rewards is the same as perturbing the estimate $\hat{\theta}^{ls}_t$ by a multivariate Gaussian perturbation because of its linear invariance property, and the resulting algorithm is approximately equivalent to Gaussian Linear Thompson Sampling [8], which samples $\tilde{\theta}_t \sim \mathcal{N}(\hat{\theta}^{ls}_t, a^2 V_{t,\lambda}^{-1})$.
It naturally implies the regret bound of Gaussian-LinTS is improved by (log K)/d with finite action sets [19].
Equivalence between Gaussian-LinTS and RandLinUCB: Another perspective on the Gaussian-LinTS algorithm is that it is equivalent to RandLinUCB with decoupled perturbations across arms, due to the linear invariance property of Gaussian random variables. If perturbations are coupled, we compute the perturbed expected rewards of all actions using a randomly chosen confidence level $Z_t \sim \mathcal{N}(0, a^2)$ instead of $Z_{t,x}$. In the decoupled RandLinUCB, where each arm has its own random confidence level, more variation is generated, so its regret bound has an extra logarithmic gap that depends on the number of decoupled actions. In other words, the standard (coupled) RandLinUCB enjoys a minimax-optimal regret bound due to coupled perturbations. However, there is a cost to its theoretical optimality: it cannot rely solely on an offline optimization oracle and thus loses computational efficiency. We thus have a trade-off between efficiency and optimality described in the two design principles of perturbation based algorithms.
PRELIMINARIES
In each round $t \in [T]$, an action set $\mathcal{X}_t \subset \mathbb{R}^d$ is given to the learner and it has to choose an action $X_t \in \mathcal{X}_t$. Then, the reward $Y_t = \langle X_t, \theta_t \rangle + \eta_t$ is observed by the learner, where $\theta_t \in \mathbb{R}^d$ is an unknown time-varying parameter and $\eta_t$ is a conditionally 1-subGaussian random variable. The non-stationary assumption allows the unknown parameter $\theta_t$ to be time-variant within a total variation budget $B_T$, i.e., $\sum_{t=1}^{T-1} \|\theta_{t+1} - \theta_t\|_2 \le B_T$. It is a nice way of quantifying time-variations of $\theta_t$ in that it covers both slowly-changing and abruptly-changing environments. For simplicity, assume $\|\theta_t\|_2 \le 1$, for all $x \in \mathcal{X}_t$, $\|x\|_2 \le 1$, and thus $|\langle x, \theta_t \rangle| \le 1$.
In a similar way to the stationary setting, denote the best action in round $t$ as $x^{\star}_t = \arg\max_{x \in \mathcal{X}_t} \langle x, \theta_t \rangle$ and the expected dynamic regret as $R(T) = \mathbb{E}\big[\sum_{t=1}^{T} \langle x^{\star}_t - X_t, \theta_t \rangle\big]$, where $X_t$ is the chosen action at time $t$. The goal of the learner is to minimize the expected dynamic regret.
In a stationary stochastic environment where the reward has a linear structure, the linear upper confidence bound algorithm (LinUCB) follows the principle of optimism in the face of uncertainty (OFU). Under this OFU principle, three recent works of Cheung et al. [12], Russac et al. [22], and Zhao et al. [25] proposed sliding window linear UCB (SW-LinUCB), discounted linear UCB (D-LinUCB), and restarting linear UCB (Restart-LinUCB), which are non-stationary variants of LinUCB designed to adapt to the time-variation of $\theta_t$. The first two algorithms rely on weighted least-squares estimators, with equal weights given only to the most recent $w$ observations (where $w$ is the length of a sliding window) and with exponentially discounting weights, respectively. The last algorithm proceeds in epochs and is periodically restarted to be resilient to the drift of the underlying parameter $\theta_t$.
The three non-randomized algorithms, based on three different approaches, are known to achieve the dynamic regret bound $\tilde{O}(d^{7/8} B_T^{1/4} T^{3/4})$ using the Bandit-over-Bandit (BOB) mechanism [12] without prior information on $B_T$, but they share the inefficiency of implementation with LinUCB [1] in that the computation of the spectral norms of all actions is required. Furthermore, they are built upon the construction of a high-probability confidence ellipsoid for the unknown parameter, and thus they are deterministic and their confidence ellipsoids become too wide when high-dimensional features are available. In this section, two randomized exploration algorithms, discounted randomized LinUCB (D-RandLinUCB) and discounted linear Thompson sampling (D-LinTS), are proposed to handle the computational inefficiency and conservatism that the optimism-based algorithms suffer from. The dynamic regret bounds, randomness, and oracle access of the algorithms are reported in Table 2.
WEIGHTED LEAST-SQUARES ESTIMATOR
First, we study the weighted least-squares estimator with discounting factor $0 < \gamma < 1$. In round $t$, the weighted least-squares estimator is obtained in closed form, $\hat{\theta}^{wls}_t = W_{t,\lambda}^{-1} \sum_{l=1}^{t-1} \gamma^{-l} X_l Y_l$, where $W_{t,\lambda} = \sum_{l=1}^{t-1} \gamma^{-l} X_l X_l^{\top} + \lambda \gamma^{-(t-1)} I_d$ and $\widetilde{W}_{t,\lambda}$ denotes its analogue with squared weights $\gamma^{-2l}$ and regularization $\lambda \gamma^{-2(t-1)}$, as in [22]. This form is closely connected with the covariance matrix of $\hat{\theta}^{wls}_t$. For simplicity, we denote $V_t = W_{t,\lambda} \widetilde{W}_{t,\lambda}^{-1} W_{t,\lambda}$.

Lemma 2 (Weighted Least-Squares Confidence Ellipsoid, Theorem 1 [22]). Assume the stationary setting where $\theta_t = \theta$. For any $\delta > 0$, with probability at least $1 - \delta$, $\|\hat{\theta}^{wls}_t - \theta\|_{V_t} \le \beta_t$ for all $t$, where $\beta_t$ is the confidence radius given in [22].

While Lemma 2 states that the confidence ellipsoid $C_t = \{\theta : \|\theta - \hat{\theta}^{wls}_t\|_{V_t} \le \beta_t\}$ contains the true parameter with high probability in the stationary setting, the true parameter $\theta_t$ is not necessarily inside the confidence ellipsoid $C_t$ in the non-stationary setting because of variation in the parameters. We alternatively define a surrogate parameter $\bar{\theta}_t = W_{t,\lambda}^{-1}\big(\sum_{l=1}^{t-1} \gamma^{-l} X_l X_l^{\top} \theta_l + \lambda \gamma^{-(t-1)} \theta_t\big)$, which belongs to $C_t$ with probability at least $1 - \delta$, as formally stated in Lemma 4.
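To make these quantities concrete, here is a small NumPy sketch of $\hat{\theta}^{wls}_t$, $W_{t,\lambda}$, $\widetilde{W}_{t,\lambda}$ and $V_t$ under the weighting convention written above (which is itself a reconstruction following [22]); it is illustrative, not code from the paper, and in practice the $\gamma^{-l}$ weights would be rescaled for numerical stability.

    # Sketch of the discounted weighted least-squares quantities (illustrative, assumed conventions).
    import numpy as np

    def discounted_wls(X_hist, Y_hist, gamma=0.99, lam=1.0):
        """X_hist: (t-1, d) past features; Y_hist: (t-1,) past rewards."""
        t_minus_1, d = X_hist.shape
        W = lam * gamma ** (-t_minus_1) * np.eye(d)          # lambda * gamma^{-(t-1)} I_d
        W_tilde = lam * gamma ** (-2 * t_minus_1) * np.eye(d)
        b = np.zeros(d)
        for l in range(1, t_minus_1 + 1):                    # l = 1, ..., t-1
            x, y = X_hist[l - 1], Y_hist[l - 1]
            W += gamma ** (-l) * np.outer(x, x)
            W_tilde += gamma ** (-2 * l) * np.outer(x, x)
            b += gamma ** (-l) * y * x
        theta_wls = np.linalg.solve(W, b)                    # weighted least-squares estimate
        V = W @ np.linalg.solve(W_tilde, W)                  # V_t = W W~^{-1} W
        return theta_wls, W, V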
RANDOMIZED EXPLORATION
In this section, we propose two randomized algorithms for non-stationary stochastic linear bandits, Discounted Randomized LinUCB (D-RandLinUCB) and Discounted Linear Thompson Sampling (D-LinTS). To gracefully adapt to environmental variation, the weighting method with exponentially discounting factor is directly applied to RandLinUCB and Gaussian-LinTS, respectively. The random perturbations are injected into D-RandLinUCB and D-LinTS in different fashions: either by replacing optimism with simple randomization in deciding the confidence level, or by perturbing estimates before maximizing the expected rewards.
Discounted Randomized Linear UCB
Following the optimism in the face of uncertainty principle, D-LinUCB [22] chooses an action by maximizing the upper confidence bound of the expected reward based on $\hat{\theta}^{wls}_t$ and a confidence level $a$. Motivated by the recent work of Vaswani et al. [23], our first randomized algorithm in the non-stationary linear bandit setting is constructed by replacing the confidence level $a$ with a random variable $Z_t \sim \mathcal{D}$; this non-stationary variant of the RandLinUCB algorithm is called Discounted Randomized LinUCB (D-RandLinUCB, Algorithm 1), which chooses $X_t = \arg\max_{x \in \mathcal{X}_t} \langle x, \hat{\theta}^{wls}_t \rangle + Z_t \|x\|_{V_t^{-1}}$.
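A minimal sketch of this selection rule for a finite arms matrix, assuming a Gaussian choice of the distribution $\mathcal{D}$ (one admissible option), is given below; it is illustrative, not the authors' implementation.

    # D-RandLinUCB selection sketch: one shared random confidence level per round (assumed Gaussian D).
    import numpy as np

    def d_randlinucb_action(arms, theta_wls, V, a=1.0, rng=None):
        """arms: (K, d) array of the feature vectors available in round t."""
        rng = np.random.default_rng() if rng is None else rng
        z_t = rng.normal(0.0, a)                                       # coupled confidence level Z_t
        V_inv = np.linalg.inv(V)
        norms = np.sqrt(np.einsum("ki,ij,kj->k", arms, V_inv, arms))   # ||x||_{V_t^{-1}} for each arm
        scores = arms @ theta_wls + z_t * norms
        return int(np.argmax(scores))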
Discounted Linear Thompson Sampling
The idea of perturbing estimates via random perturbation in the LinTS algorithm can be directly applied to the non-stationary setting by replacing $\hat{\theta}^{ls}_t$ and the Gram matrix $V_{t,\lambda}$ with the weighted least-squares estimator $\hat{\theta}^{wls}_t$ and its corresponding matrix $V_t = W_{t,\lambda} \widetilde{W}_{t,\lambda}^{-1} W_{t,\lambda}$. We call the result Discounted Linear Thompson Sampling (D-LinTS, Algorithm 2). The motivation of D-LinTS arises from its equivalence to D-RandLinUCB with decoupled perturbations $Z_{t,x}$ for all $x \in \mathcal{X}_t$ in round $t$. The perturbations above are decoupled in that the random perturbation is not shared across arms, and thus they generate more variation and accordingly a $(\log K)^{3/8}$ larger regret bound than that of the D-RandLinUCB algorithm, which is associated with coupled perturbations $Z_t$. By paying a logarithmic regret gap in terms of $K$ as a cost, the innate perturbation of D-LinTS allows it to have offline optimization oracle access, in contrast to D-LinUCB and D-RandLinUCB. Therefore, the D-LinTS algorithm can be computationally efficient even with an infinite action set.
Algorithm 2 Discounted Linear Thompson Sampling
Input: λ ≥ 1, 0 < γ < 1, and a > 0
For each round t: sample a perturbed estimate $\tilde{\theta}_t$ around $\hat{\theta}^{wls}_t$ (Gaussian perturbation with scale $a$)
Oracle: $X_t = \arg\max_{x\in\mathcal{X}_t} \langle x, \tilde{\theta}_t\rangle$
Play action $X_t$ and receive reward $Y_t$
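The following is a minimal sketch of how the sampling and oracle call in Algorithm 2 might be implemented, assuming the perturbed estimate is drawn as $\tilde{\theta}_t \sim \mathcal{N}(\hat{\theta}^{wls}_t, a^2 V_t^{-1})$; the Cholesky-based sampling and all names are our illustrative choices, not the authors' reference implementation.

```python
import numpy as np

def d_lints_action(argmax_oracle, theta_hat, V_inv, a, d, rng):
    """Sample a perturbed estimate and delegate arm selection to an oracle.

    argmax_oracle: callable mapping a parameter vector to the maximizing arm,
    which is what allows large or infinite action sets.
    V_inv: inverse of V_t; a: perturbation scale (e.g. a = 1, non-inflated).
    """
    # theta_tilde ~ N(theta_hat, a^2 * V_t^{-1}), sampled via a Cholesky factor.
    L = np.linalg.cholesky(V_inv)
    theta_tilde = theta_hat + a * L @ rng.standard_normal(d)
    return argmax_oracle(theta_tilde)
```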
ANALYSIS
We construct a general regret bound for linear bandit algorithms on top of the prior work of Kveton et al. [18].
The difference from their work is that the action set $\mathcal{X}_t$ varies with time $t$ and can contain infinitely many arms. Also, a non-stationary environment is considered, where the true parameter $\theta_t$ changes within total variation $B_T$. The expected dynamic regret is decomposed into surrogate regret and a bias term arising from total variation.
Surrogate Instantaneous Regret
To bound the surrogate instantaneous regret $\mathbb{E}[\langle x_t^{\star} - X_t, \bar{\theta}_t\rangle]$, we define three events $E^{wls}$, $E^{conc}_t$, and $E^{anti}_t$, corresponding respectively to the estimation, concentration, and anti-concentration properties of the perturbed rewards. The choice of $\tilde{f}_t(x)$ is made by algorithmic design, which determines both $c_1$ and $c_2$ simultaneously. In round $t$, we consider the general algorithm which maximizes the perturbed expected reward $\tilde{f}_t(x)$ over the action space $\mathcal{X}_t$. The following theorem is an extension of Theorem 1 of [18] to the time-evolving environment.
Theorem 3. Assume we have $\lambda \ge 1$ and $c_1, c_2 \ge 1$ satisfying $P(E^{wls}) \ge 1 - p_1$, $P(E^{conc}_t) \ge 1 - p_2$, and $P(E^{anti}_t) \ge p_3$, and let $c_3 = 2d\log(\tfrac{1}{\gamma}) + \tfrac{2d}{T}\log\big(1 + \tfrac{1}{d\lambda(1-\gamma)}\big)$. Let $\mathcal{A}$ be an algorithm that chooses the arm $X_t = \arg\max_{x\in\mathcal{X}_t}\tilde{f}_t(x)$ at time $t$. Then the expected surrogate instantaneous regret of $\mathcal{A}$, $\mathbb{E}[\langle x_t^{\star} - X_t, \bar{\theta}_t\rangle]$, is bounded in terms of $c_1 + c_2$, $c_3$, and the probabilities $p_1$, $p_2$, $p_3$.

Proof. First, we define $\Delta_x = \langle x_t^{\star} - x, \bar{\theta}_t\rangle$ for each $x$ in round $t$. Given the history $H_{t-1}$, we assume that the event $E^{wls}$ holds and let $\bar{S}_t = \{x \in \mathcal{X}_t : (c_1 + c_2)\|x\|_{V_t^{-1}} \ge \Delta_x \text{ and } \Delta_x \ge 0\}$ be the set of arms that are under-sampled and worse than $x_t^{\star}$ given $\bar{\theta}_t$ in round $t$. Among them, let $U_t = \arg\min_{x\in\bar{S}_t} \|x\|_{V_t^{-1}}$ be the least uncertain under-sampled arm in round $t$. By the definition of the optimal arm, $x_t^{\star} \in \bar{S}_t$. The set of sufficiently sampled arms is defined as $S_t = \{x \in \mathcal{X}_t : (c_1 + c_2)\|x\|_{V_t^{-1}} \le \Delta_x \text{ and } \Delta_x \ge 0\}$, and we write $c = c_1 + c_2$. Note that any action $x \in \mathcal{X}_t$ with $\Delta_x < 0$ can be neglected, since the regret induced by such actions is negative and hence upper bounded by zero. Given the history $H_{t-1}$, $U_t$ is deterministic while $X_t$ is random because of the innate randomness in $\tilde{f}_t$. The surrogate instantaneous regret, and then its expectation, can therefore be bounded by a chain of inequalities whose steps are justified as follows: one step uses the definition of $U_t$ as the least uncertain arm in $\bar{S}_t$, which is deterministic given $H_{t-1}$; one step uses a bound involving the smallest eigenvalue $\lambda_{\min}$; one step holds on the event $E^{wls}$; and one step holds since for any $y \in S_t$, $\tilde{f}_t(y) \le \langle y, \bar{\theta}_t\rangle + c\|y\|_{V_t^{-1}} \le \langle y, \bar{\theta}_t\rangle + \Delta_y = \langle x_t^{\star}, \bar{\theta}_t\rangle$.
For D-RandLinUCB, the perturbed expected reward is $\tilde{f}_t(x) = \langle x, \hat{\theta}^{wls}_t\rangle + Z_t\|x\|_{V_t^{-1}}$ with a coupled perturbation $Z_t \sim \mathcal{D}$. For D-LinTS, perturbing the estimate with $\mathcal{N}(0, a^2 I_d)$ noise is equivalent, by the linear invariance property of Gaussian distributions, to perturbing each arm's reward with a decoupled term $Z_{t,x}\|x\|_{V_t^{-1}}$, where $Z_{t,x} \sim \mathcal{N}(0, a^2)$. If we assume $a^2 = 14c_1^2$, then the concentration and anti-concentration conditions of Theorem 3 are satisfied.

Proof. (a) For D-RandLinUCB, we write the perturbed expected reward as $\tilde{f}_t(x) = \langle x, \hat{\theta}^{wls}_t\rangle + Z_t\|x\|_{V_t^{-1}}$, and the conditions follow with $a^2 = 14c_1^2$.

(b) For D-LinTS, the argument proceeds in the same way as in (a), using the decoupled perturbations $Z_{t,x} \sim \mathcal{N}(0, a^2)$, again with $a^2 = 14c_1^2$.
Dynamic Regret
The dynamic regret bound of the general randomized algorithm is stated below.
Theorem 7. Let $\mathcal{A}$ be an algorithm that chooses the arm $X_t = \arg\max_{x\in\mathcal{X}_t}\tilde{f}_t(x)$ at time $t$. The expected dynamic regret of $\mathcal{A}$ is bounded, for any integer $D > 0$, by the sum of a surrogate regret term and a bias term.

Proof. The dynamic regret is decomposed into two terms: (A) the expected surrogate regret and (B) the bias arising from time variation of the true parameter. The expected surrogate regret term (A) is bounded by applying Theorem 3 to each round (first inequality); the second inequality works because both the dynamic regret and the surrogate regret are upper bounded by $2T$ and $c_1 + c_2 \ge 2$; and the last inequality holds by Lemma 11 in Appendix A.2. For any integer $D > 0$, the bias term (B) is bounded by interchanging the order of summations and using the matrix $W_{t,\lambda}^{-2}$.

With the optimal choice of $c_1$, $c_2$, and $a$ derived from Lemmas 4-6, the dynamic regret bounds of D-RandLinUCB and D-LinTS are stated in Corollaries 8 and 9, respectively. If $B_T$ is unknown, D-RandLinUCB together with the Bandits-over-Bandits mechanism enjoys an expected dynamic regret of $\tilde{O}(d^{7/8}B_T^{1/4}T^{3/4})$. The detailed proofs of Theorem 7 and Corollaries 8 and 9 are deferred to Appendix A.2, and the details for the case of unknown $B_T$ are deferred to Appendix B. Note that the exponentially discounting weights can be replaced by a sliding-window or restarting strategy to accommodate the evolving environment. We can construct sliding-window randomized LinUCB (SW-RandLinUCB) and sliding-window linear Thompson sampling (SW-LinTS), or restarting randomized LinUCB (Restart-RandLinUCB) and restarting linear Thompson sampling (Restart-LinTS), via the two perturbation approaches, and they maintain the same trade-off between oracle efficiency and theoretical guarantees. With unknown total variation $B_T$, we can also utilize the Bandits-over-Bandits mechanism by applying the EXP3 algorithm over these algorithms with different window sizes [12] or epoch sizes [25, 24], respectively.
Trade-off between Oracle Efficiency and Minimax Optimality: Corollary 8 shows that D-RandLinUCB does not match the lower bound for dynamic regret, $\Omega(d^{2/3}B_T^{1/3}T^{2/3})$, but it achieves the same dynamic regret bound as the three non-randomized algorithms SW-LinUCB, D-LinUCB and Restart-LinUCB. However, D-RandLinUCB is as computationally inefficient as D-LinUCB in large action spaces, since the spectral norm of each action with respect to the matrix $V_t^{-1}$ must be computed in every round $t$. In contrast, the D-LinTS algorithm relies on offline optimization oracle access via perturbation and thus can be implemented efficiently in the infinite-arm setting, and even in the contextual bandit setting. As the cost of its oracle efficiency, D-LinTS achieves a dynamic regret bound that is $(\log K)^{3/8}$ worse than that of D-RandLinUCB in the finite-arm setting. There exist two sources of variation in D-LinTS: algorithmic variation generated by perturbing the estimate $\hat{\theta}^{wls}_t$, and environmental variation induced by the time-varying environment. The two variations are hard to distinguish from the learner's perspective, so the effect of the algorithmic variation is alleviated by being partially absorbed into the environmental variation. This is why D-LinTS and D-LinUCB produce a $d^{3/8}$ gap in dynamic regret bounds with an infinite set of arms, which is smaller than the $d^{1/2}$ gap between the regret bounds of LinUCB and LinTS in the stationary environment.
In simulation studies, we evaluate the empirical performance of D-RandLinUCB and D-LinTS. We use a sample of 30 days of Criteo live traffic data [14] obtained by 10% downsampling without replacement. Each line corresponds to one impression that was displayed to a user, with contextual variables as well as information on whether it was clicked or not. We kept the campaign variable and the categorical variables cat1 through cat9, except cat7. We experiment with several dimensions d = 10, 20, 50 and numbers of arms K = 10, 100. Among all one-hot coded contextual variables, d feature variables were selected by Singular Value Decomposition for dimensionality reduction. We construct two linear models, and the model switch occurs at time 4000. The parameter θ in the initial model is obtained from a linear regression model, and we obtain the true parameter θ of the second model by switching the signs of 60% of the components of θ. In each round, the K arms given to all algorithms are sampled equally from two separate pools of 10000 arms corresponding to clicked and non-clicked impressions. The rewards are generated from the linear model with additive Gaussian noise of variance σ² = 0.15.
We compare the randomized algorithms D-RandLinUCB and D-LinTS to discounted linear UCB (D-LinUCB) as a benchmark. We also compare them to linear Thompson sampling (LinTS) and oracle restart LinTS (LinTS-OR). An oracle restart knows about the change-point and restarts the algorithm immediately after the change. In D-RandLinUCB, we use a truncated normal distribution with zero mean and standard deviation 2/5 over [0, ∞) as $\mathcal{D}$, to ensure that its randomly chosen confidence bound belongs to that of D-LinUCB with high probability. Also, we use the non-inflated version by setting a = 1 when implementing both LinTS and D-LinTS [23]. The regularization parameter is λ = 1, the time horizon is T = 10000, and the cumulative dynamic regrets of the algorithms are averaged over 100 independent replications in Figure 1.
We observe the following patterns in Figure 1. First, the two randomized algorithms, D-RandLinUCB and D-LinTS, outperform the non-randomized one, D-LinUCB, when the action space is large (K = 100), as shown in Figures 1d, 1e, and 1f. In the setting where the number of arms is small (K = 10), however, the non-randomized algorithm (D-LinUCB) performs better than the two randomized algorithms once relatively high-dimensional features are considered (Figures 1b and 1c), while the three non-stationary algorithms show almost identical performance when the features are low-dimensional (Figure 1a).
Second, D-RandLinUCB works better than D-LinTS in all scenarios. Although D-LinTS enjoys oracle efficiency in the computational aspect, it has a slightly worse regret bound than D-RandLinUCB, and this difference in theoretical guarantees can be observed empirically in these results. The poor performance of D-LinUCB in large action spaces is due to its very large confidence bound, so the issue of conservatism can be partially addressed by randomizing the confidence level, as in D-RandLinUCB.
Lastly, an interesting observation in Figure 1f is that the non-randomized algorithm D-LinUCB recovers a reliable estimator after the change point faster than the other two competitors in the initial phase. It takes longer for the randomized algorithms to recover their performance, because the agent cannot distinguish which factor causes the non-stationarity it is experiencing: the randomness inherent in the algorithm or the environmental change. However, the randomized algorithms eventually beat the non-randomized competitor in the final phase.
CONCLUSION
For non-stationary linear bandits, we propose two randomized algorithms, Discounted Randomized LinUCB and Discounted Linear Thompson Sampling which are the first of their kind by replacing optimism with a simple randomization in UCB-type algorithms, or by adding the random perturbations to estimates, respectively. We analyzed their dynamic regret bounds and evaluated their empirical performance in a simulation study.
The existence of a randomized algorithm that enjoys both theoretical optimality and oracle efficiency is still open in stationary and non-stationary stochastic linear bandits.
Non-Contact Wi-Fi Sensing of Respiration Rate for Older Adults in Care: A Validity and Repeatability Study
In recent years, considerable effort has been directed towards non-contact Wi-Fi sensing applications such as fall detection and vital sign monitoring. For emerging technologies in healthcare, it is essential to assess the validity and repeatability of new measurement instruments before real-world implementation. However, the existing literature has not addressed the clinical validity and repeatability of respiration rate measurements obtained from Wi-Fi CSI. This study draws on medical instrumentation statistics to address this research gap by investigating the validity and repeatability of Wi-Fi sensing in measuring respiratory rates. For this purpose, we first implement a non-contact Wi-Fi Channel State Information respiration rate sensing system using off-the-shelf ESP32 devices and signal processing methods. Then, we evaluate the validity of the Wi-Fi sensor's respiration rate measurement against the respiration belt NUL-236 as a ground truth. The Bland-Altman method provided homoscedastic results across the standard range of respiration rates of older adults, [12, 28] BPM, achieving limits of agreement of [−1.29, 1.06] BPM, allowing us to analyze measurement repeatability at a single point. Hence, we assessed the measurement repeatability at 14 BPM using the spread of the data and the implications of random error in the measurements. The Wi-Fi CSI measurements dataset and corresponding belt data were made available for the validity and repeatability experiments. By providing appropriate measurement validity and repeatability metrics, care professionals can make informed decisions about the acceptability and generality of non-contact Wi-Fi sensing systems in measuring respiratory rate.
I. INTRODUCTION
In most countries, including the United Kingdom, medical and public health advancements have contributed to an increase in life expectancy and the quality of life over the past few decades [1], [2]. However, the population aged 65 and over suffers the highest morbidity and mortality rates due to geriatric disorders, such as illness and functional decline, as well as injury-related conditions [3]. In response to the ageing population's needs, there is an increasing demand for health services and monitoring solutions.
With the use of emerging technologies, we are able to move away from traditional hospital settings and provide patient-centric care. The ability to remotely monitor patients would have a positive impact on older people's quality of life, as they prefer to age in place and remain independent [4]. Additionally, the resulting continuity in patient records can provide higher resolution data in terms of health status, thereby helping to detect and predict health disorders [5]. It has been demonstrated that continuous monitoring of patient health status in hospital wards is more effective than manual assessments during nursing rounds in identifying deteriorating patients [6]. Incorporating sensing technology into everyday objects and the environment in order to monitor the health status of older individuals can alleviate the strain placed on the health system. A sustainable alternative to some aspects of traditional care could exist as a result of connected systems and the Internet of Things.

An individual's physiological parameters include body temperature, blood pressure, heart rate (HR) and respiration rate (RR), which provide a general picture of their health status [7]. An investigation has found that RR and HR changes are more reliable indicators of cardiopulmonary arrest than any other vital sign [8]. In addition, changes in RR and HR correlate with illnesses such as sleep disorders, cardiovascular disease, neurodegenerative disease, fall risk and mental stress.

Traditionally, vital signs have been monitored using wearable sensors, but these are not convenient for long-term monitoring. The current gold standard device for RR monitoring is the Capnograph, which uses a nasal probe or a respiratory mask on the patient [9]. RR can also be determined by monitoring thoracic exhalations using a sensing belt [10]. The use of these solutions may be considered obtrusive and restrictive for older persons; contact-free vital sign monitoring solutions are therefore preferred for continuous long-term care.

The validity and repeatability of non-wearable sensors play a critical role in their long-term usability. If an instrument contains significant errors, it is unlikely to serve its purpose or provide accurate data for making important decisions. It is therefore imperative that a health care professional determine the amount of error that is acceptable between an intrusive but accurate device versus a non-intrusive but less accurate device that will not interfere with their care decision-making for older patients. Previous studies have evaluated the validity of wearable sensors for RR measurements [11], [12]; however, no research has yet been conducted to ascertain the validity and repeatability of Wi-Fi sensing as a method of RR measurement. Previous studies have focused on system implementation rather than on measurement assessment for clinical and care use [13], [14], [15]. This study addresses this research gap by developing methods to investigate the validity and repeatability of Wi-Fi sensing measurements for respiration rate estimation in older adults.
This study aims to contribute to the growing field of Wi-Fi sensing research by introducing an analytical framework and experimental measurement methodology for assessing the viability of Wi-Fi-based sensing as an instrument for respiratory monitoring in care.The main contributions of this study are as follows: • The experimental design and evaluation of the validity of non-contact Wi-Fi Channel State Information (CSI) sensing using a low-cost ESP32 Microcontroller Unit (MCU).This was done for the resting RR range for older adults against a ground-truth respiration belt logger NUL-236 by the Bland-Altman method.The validity of respiration rate measurements has been previously evaluated for wearable devices, but no work has addressed this for non-contact Wi-Fi sensing, which is vital for assessing its measurement robustness for clinical adaptation [11], [12].
• The development of an experimental evaluation technique to assess the repeatability of Wi-Fi sensing-based RR measurements accordingly.Since the Bland-Altman method produced homoscedastic results, this enables the examination of the repeatability of measurements at a single point in the respiration range of 14 BPM.
Although previous studies have addressed accuracy metrics [13], [14], [16], [17], there has been no detailed examination of the repeatability of RR measurements using Wi-Fi CSI sensing to date.
For the aforementioned experiments, a comprehensive dataset of Wi-Fi CSI measurements paired with the corresponding belt data was collected and made available on IEEE Dataport for research reproducibility purposes [18].The accompanying signal processing code will be made available in the repository upon the completion of the project.
II. RELATED WORKS A. UNOBTRUSIVE SENSING
It is the purpose of unobtrusive vital sign monitoring to obtain long-term data collection without encumbering users with wearables by integrating sensors into everyday environments and objects [19].Hence, vital signs can be continuously measured or over time without interfering with the patients' daily lives, which enables the detection of physiological anomalies and data-informed prediction of disorders [5].
In terms of unobtrusive sensing, we only discuss Radio Frequency (RF)-based sensing methods for RR for brevity.RF-based techniques consist primarily of radar and Wi-Fi sensing implementations.
The types of radars that are used for vital sign measurements are Doppler Continuous Wave (CW), Frequency Modulated Continuous Wave (FMCW), and Impulse Radio Ultra Wide Band (UWB-IR).CW radar sensing methods are dependent on cardiorespiratory displacement.They are based on the Doppler frequency shift incurred due to target movement between the transmitted and received signal of a radar transceiver [20].Additionally, a Doppler-based sleep monitoring system was proposed and evaluated in [21] for sleep stage classification based on vital signs and on-bed movements.
Unlike CW radars, which measure only the Doppler frequency at the target, FMCW also measures the range using chirp signals.In [22], low-power FMCW sweeping from 5.46 GHz to 7.25 GHz they used every 2.5 milliseconds to extract vitals through walls and multi-person scenarios.FMCW has a lower resolution for relative motion than CW-Doppler.Hence, since CW and FMCW utilise the same hardware, [23], [24], [25] use a hybrid approach to achieve an absolute distance accuracy of less than 4 cm and millimeter-scale accuracy for relative motion at the 5.8 GHz ISM band.
Alternatively, the UWB-IR measures the target range by transmitting short pulses, computing the time delays in the received pulse amplitudes, and extracting vital signs using distance information.It has the advantage of having a smaller size and lower power consumption than the CW Doppler radar [26].In [27], a method based on autocorrelation was used to extract RR and HR periodic waveforms, as well as subject location.Vitals signs extraction in the presence of random body movement was studied in [28], using active motion cancellation by direct signal fusion from two RF sensors.
Although radar-based methods are effective and precise for Line-of-Sight (LoS) detection and even through walls [22], they require expensive customized hardware which prevents wide-scale deployment [13], [29].
B. WI-FI SENSING
The most widely adopted wireless access technology globally is Wi-Fi, in terms of both devices and infrastructure. This proliferation has been enabled by the widespread use of Wi-Fi chipsets in laptops and smartphones, and by the ease of configuration and low maintenance of Wi-Fi [30]. This has led to the ubiquity of Wi-Fi in homes, offices, and public environments. Furthermore, Wi-Fi's use of unlicensed spectrum bands has allowed wide-ranging IoT devices and solutions to emerge [30].
Earlier Wi-Fi sensing studies used the Received Signal Strength Indicator (RSSI), which measures the total received power at the receiver.It provides coarse-grained information and is bounded by the sum of the power of each element of the CSI matrix.RSSI has been used for coarse gesture recognition [41] and for RR estimation [42], and also presents a module for sleep apnoea detection.
However, RSSI measurements fluctuate because they are sensitive to environmental noise [39].The patient must be close to the LoS of the transceivers to achieve a good estimate.Thus, limiting vital signs monitoring in practical applications.Meanwhile, CSI allows for the examination of each subcarrier's amplitude and phase information separately, allowing for a finer-grained and wider sensing area [39].
1) WI-FI SENSING IN CARE
An activity and fall recognition system using CSI amplitude was proposed in [33], which differentiates between sitting, standing, walking and falling, and could be utilized in ambient assisted living as an initial phase to analyze the behaviour of the older people.For older adults in independent living, approximately 50% of their falls occur at home; hence, RT-fall [34] implements real-time activity segmentation using CSI phase difference to detect fall events starting from standing or walking positions.Since gait is an effective biomarker in assessing functional decline, GaitWay [35] was designed to unobtrusively capture gait speeds while walking using CSI, extract gait features, as well as recognize the gait of different users.Furthermore, the CSI phase difference between antennas was used to detect nocturnal seizures in [43] to support patients with epilepsy and caregivers.
2) WI-FI SENSING FOR VITAL SIGN MONITORING
A model of respiration detection using Wi-Fi CSI was introduced in [29] leveraging the Fresnel Zone model and Wi-Fi radio propagation, which has informed the RR extraction performed in our work.Micro-movements can be extracted from Wi-Fi CSI signals, including those induced by respiratory and cardiac activities.Together with macro-movements such as falls and rollovers during sleep, they help provide more information about an individual's health status.For instance, Liu et al. [39] used CSI amplitude information to extract RR during sleep, as well as sleeping posture and rollover events during sleep.Multi-person respiration monitoring during sleep has been implemented in [44] on three persons, suggesting that a respiration state analysis would be necessary to map measurements to each target subject, assuming that each subject follows a different respiratory pattern.This has then been achieved in [45] by modeling CSI-based multi-person respiration sensing as a blind-source separation problem using multiple antennas.A sleep-stage recognition program was implemented for in-home sleep monitoring using respiratory data in [15].In addition to RR, body movements during sleep were used in [14] for sleep monitoring using deep learning and prior knowledge of sleep medicine.Indeed, combining vitals with movement information enables advanced health analyses previously unavailable for unobtrusive modalities.
Beyond the vital signal extraction mechanisms, the effect of practical conditions on the quality of the extracted signal is a crucial domain to examine. For example, in [46], RR and HR were extracted during sleep while evaluating the effect of transmitter-to-receiver distance, sleeping posture, obstacles, and packet transmission rate. Furthermore, in [47], the CSI phase difference between two antennas was exploited to track RR and HR, and the effects of Non-LoS tracking, transmitter-to-receiver distance, and packet transmission rate were analyzed. Signal processing techniques can also be exploited to improve the quality of the extracted vital signs. For instance, the CSI phase difference was used in [32] with directional antennas, where the most informative subcarriers were fused to obtain HR estimates and improve the signal quality. Expanding further on the aspect of signal fusion, the complementarity of the CSI phase and amplitude was exploited in the work by Zeng et al. [48] to achieve full area coverage without leveraging multiple subcarriers. Published studies, however, are yet to address the robustness of measurements across RR ranges and different breathing depths.
Diverse RRs naturally result in inversely proportional breathing depths when measured on the same subject.This is attributed to fixed physiology and individual respiratory mechanics.In the context of respiratory motion, the anteroposterior displacement of the chest exhibits a range of 4.2 to 5.4 mm, whilst the mediolateral dimension demonstrates variability between 0.6 and 1.1 mm during conventional inhalation and exhalation procedures [13], [49].The motion due to the chest displacement gives rise to variations in the dynamic path of the CSI.The ability of a tool to accurately measure respiration regardless of the rate and the corresponding depth is a mark of its universality and is crucial for the clinical setting.There are undoubtedly various concerns to address within Wi-Fi-based vital signs extraction; however, none of the previous works tried to address the validation of the non-contact instrument as a medical device which we aim to consider in this work.
3) WI-FI CSI SENSING MEASUREMENT DEVICES AND TOOLS
Even though CSI has been included since IEEE 802.11n [50], access to CSI directly from Wi-Fi chipsets is limited to specific hardware and software tools. For example, the first CSI collection tool is the Linux 802.11n CSI tool [51], which is based on an Intel 5300 Network Interface Card (NIC). However, it only collects up to 30 subcarriers and requires firmware modifications [50]. On the other hand, the Atheros CSI tool [52] works with Atheros 802.11 NICs and obtains all 56 subcarriers for a 20 MHz bandwidth without tampering with the firmware. Nonetheless, the aforementioned NIC-based solutions do not support standalone operation and remain impractical for large-scale deployment [53].
The Nexmon CSI extractor utilizes the Broadcom chipset in the Nexmon 5 Android smartphone to obtain CSI data from all the 56 subcarriers in the 20 MHz bandwidth as a standalone solution [54].The Nexmon-based solution requires modification and may interfere with the warranty of the device.Alternatively, the ESP32 CSI toolkit [55] and the Wi-ESP tool [53] are based on the ESP32 MCUs and exhibit the least hardware-software dependency [50], [53].They provide a flexible, low-cost Wi-Fi sensing solution that enables large-scale deployment [56].
The ESP32 CSI sensing capabilities have been previously explored for applications such as crowd-counting and occupancy monitoring [57], [58], human presence and fall detection [56], as well as human activity recognition [55].However, to the best of our knowledge, no study has been conducted to date that has implemented an ESP32-based respiratory rate measurement instrument nor evaluated its measurements to the acceptability of its use as a medical device.Thus, we aim to address this gap in research by developing a Wi-Fi CSI-based RR sensing system using commercial off-the-shelf (COTS) ESP32 MCUs and investigating its measurement validity and repeatability in the context of the care of older people.
C. VALIDATION OF NEW MEDICAL INSTRUMENTS
Medical laboratories are often required to assess the degree of agreement between two measurement techniques [59].In order to validate a new technology for application in clinical medicine, it needs to be compared with older and more established methods [60].We may wish to determine whether a new inexpensive and unobtrusive technique produces results that are comparable to a well-established method with sufficient agreement for clinical purposes [61].The Bland and Altman method is essential for method comparison studies with the aim of validating new medical devices [61].Using this approach, measurement instruments that capture continuous variables measuring the same construct can be assessed [62].
For the Bland and Altman method, the statistical limits of agreement between the two measurement methods are constructed based on the mean and standard deviation of the difference in measurements. The limits of agreement are defined as [d̄ − 1.96s, d̄ + 1.96s], where d̄ is the bias or mean difference, and s is the standard deviation of the differences. Given enough samples, if the error distribution can be determined to be normal, 95% of the differences will lie between those limits of agreement [61]. Normality of the distribution of differences is a prerequisite for this analysis [61].
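As a rough illustration of how these limits are computed (not the exact analysis pipeline of this study, which uses the Pingouin package described later), the following Python sketch derives the bias and limits of agreement from paired readings; the array names are placeholders.

```python
import numpy as np

def bland_altman_limits(method_a, method_b):
    """Bias and 95% limits of agreement between two paired measurement methods.

    method_a, method_b: arrays of paired readings (e.g. Wi-Fi RR vs. belt RR in BPM).
    """
    diff = np.asarray(method_a, float) - np.asarray(method_b, float)
    bias = diff.mean()            # mean difference d-bar
    s = diff.std(ddof=1)          # standard deviation of the differences
    return bias, (bias - 1.96 * s, bias + 1.96 * s)

# Example usage (hypothetical arrays of paired respiration-rate readings):
# bias, loa = bland_altman_limits(wifi_bpm, belt_bpm)
```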
Previous studies have evaluated the validity of wearable sensors for RR measurements [11], [12], where several devices were assessed for their validity, and the most reliable device was adopted for extended investigations.Furthermore, medical staff evaluated a non-intrusive manual device, such as a stethoscope, for its measurement validity during assessment [63].Nevertheless, no work has been conducted to date that addresses the clinical validity of noncontact Wi-Fi sensing as an RR measurement device, which is the gap we aim to target in this study.
D. REPEATABILITY OF MEASUREMENT INSTRUMENTS
The importance of investigating measurement errors from random and non-random sources lies in determining the appropriateness of the measurement method and instrument for different contexts [64].A crucial aspect of the usability and the long-term implementation of non-wearable sensors is the measurement repeatability of the instrument.An instrument riddled by enormous random errors is most likely not fit for its purpose, let alone be a suitable variable for making important decisions.For instance, in real patient scenarios, the risk of obtaining an erroneous estimate of RR is high because it is related to the patient's health condition, and an instrument with reliable measurements is required [64].
The repeatability of the Wi-Fi sensors can be determined by measuring the spread of the data around the sample mean, calculating the standard deviation [61], [64], and obtaining the confidence intervals for repeated measurements.The more consistent the repeated measurement results are, the higher the repeatability of the measurement process.In a repeatability study, variations in measurements taken on the same subject can be attributed only to errors in the measurement process [64].To quantify the repeatability of the measurements, the experimental conditions of the study must remain constant using the same measurement method [64].
Previous studies on non-contact RR sensing have assessed the repeatability of acoustic-based sensors [65] and polymer humidity sensors [66].However, to the best of our knowledge, no study has examined the repeatability of non-contact Wi-Fi sensing for RR monitoring which we address in this work.
A. HARDWARE DESCRIPTION
We used two ESP32-DevKitC-VE embedded devices in our work as Wi-Fi sensors.One ESP is programmed to act as the Access Point or Transmitter (TX), and the other is set as the Receiver (RX).The development kit supports the 802.11n protocol and allows access to CSI data without hardware tampering [53].
The data are transmitted and captured with the built-in omnidirectional PCB antenna in the development kit, where the transmission power is 20 dBm (100mW) at 2.4 GHz abiding by the IEEE and ETSI standards.The data were sent from the RX to a PC through a universal serial bus (USB) cable to a USB to universal asynchronous receiver-transmitter (UART) bridge with a maximum transmit rate of 3 Mbps.
We used an additional measurement device as a ground-truth signal for respiration: a scientific grade Neulog Respiration Monitor Belt logger sensor (NUL-236).A belt logger wrapped around the chest and measured the air pressure in the belt, which varied with the subject's breathing.
To minimize human error while maintaining a constant RR throughout the experiment, a metronome application was used as a guide for respiratory movements.The metronome guided the participant to inhale and exhale with alternate beats, where the beat rate of the metronome was set to double the intended RR.
B. DATA ACQUISITION 1) SOFTWARE TOOLS AND SETTINGS
We use the esp32-CSI-tool to obtain CSI data using the IEEE 802.11n 2.4 GHz Wi-Fi communication standard [55]. The USB baud rate was set to 1843200 bits per second, and the wireless packets were transmitted at 120 packets/s (PPS). Subsequently, the Wi-Fi CSI data were collected by a macOS laptop, time-stamped with UNIX epoch time, and saved in a .CSV file format. We processed the saved complex CSI data once the data for the experiment were acquired.

A software application is provided as part of the NUL-236 respiration belt logger, which facilitates the visualization and collection of the respiration waveform. There is no standard unit of measurement for the waveform data obtained from the sensor, and it can be rescaled. Samples captured by the NUL-236 were labeled against time and saved in a .CSV format, with a sampling rate of 100 samples/s.
2) EXPERIMENTAL DESIGN
Essentially, an experiment consists of a series of measurements aimed at testing the relationships between several variables.With respect to our particular study, we aimed to investigate the relationship quality between Wi-Fi CSI and the micro-motion of the chest and abdomen as a result of breathing.The validity of a measurement device is its ability to demonstrate that the experimental process successfully measures the quantity with little to no systematic error.Furthermore, a reliable instrument must minimize random error in its measurements by providing consistent results of repeated readings.
We designed an experimental procedure to measure the validity and repeatability of RR measurements using the Wi-Fi CSI amplitude.A test space of 3 m × 3 m in a testing environment closely replicating a standard care living room setting for individual older persons monitoring where such a device would be most beneficial.The TX and the RX are placed 3 m apart at a height of 0.85 m perpendicular to the ground, with a LoS distance of the TX-RX crossing the middle of the test space.The participant was seated approximately 0.9 m away from the middle of the LoS of the TX-RX pair.The labeled setup is shown in Fig. 1 resembling the setup illustrated in Fig. 2. The TX and RX were carefully placed in the test space based on the study's requirements.The test subject was advised to remain stationary during the testing period to control the variable of motion and isolate breathing chest movements from the effect of motion artifacts for the purpose of the datasets.
In this study, we collected two datasets, one for validity and one for repeatability [18]. To test the validity of the Wi-Fi CSI RR sensing system, we performed the experiment 17 times with RRs ranging from 12 to 28 breaths per minute (BPM). Although our system captures RRs ranging from [9, 37] BPM, as expected for humans, [12, 28] BPM is considered the expected resting RR range for older adults, as described in [67], including Tachypnea, and hence the choice of RR range in this study. The duration of each data capture experiment was 120 seconds. Breathing slower than 12 BPM is indicative of Bradypnea, while breathing faster than 24 BPM indicates Tachypnea. Sample durations of 30, 60 and 120 seconds were assessed to evaluate the effect of window width. This was done similarly to the work in [68], where 30 seconds is considered common in clinical practice, 60 seconds the ideal counting duration, and 120 seconds a larger sample. For repeatability, we evaluated the consistency of our measurements by repeating the experiment with the RR set to 14 BPM. It was repeated 30 times with all factors controlled for, since n = 30 satisfies the large-enough-sample condition. The accuracy of the instrument was also assessed based on the repeatability experiment data, which is another form of validity.
C. SIGNAL PRE-PROCESSING AND RR EXTRACTION
Python 3 was used to implement the pre-processing and RR extraction from the raw CSI data in this study. The signal processing workflow is illustrated in Fig. 3. First, we obtained the CSI amplitude data from the complex CSI, as shown in Fig. 3 (a), after extracting them from the time-stamped .CSV file. Time indexing is essential in RR tracking applications. Unfortunately, due to packet loss, transmission delays, and other processing delays, the received packets are not evenly distributed over time. Hence, we interpolate and downsample the signal from 120 PPS to a rate of 40 PPS, using the Fourier method, as shown in Fig. 3 (b). Resampling and interpolation help in outlier removal by reducing the spurious effects arising from hardware-introduced errors. This step also distributes the incoming samples evenly over time and reduces the computational complexity, preparing the signal for the Discrete Wavelet Transform (DWT).
We use a DWT-based filtering technique in contrast to the Fourier-based finite impulse response filters, in which the latter would require additional signal conditioning.Signal conditioning techniques such as the Hampel filter [69], Savitsky-Golay filter [44], and median or mean filters are used to remove the noise.To prevent hardware or environmental noise from interfering with the performance of the Fourier-based filter, it is necessary to complete this step before implementing the filter.However, this signal conditioning may distort the signal [70].On the other hand, this conditioning is not required before applying the DWT introducing fewer distortions to the signal [70].Furthermore, wavelet analysis is used on the time-series of Wi-Fi sensor data; it is used for data which is non-stationary in nature, and it preserves any sharp transitions in the signal better than other types of filters [71].
The down-sampled CSI data are transformed to the wavelet domain using the DWT with a 'db4' wavelet, as it is the most appropriate wavelet for extracting RR signals, further reducing the effect of outliers [72]. We apply a 7-level decomposition and maintain the sixth and seventh detail coefficients while nullifying the approximation coefficients and the lower-level detail coefficients. This wavelet filtering technique reconstructs only frequencies in [0.15625, 0.625] Hz, corresponding to [9.375, 37.5] BPM. This range includes the typical RR for older adults of [12, 28] BPM, which we used to evaluate the sensing system. The reconstructed signal containing the frequencies of interest is shown in Fig. 4 (c).
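A minimal sketch of this resampling and wavelet band-pass step is shown below, assuming PyWavelets and SciPy are available; the function and variable names are ours, and the block is an illustration of the described technique rather than the authors' code. At a 40 Hz sampling rate, detail levels 6 and 7 of a 7-level decomposition cover roughly [0.156, 0.625] Hz, matching the band stated above.

```python
import numpy as np
import pywt
from scipy.signal import resample

def respiration_band_filter(csi_amplitude, duration_s, wavelet="db4", level=7):
    """Resample a CSI amplitude stream to 40 Hz and keep only the respiration band."""
    fs_out = 40
    x = resample(csi_amplitude, int(duration_s * fs_out))   # Fourier-method resampling
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # coeffs = [cA7, cD7, cD6, cD5, ..., cD1]; zero everything except cD7 and cD6
    kept = [np.zeros_like(c) for c in coeffs]
    kept[1], kept[2] = coeffs[1], coeffs[2]
    return pywt.waverec(kept, wavelet)[: len(x)]
```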
Principal Component Analysis (PCA) helps separate respiratory body movements from noise, as movement causes correlated effects across subcarriers.Subcarriers experiencing the most variance due to movement are considered the most sensitive to movement, hence the variance is preserved using PCA [73].Principal Components (PCs) capture the primary features of respiration movement data, suppress noise, and reduce dimensionality [69].Furthermore, since they preserve only the correlated data due to variations in the dynamic path of the CSI, they ensure the generality of our system in measuring RR independent of the shape and size of the subject.Using this method ensures that we can recover CSI change patterns independent of phase offset potentially introduced by hardware and software errors.The first PC captures highly correlated noise due to hardware imperfections; therefore, we used the second PC because it contains more of the respiration waveform without the noise corresponding to the internal state changes in the hardware [74], [75].
The combination of the DWT filter and PCA ensures that regular movements such as walking, tremors, and restless leg syndrome are not picked up by our system, as they are not sufficiently regular in periodicity or lie outside the frequency range. The extracted respiration signal, in comparison to the respiration belt, can be seen in Fig. 4 (d) and Fig. 4 (e). To extract the final RR estimate, we obtained the peak of the power spectral density of the second PC, as demonstrated in Fig. 4 (f). The data pre-processing and RR extraction are illustrated in Fig. 4 and are implemented for 20 BPM and a 120-second analysis window. A zoomed-in comparison between the Wi-Fi CSI-obtained respiration and the belt data is displayed in Fig. 5, where we can see that the peaks from both modalities coincide.
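The final two steps, extracting the second principal component across subcarriers and reading the RR off the peak of its power spectral density, might look roughly like the following sketch; it is an assumption-laden illustration using scikit-learn and SciPy rather than the authors' exact code.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.signal import periodogram

def estimate_rr_bpm(filtered_csi, fs=40):
    """Estimate respiration rate from wavelet-filtered CSI amplitudes.

    filtered_csi: array of shape (n_samples, n_subcarriers) after band-pass filtering.
    Returns the RR in breaths per minute from the PSD peak of the second PC.
    """
    pcs = PCA(n_components=2).fit_transform(filtered_csi)
    second_pc = pcs[:, 1]                 # first PC tends to carry hardware noise
    freqs, psd = periodogram(second_pc, fs=fs)
    band = (freqs >= 0.15625) & (freqs <= 0.625)
    rr_hz = freqs[band][np.argmax(psd[band])]
    return rr_hz * 60.0
```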
A. VALIDITY 1) AGREEMENT: A METHOD-COMPARISON STUDY
In Fig. 6(a), we can note that 95% of the differences in measurement between the Wi-Fi sensor and the respiration belt for a 30-seconds sample duration are accounted for with limits of agreement ranging between [−6.05, 4.66] BPM with a bias of −0.70 BPM between the two instruments.Whereas in Fig. 6(b) we can note that 95% of differences in measurement between the Wi-Fi sensor and the respiration belt for a 60-seconds sample duration are observed within limits of agreement ranging between [−1.29, 1.06] BPM with a bias of −0.11 BPM between the two instruments.Finally, for the 120-second time window in Fig. 6(c) we can note that 95% of differences in measurement between the Wi-Fi sensor and the respiration belt are accounted for with limits of agreement ranging between [−0.27, 0.21] BPM with a bias of −0.03 BPM between the two instruments.
In addition, we find from the Bland-Altman plot that there is no proportional bias; therefore, the scatter of the plot is homoscedastic. Homoscedasticity was observed because the bias did not vary with increasing mean values, nor did the variance of the scatter change with the mean values. Consequently, we can apply absolute statistics to obtain the instrument's repeatability from a single point along the expected respiratory scale for older adults of [12, 28] BPM. Furthermore, since the RR value obtained is consistent with the behaviour predicted by theory and with that measured by the respiration belt, this agreement demonstrates construct validity [62]. The Bland-Altman plots and calculations obtained in this study used the Pingouin package in Python 3, which is based on the Pandas and NumPy libraries, specifically the pingouin.plot_blandaltman() function [76].
2) ACCURACY
Accuracy metrics supporting the results of the validity of the Wi-Fi sensor for RR measurement in older adults are presented in this section. In Table 1, the results of the accuracy and error metrics for each sample duration are presented. The accuracy results were obtained from the dataset of 30 repeated experiments at 14 BPM, since that dataset is larger. A 120-second sample duration results in a smaller Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) and more accurate results than a 60-second or 30-second sample duration. With the inclusion of more data points in the analysis window, the accuracy of the measurements increased. The error cumulative distribution function (CDF) is calculated using the Wi-Fi-obtained data as observations and the belt data as ground truth, while the error is smoothed with a Gaussian filter with σ = 1. In the graph of the error CDF in Fig. 7, we can see that approximately 80%, 72% and 68% of errors lie below 1 BPM for 120-second, 60-second, and 30-second durations, respectively. This result is in line with our expectation, since a longer sample duration captures smaller magnitudes than shorter sample durations. By providing the accuracy and error metrics per sample duration for the Wi-Fi sensor, we can assess the validity of the measurements.
B. REPEATABILITY
The repeatability of Wi-Fi as an RR measurement instrument was evaluated by making 30 measurements for an RR of 14 BPM, each with a duration of 120 seconds. The choice of 14 BPM was subjective, based on the participant's most comfortable breathing rate. Given the homoscedasticity of the previously obtained Bland-Altman plot, it is appropriate to select a single RR point for analysis. Hence, any point within the range of [12, 28] BPM is suitable for analyzing the repeatability statistics.
Table 2 displays the repeatability results for sample durations of 30, 60 and 120 seconds, calculated using the standard deviation of the Wi-Fi CSI RR measurements and the associated confidence interval at a 95% confidence level. We can note that the confidence interval width for the 60-second sample duration is double that of the 120-second sample duration. These results are expected because the resolution of the Fourier transform increases as the duration of the sample window increases, where (1/T_s) × 60 is the resolution in BPM [22]. Although the 30-second sample duration obtains a smaller interval width than the 60-second duration, the confidence interval does not contain the expected RR value of 14 BPM; hence, the sample mean does not equal 14 BPM at the 0.05 level of significance. We conclude that a wider sample duration yields more repeatable results, attributable to a smaller standard deviation and narrower confidence intervals.
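For the repeatability figures in Table 2, a sketch of the underlying computation (sample standard deviation plus a 95% confidence interval for the mean of the 30 repeated readings) could look as follows; the t-based interval is our assumption about the exact interval formula used.

```python
import numpy as np
from scipy import stats

def repeatability_stats(readings, confidence=0.95):
    """Spread and confidence interval for repeated RR measurements at one set rate."""
    readings = np.asarray(readings, dtype=float)
    mean = readings.mean()
    sd = readings.std(ddof=1)                       # spread around the sample mean
    sem = sd / np.sqrt(len(readings))
    half_width = stats.t.ppf(0.5 + confidence / 2, df=len(readings) - 1) * sem
    return mean, sd, (mean - half_width, mean + half_width)
```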
V. DISCUSSION
In general, an experiment is valid if it measures the quantities it intends to measure.In previous studies, quality evaluations of Wi-Fi respiration sensors have mostly focused on comparing it to a ground-truth device and evaluating the correlation [77], [78].However, correlation is limited to investigating the strength of the linear relationship between two variables.Correlation is not regarded as a measure of agreement; it is a measure of association [61] and cannot be used to evaluate the interchangeability and validity of the device, which is necessary for clinical evaluation.
Our objective is to determine whether the RF sensor and the respiration belt can be used interchangeably if the readings of the two devices agree within acceptable limits.This type of comparison is frequently conducted for medical instruments when a new measurement method is less precise but less invasive or more affordable than the ground-truth or gold standard [61].This is the first work of its kind to determine the limits of agreement between a Wi-Fi sensor and a respiration belt to assess how the two devices agree on measurements.To provide markers for evaluating the suitability and the generality of implementing Wi-Fi sensing in the context of health care, experiments were conducted for the normal RR range of older adults.
We apply the Bland-Altman method to different sampling durations of 30-, 60-and 120-seconds to examine the effect of window width on the validity of the Wi-Fi sensor.It is evident from Fig. 5 that the Limits of Agreement as well as the bias decrease as the sample duration for the time window increases, indicating an improvement in validity and hence the reliability of the Wi-Fi sensor's measurements.An acceptable range for the limits of agreement must be determined a priori by the clinical or care staff before implementation, which could depend on patient risk and health conditions.Typically, inter-observer variability of respiration in a clinical setting may account for a difference of 2-6 BPM [68].
For the set of measurements taken in a lab setting, the performance was on par with the wearable and contact sensors discussed in [11] and [12] for the 30-second time window of the Wi-Fi sensor, and exhibited better performance when 60- and 120-second windows were used. In [11], the narrowest limits of agreement obtained are [−5.6, 6.4] BPM with a bias of 0.4 BPM, using a mattress-embedded sensor against thoracic impedance pneumography. Meanwhile, in [12] the best agreement was obtained using a chest band sensor, with limits of [−9.99, 6.8] BPM and a bias of −1.60 BPM against a cardiac test face mask. In contrast, we obtain limits of agreement of [−1.29, 1.06] BPM with a bias of −0.11 BPM for the Wi-Fi sensing system against the NUL-236 respiration belt using a 1-minute analysis window. A summary of the comparison of the results of our device against some of the best-performing devices mentioned in [11] and [12] is listed in Table 3. While our study's findings are confined to controlled laboratory conditions, they exhibit significant potential.
The Bland-Altman method obtains the limits of agreement, but it cannot determine whether these limits are acceptable.The acceptability of the limits of agreement between these two devices must be defined a priori by a clinical or a professional, with the health risk of older patients in mind.For instance, the limits of acceptability can be predefined as ±3 BPM, as in [11].If the limits of agreement are found to be clinically insignificant, we may say that the two devices are interchangeable [59].Interchangeability demonstrates the instrument's validity and acceptability according to predefined criteria.Although the results of this study cannot evaluate device interchangeability, the validity and agreement are assessed for the range of standard RR of older adults, providing an appropriate analysis for Wi-Fi's use as a medical device for RR measurement for older people.
The second form of validity concerns the accuracy of the device.We applied the accuracy and error metrics to the repeated RR values of the experiments.These metrics evaluate the closeness of the measured value to the ground-truth value and hence can be mostly attributed to systematic errors.Accuracy was also evaluated for varying window widths and showed improved metrics with increasing sample duration.Presenting accuracy metrics is essential as systematic error tolerance must be determined before implementing the Wi-Fi sensor for RR estimation, and the sensor must be calibrated to an acceptable degree fit for use in the care of older persons per patient risk and health condition.
Since the Bland-Altman plot is homoscedastic, the repeatability of the plot can be determined using absolute statistics.The standard deviation and confidence intervals characterize the spread of the measurements around the mean value and uncertainty in the Wi-Fi sensing device.As expected, the uncertainty around the mean value decreases with increasing sample duration, as does the confidence interval width.The RR inversely influences the breathing depth, and due to the homoscedasticity of the plot, the model is generalizable across RRs in range and their corresponding breathing depths.The precision of the Wi-Fi sensor for RR estimation informs clinicians and care professionals regarding the degree of random errors present in the sensor.Prior to implementing a monitoring system, a random error tolerance assessment must be similarly conducted for repeatability on a wider participant pool because it can affect important healthcare decisions.
A. CLINICAL IMPACT
Nurses usually manually assess vital signs during ward rounds, a situation in which the monitoring frequency is low and adverse events are often missed [79].Manual counting methods suffer from high inter-observer variability.Two simultaneous observers measured the RR and obtained considerably wide Limits of Agreement of [−4.2, 4.4] [80].However, continuous or automated monitoring devices would help capture adverse events in patients more effectively.Rubio et al. [12] presented a comparison of four wearable devices worn simultaneously against a ground-truth; however, ill patients found the sensors to be intrusive, which would affect patient adherence.Using an unobtrusive alternative, such as Wi-Fi sensing, provides a more acceptable alternative for older patients.
This validation method comparison study performed with a Wi-Fi sensor against an RR belt offers an evaluation and interpretation of the instrument agreement.Furthermore, the use of correct statistical methods to evaluate the accuracy of a measurement device will provide the end-user with a better understanding of the implications of adopting a new measurement methodology.In this case, the Bland-Altman method is discussed in the medical statistics and instrumentation literature as a metric for validity and interchangeability.Additionally, this study was one of the first to assess the clinical acceptability of using Wi-Fi sensing as a non-contact tool to measure RR in the context of care of older adults.
VI. CONCLUSION
This study aimed to conduct the first investigation on the validity and repeatability of Wi-Fi Channel State Information (CSI) sensing for respiratory rate measurements in the context of caring for older adults as a medical device.As a first step, we validated the performance of the ESP32 Wi-Fi sensor against the respiration belt logger NUL-236 as a ground-truth device within the typical respiratory range for older individuals, from 12 to 28 breaths per minute, using the Bland-Altman method thus confirming the generalizability of the model across the respiratory range.Furthermore, as the validity results are homoscedastic in nature, we can evaluate the repeatability of the measurements at a single point.These repeated measurements were also used to measure the precision and accuracy of the Wi-Fi sensor against the ground-truth respiration belt to determine the effects of random and systematic errors.The dataset of Wi-Fi CSI measurements, along with the corresponding belt data, was collected and made available for the validity and repeatability experiments.
The interchangeability of a medical device depends on its acceptance by clinical or care staff.Providing an appropriate appraisal of a measurement device would support professionals in adapting and deploying non-contact Wi-Fi sensing in older patients in care.This study addresses these points by providing validity and repeatability assessments to facilitate the interchangeability of Wi-Fi CSI sensing as a medical respiratory rate device for older adults.As this study was conducted in a controlled laboratory environment, data collection was limited to an independent living scenario with one quasi-stationary subject.Further investigations should be conducted to include a longitudinal multi-participant study informed by this work to better understand the interchangeability between Wi-Fi CSI respiration sensing and ground-truth devices.Future work will address different multi-sensor placements to explore optimal sensor locations for data fusion in the context of care of older adults, as well as abnormal respiration pattern detection during sleep to monitor health conditions and pathologies.
FIGURE 2. Independent living scenario: seated in a living area.
FIGURE 4. Signal processing results for 20 BPM and a 120-second sampling time.
FIGURE 5. Comparison of the respiratory waveform obtained using Wi-Fi CSI and the respiration belt for 20 BPM between [60, 80] seconds.
FIGURE 6. Validity: Bland-Altman plots for the Wi-Fi sensor and the respiration belt.
FIGURE 7. Error CDF for different sampling durations.
Impact of Rice Husk Biochar on Growth, Water Relations and Yield of Maize (Zea mays L.) under Drought Condition
The present experiment was conducted to study the impact of rice husk biochar on growth, water relations and yield of maize (BARI Hybrid Bhutta-9) under drought (60 and 40% of FC) conditions. Four doses of rice husk biochar @ 0, 5, 10 and 20 t/ha were applied as a soil amendment before sowing of seeds. Results revealed that drought stress reduced plant height, relative water content and grain yield of maize, but rice husk biochar at different doses improved the above-mentioned characters under drought conditions. Under 60% of FC, the highest plant height, leaf water content and yield were 196.67 cm, 79.86% and 89.75 g/plant, respectively, when biochar was applied @ 20 t/ha, whereas they were 173.33 cm, 78.32% and 84.57 g/plant, respectively, under 40% of FC when biochar was applied at the same dose. It may be concluded that rice husk biochar @ 20 t/ha showed the best result in promoting growth, water relation traits and yield of maize under drought conditions.
Introduction
Maize (Zea mays L.) is the third most important cereal crop in Bangladesh, after rice and wheat. It can be cultivated year round. The crop is high yielding, rich in nutrients and has diversified uses. The demand for maize in Bangladesh comes primarily from the commercial feed-processing industry; the poultry sector alone uses 80% of aggregate domestic maize production (excluding imports) (WPSA, 2013). Therefore, maize production needs to be increased. Growth and yield of maize are severely affected by drought (WPSA, 2013) in the winter season, when rainfall is low. Water absorption, imbibition and metabolic enzyme activation are hindered under drought, which reduces grain germination. Drought stress inhibits photosynthesis, changes chlorophyll contents and damages the photosynthetic apparatus (Escuredo et al., 1998), which ultimately reduces plant growth (Praba et al., 2009). Under drought stress, leaf cell expansion is reduced due to low turgor, which is controlled by the processes related to cellular water uptake and cell wall extension, resulting in decreased leaf area and weight. The yield and biochemical composition of a plant mainly depend on growth conditions, which are markedly affected by water availability (Paclik et al., 1996).
Therefore the experiment was undertaken to assess the effects of rice husk biochar on growth, water relations and yield of maize variety BARI Hybrid Bhutta-9 under drought conditions.
Location of the experiment
The pot experiment was conducted in the Department of Agronomy, Bangabandhu Sheikh Mujibur Rahman Agricultural University (BSMRAU), Gazipur from November 2016 to March 2017.
Experimental soil characteristics
The soil used in the experiment was sandy loam in texture, with organic carbon content of 0.60%, total N 0.05%, available P 0.08 mg/100 g, exchangeable K 0.33 cmolc kg-1, CEC 14.58 cmolc kg-1 dry soil, and pH 7.1.
Fertilizers application
The soil in each pot was fertilized uniformly with 2.0, 1.15 and 0.9 g of urea, triple super phosphate and muriate of potash, respectively, corresponding to 525, 250 and 200 kg/ha of urea, triple super phosphate and muriate of potash (BARC, 2012).
Treatments a) Biochar doses:
Rice husk biochar was mixed uniformly into the soil of each pot at 24.4, 48.8 and 97.6 g per pot, corresponding to rates of 5, 10 and 20 t/ha, respectively; no biochar was applied to the control pots. b) Drought: Three water regimes, i) control (80% of field capacity), ii) 60% of field capacity (FC) and iii) 40% of field capacity (FC), were maintained from the 4th leaf stage of the seedlings up to maturity.
Crop establishment and drought imposition
Ten bold seeds of BARI Hybrid Bhutta-9 were sown in each plastic pot containing about 11 kg of air-dried soil. Seven days after germination, two uniform and healthy plants were allowed to grow in each pot. Drought stress was induced by withholding water completely. During the drought treatment period, wilting symptoms were observed visually every day. The pots were weighed every other day to compensate for water loss by evapotranspiration, according to Choudhury et al. (2014). In the non-stress treatment, one litre of water per pot was applied on alternate days up to maturity.
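The paper does not spell out its watering arithmetic; gravimetric maintenance of a target fraction of field capacity is commonly done as in the sketch below, where the tare weight, field-capacity moisture fraction, and current pot weight are illustrative assumptions rather than the experiment's values.

```python
def water_to_add(pot_weight_now_kg, dry_soil_kg, tare_kg, fc_fraction, target_of_fc):
    """Gravimetric watering: bring a pot back to the target fraction of field capacity.

    fc_fraction  -- soil moisture at field capacity, as kg water per kg dry soil
    target_of_fc -- e.g. 0.6 for the 60%-of-FC treatment
    Returns litres (kg) of water to add; never negative.
    """
    target_weight = tare_kg + dry_soil_kg * (1.0 + fc_fraction * target_of_fc)
    return max(0.0, target_weight - pot_weight_now_kg)

# Illustrative numbers only (11 kg air-dried soil as in the experiment; the rest assumed).
print(water_to_add(pot_weight_now_kg=13.0, dry_soil_kg=11.0, tare_kg=0.5,
                   fc_fraction=0.30, target_of_fc=0.60))  # litres of water to add
```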
Experimental design and data recorded
The experiment was laid out in a Completely Randomized Design (CRD) with two factors and three replications. Data on plant height were recorded at vegetative (6th leaf, 10th leaf, 14th leaf) and reproductive (tasselling, cob initiation and maturation) stages. Water relation traits (relative water content, water saturation deficit, water uptake capacity) of maize leaves were recorded at the flowering stage. Yield and yield-contributing parameters were recorded at maturity after harvest. The recorded data were statistically analyzed with "CropStat" (IRRI, 2007) software to examine significant variation in the results due to water stress. Treatment means were compared by the Least Significant Difference (LSD) test at the 5% level of significance (Gomez & Gomez, 1984).
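The analysis was run in CropStat; as a rough illustration of an equivalent workflow, the sketch below runs a two-factor ANOVA for a CRD and computes an LSD value in Python. The data frame holds invented plant heights purely to show the layout, and uses only two biochar levels rather than the study's four.

```python
import pandas as pd
import scipy.stats as st
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format data: one row per pot (two factors, three replications).
df = pd.DataFrame({
    "water":   ["80FC", "80FC", "60FC", "60FC", "40FC", "40FC"] * 3,
    "biochar": ["0t", "20t"] * 9,
    "height":  [164, 190, 161, 184, 136, 165, 166, 189, 160, 183,
                138, 166, 163, 191, 162, 185, 135, 164],
})

# Two-factor ANOVA with interaction for a completely randomized design.
model = ols("height ~ C(water) * C(biochar)", data=df).fit()
anova = sm.stats.anova_lm(model, typ=2)
print(anova)

# Least Significant Difference at the 5% level for comparing two treatment means.
mse = anova.loc["Residual", "sum_sq"] / anova.loc["Residual", "df"]
reps = 3                                            # replications per treatment combination
t_crit = st.t.ppf(0.975, anova.loc["Residual", "df"])
lsd = t_crit * (2 * mse / reps) ** 0.5
print(f"LSD(5%) = {lsd:.2f}")
```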
Plant height at vegetative stage
The height of maize at the vegetative stage varied with biochar dose under drought conditions (Table 1). At the 6th leaf stage, plant height was reduced by drought, and the reduction was greater at 40% of FC than at 60% of FC, but application of biochar increased plant height under both drought conditions. The greatest plant height (43.80 cm) was measured when biochar was applied @ 20 t/ha, and it was 42.03 cm at 40% of FC at the same dose of biochar. At the 10th leaf stage, under the control condition plant height was 95.43 cm when biochar was applied @ 20 t/ha, but 90.40 cm when no biochar was applied. At 60% and 40% of field capacity, the greatest heights were 93.00 cm and 91.20 cm, respectively, when biochar was applied @ 20 t/ha. At the 14th leaf stage, under the control condition the shortest plants (150.60 cm) were obtained when no biochar was applied, whereas the tallest (169.33 cm) were obtained when biochar was applied @ 20 t/ha. Under 60% and 40% of field capacity, the tallest plants were 154.33 cm and 145.00 cm, respectively, when biochar was applied @ 20 t/ha. In a column, figures with the same letters are not significantly different at the 5% level.
It is therefore clear that plant height is affected by drought and that application of rice husk biochar increases plant height, i.e., biochar mitigates drought effects on plant height. A reduction in maize plant height due to drought at vegetative stages was reported by Abukari (2014). Hussain et al. (2008) reported that drought impaired plant height by affecting cell turgidity. Lehmann et al. (2011) also reported that biochar promoted plant height of maize under drought conditions. Kim et al. (2016) found that application of biochar can increase soil water-holding capacity, which improved tissue water status and ultimately increased plant height.
Plant height at reproductive stage
Plant height of maize at the reproductive stages also varied with biochar dose under drought conditions (Table 2). At the tasselling stage, under the control condition the greatest plant height (190.00 cm) was found when biochar was applied @ 20 t/ha, followed by 174.33 cm and 172.67 cm at 10 t/ha and 5 t/ha, respectively, while the lowest plant height (164.00 cm) was measured when no biochar was applied. Drought stress reduced plant height compared with the control, and the greatest reduction was found under 40% of FC, but biochar increased plant height under drought conditions. At 60% of field capacity the greatest height (184.33 cm) was obtained when biochar was applied @ 20 t/ha and the shortest plants (161.67 cm) when no biochar was applied. At 40% of field capacity the greatest height (165.67 cm) was obtained when biochar was applied @ 20 t/ha and the shortest plants (136.67 cm) when no biochar was applied. At the cob initiation stage, under 60% of field capacity the greatest plant height (190.33 cm) was recorded when biochar was applied @ 20 t/ha, followed by 10 t/ha (182.67 cm) and 5 t/ha (174.67 cm), and the lowest plant height (170.00 cm) was found when no biochar was applied. Under 40% of FC, the same trend was found for plant height. At the maturity stage, under the control condition the greatest plant height (202.33 cm) was obtained when biochar was applied @ 20 t/ha and the shortest plants were observed in the no-biochar treatment. At 60% of field capacity the greatest plant height (195.67 cm) was recorded when biochar was applied @ 20 t/ha, followed by 10 t/ha (185.67 cm) and 5 t/ha (178.33 cm), with the lowest plant height (173.00 cm) when no biochar was applied. At 40% of field capacity the lowest plant height (154.00 cm) was recorded when no biochar was applied, but plant height increased with biochar application: the greatest height (173.33 cm) was recorded at 20 t/ha, followed by 163.00 cm at 10 t/ha and 156.67 cm at 5 t/ha. Thus, drought stress reduced plant height at the reproductive stages and biochar application to the soil increased plant height under drought conditions. Drought-induced reduction of plant height in maize was reported by Batool et al. (2015). Hardy et al. (2014) also reported that the addition of biochar improved plant height. In rice, drought stress during the vegetative stage greatly reduced plant height (Manikavelu et al., 2006).
Water relation traits
Relative water content (RWC) of maize plants was reduced significantly under drought stress. Application of rice husk biochar at different doses increased the water-holding capacity of the soil under drought conditions and thereby increased the relative water content (RWC) of maize leaves (Table 3). Under the control condition and at 60% and 40% of field capacity, the highest RWC values were 83.37%, 79.86% and 78.32%, respectively, when biochar was applied @ 20 t/ha, and the lowest were 66.93%, 63.75% and 62.25%, respectively, when no biochar was applied. Water saturation deficit (WSD) of maize plants increased significantly under drought stress and varied with biochar dose (Table 3). Under the control condition, the lowest WSD of maize leaves was 16.62% when biochar was applied @ 20 t/ha and the highest was 33.06% when no biochar was applied. At 60% of field capacity the highest WSD was 36.24% when no biochar was applied, but the lowest (20.13%) was found when biochar was applied @ 20 t/ha. At 40% of field capacity the lowest WSD was 21.17% when biochar was applied @ 20 t/ha, while the highest was 37.74% when no biochar was applied. Water uptake capacity (WUC) of maize increased significantly under drought conditions, but application of biochar decreased it (Table 3). Under the control condition the lowest WUC was 1.52 when biochar was applied @ 20 t/ha and the highest was 1.90 when no biochar was applied. At 60% of field capacity the highest WUC was 1.97 when no biochar was applied, but the lowest was 1.55 when biochar was applied @ 20 t/ha.
At 40% of field capacity lowest water uptake capacity was 1.61 when biochar was applied @ 20 t/ha but the highest water uptake capacity was 2.02 when no biochar was applied. Akhtar et al. (2014) found that biochar increased RWC and water use efficiency of drought stressed tomato plants. Uzoma et al. (2011) also reported that biochar increased water status of maize tissue in sandy soil.
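The formulas behind these traits are not given in the paper; a commonly used set of definitions based on fresh, turgid, and dry leaf weights is sketched below, and both the definitions (particularly for WUC) and the example weights should be read as assumptions for illustration only.

```python
def water_relations(fresh_g, turgid_g, dry_g):
    """Commonly used leaf water-relation traits (definitions assumed, not from the paper)."""
    rwc = (fresh_g - dry_g) / (turgid_g - dry_g) * 100.0    # relative water content, %
    wsd = (turgid_g - fresh_g) / (turgid_g - dry_g) * 100.0  # water saturation deficit, %
    wuc = (turgid_g - fresh_g) / dry_g                       # water uptake capacity
    return rwc, wsd, wuc

# Invented leaf-sample weights in grams.
print(water_relations(fresh_g=1.50, turgid_g=2.00, dry_g=0.30))  # -> (~70.6, ~29.4, ~1.67)
```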
Reproductive growth of maize
The number of cobs was 1.0 per plant and did not vary with drought level or treatment (Table 4). Drought stress affected cob length, and cob length increased gradually as biochar was applied at increasing doses (Table 4). In a column, figures with the same letters are not significantly different at the 5% level.
Under the control condition the greatest cob length (17.66 cm) was found in the 20 t/ha biochar treatment and the lowest (15.93 cm) in the control (no biochar). Under 60% of field capacity the greatest cob length was 15.33 cm at 20 t/ha of biochar and the lowest (13.23 cm) in the control. Under 40% of field capacity the greatest cob length was 15.30 cm when biochar was applied @ 20 t/ha and the lowest (12.10 cm) in the control. Cob diameter of maize was reduced under drought stress, with the greatest reduction at 40% of field capacity compared with 60% of FC, but application of biochar increased cob diameter (Table 4). Under the control condition the greatest cob diameter (3.90 cm) was found when biochar was applied @ 20 t/ha and the lowest (3.50 cm) when no biochar was applied. Under 60% of field capacity the greatest cob diameter (3.65 cm) was found when biochar was applied @ 20 t/ha and the lowest (3.20 cm) when no biochar was applied. Under 40% of field capacity the greatest cob diameter (3.50 cm) was found when biochar was applied @ 20 t/ha and the lowest (3.15 cm) when no biochar was applied.
Yield and yield contributing characters
Number of seeds per cob, 100-grain weight and grain yield varied significantly with biochar dose under drought conditions (Table 5). Under the control condition the highest number of seeds per cob was 353.00 when biochar was applied @ 20 t/ha and the lowest was 163.00 when no biochar was applied. At 60% of FC the highest number of seeds per cob was 335.00 when biochar was applied @ 20 t/ha and the lowest was 147.33 when no biochar was applied. Under 40% of FC the highest number of seeds per cob was 334.66 when biochar was applied @ 20 t/ha and the lowest was 139.00 when no biochar was applied. Under the control condition the highest 100-grain weight (27.74 g) was found when biochar was applied @ 20 t/ha and the lowest (21.88 g) when no biochar was applied. Under 60% of field capacity the highest 100-grain weight (26.51 g) was found when biochar was applied @ 20 t/ha and the lowest (20.71 g) when no biochar was applied. At 40% of field capacity the highest 100-grain weight (25.00 g) was found when biochar was applied @ 20 t/ha and the lowest (20.00 g) when no biochar was applied. Grain yield was reduced by drought, but biochar application increased maize grain yield (Table 5). Under the control condition the highest grain yield was 96.70 g/plant when biochar was applied @ 20 t/ha and the lowest was 40.71 g/plant when no biochar was applied. At 60% of FC the highest grain yield was 89.78 g/plant when biochar was applied @ 20 t/ha and the lowest was 35.92 g/plant when no biochar was applied. Under 40% of FC the highest grain yield was 84.57 g/plant when biochar was applied @ 20 t/ha and the lowest was 27.84 g/plant when no biochar was applied. In a column, figures with the same letters are not significantly different at the 5% level. Estrada-Campuzano et al. (2008) observed that water stress reduced the yield of triticale, and yield reductions have been reported in snap bean by Lakitan et al. (1992). Drought stress negatively affects anthesis and grain filling of maize, reducing the number of seeds per cob and 100-grain weight and ultimately grain yield; decreased photosynthesis under drought also reduces grain yield. Increases in cassava yield with biochar application have been shown by Islami et al. (2011), and Mannan et al. (2016) reported that biochar increased pod yield of soybean under saline stress. Foster et al. (2016) observed that biochar application increased maize yield under semi-arid conditions. Application of biochar increased photosynthetic efficiency, anthesis and grain filling, thereby increasing maize yield.
Conclusions
From the results obtained it may be concluded that application of rice husk biochar had a positive impact on growth, water relation traits and yield of maize under drought conditions. Among the doses tested, 20 t/ha of rice husk biochar showed the best performance in enhancing plant height, leaf water content and yield of maize. Rice husk biochar may therefore be used as a soil amendment to mitigate drought effects in maize.
Acknowledgement
We are grateful to University Grants Commission (UGC), Bangladesh for funding the research.
Decoupling dedifferentiation and G2/M arrest in kidney fibrosis
Understanding the cellular mechanisms underlying chronic kidney disease (CKD) progression is required to develop effective therapeutic approaches. In this issue of the JCI, Taguchi, Elias, et al. explore the relationship between cyclin G1 (CG1), an atypical cyclin that induces G2/M proximal tubule cell cycle arrest, and epithelial dedifferentiation during fibrogenesis. While CG1-knockout mice were protected from fibrosis and had reduced G2/M arrest, protection was unexpectedly independent of induction of G2/M arrest. Rather, CG1 drove fibrosis by regulating maladaptive dedifferentiation in a CDK5-dependent mechanism. These findings highlight the importance of maladaptive epithelial dedifferentiation in kidney fibrogenesis and identify CG1/CDK5 signaling as a therapeutic target in CKD progression.
The proximal tubule in the AKI-to-CKD transition Chronic kidney disease (CKD) affects approximately 800 million people worldwide and treatment options are limited (1). CKD is characterized by tubular atrophy, inflammation, interstitial fibrosis, and progressive loss of kidney function. One common cause of CKD is acute kidney injury (AKI) in a process termed the AKI-to-CKD transition. In the past, it had been thought that the kidney completely recovered after an episode of AKI, but over the last 15 years it has become clear that complete recovery is likely the exception -some degree of permanent damage and fibrosis exists after every episode of AKI, even if subclinical.
The molecular mechanisms regulating the AKI-to-CKD transition remain incompletely understood. In successful repair, injury induces cellular dedifferentiation, characterized by the loss of brush border and terminal differentiation markers (Sox9 and vimentin [VIM]) and the acquisition of a transient mesenchymal phenotype. This process is followed by cellular proliferation to replace neighboring epithelia lost through cell death, followed by redifferentiation and restoration of tubular function. Many studies have implicated aberrant proximal tubule injury responses as a central driver of the AKI-to-CKD transition (2,3). Some results indicate that a subset of the injured proximal tubules become arrested in the G 2 /M phase of the cell cycle, leading to adoption of a senescence-associated secretory phenotype (SASP) (4,5). This proinflammatory cell state promotes local inflammation and fibrosis that lead to CKD. Other work has identified a subset of dedifferentiated epithelia that fail to redifferentiate after injury, instead adopting a proinflammatory and profibrotic state variously termed "maladaptive" or "failed repair" (6)(7)(8)(9). Importantly, it is not yet clear whether G 2 /M-arrested cells and failed-repair epithelia are one and the same, although a parsimonious interpretation of the literature would indicate that they should be. Casting doubt on this hypothesis, recent single-cell RNA sequencing studies have suggested that failed-repair proximal tubule cells are not in a G 2 /M-arrested state, suggesting distinct proximal tubule cell states exist (6, 7).
Proximal tubule dedifferentiation and the CG1/CDK5 pathway
In this issue of the JCI, Taguchi, Elias, et al. (10) shed light on the relationship between proximal tubule dedifferentiation, G 2 /M arrest, and the AKI-to-CKD transition. The authors had previously identified cyclin G1 (CG1), an atypical cyclin induced by p53 and known to regulate G 2 /M arrest in other contexts, as a key player in the AKIto-CKD transition (5). In particular, they concluded that CG1 is specifically induced in the maladaptive proximal tubule and that it is sufficient to induce both cellular dedifferentiation and G 2 /M arrest that lead to CKD. In their current work, the authors subjected global CG1-knockout mice, which lack a kidney phenotype in health, to various models of the AKI-to-CKD transition. Knockout mice were clearly protected from the development of CKD after AKI, consistent with the proposed role of CG1 in promoting G 2 /M arrest and fibrosis (10).
The authors reasoned that if CG1 promotes CKD by inducing G2/M arrest, then inducing G2/M arrest should reverse the protective effect of CG1 knockout. Unexpectedly, knockout mice exposed to paclitaxel were still protected from CKD, despite the strong induction of G2/M arrest. These findings suggest that CG1 regulates proximal tubule fibrosis independent of G2/M arrest. Follow-up studies showed that CG1-knockout kidneys were characterized by less dedifferentiation and proliferation of proximal tubules in the chronic injury phase, suggesting that rather than driving fibrosis by inducing G2/M arrest, CG1 may drive fibrosis by inducing proximal tubule dedifferentiation (10). CG1 activates cyclin-dependent kinase 5 (CDK5) through phosphorylation of tyrosine 15, and phosphorylated CDK5 was detectable in maladaptive proximal tubules of wild-type but not CG1-knockout kidneys. In vitro studies showed that CDK5 expression was sufficient to drive proximal tubule dedifferentiation, suggesting that CDK5 activation mediates CG1-dependent dedifferentiation. Since CG1-knockout mice were protected from the AKI-to-CKD transition, Taguchi, Elias, and colleagues next asked whether CDK5 is also a therapeutic target. They showed convincingly that either pharmacologic inhibition of CDK5 with two different drugs or tubule-specific knockout of Cdk5 protected against development of CKD after AKI. Importantly, Cdk5 knockout reduced proximal tubule dedifferentiation but did not reduce induction of G2/M arrest, further emphasizing a decoupling between dedifferentiation and G2/M arrest in the AKI-to-CKD transition (Figure 1) (10). Compared with wild-type CKD models, proximal tubules lacking CDK5 have reduced dedifferentiation, characterized by reduced Sox9 and VIM expression. However, cells without CDK5 remain arrested in the G2/M phase of the cell cycle, similarly to wild-type cells. Notably, the resultant phenotype displays reduced fibrosis in CKD despite induction of G2/M cell cycle arrest (10).
Implications and unanswered questions
There are several implications of this work. Perhaps most importantly, these studies shift focus away from G2/M arrest as a central, required cell state for the development of fibrosis and CKD, as had been concluded previously. Deletion of CG1 inhibited both G2/M arrest and fibrosis, but subsequent induction of G2/M arrest did not reverse this protective phenotype, providing strong evidence that, in this context at least, proximal tubule G2/M arrest was not sufficient to drive fibrogenesis (10). Instead, proximal tubule dedifferentiation in CKD appears to be the critical profibrogenic cell state, one that is regulated by the CG1/CDK5 axis. It is likely that G2/M arrest still plays roles in CKD, perhaps acting in addition to other profibrotic pathways rather than as a necessary and sufficient state.
Another intriguing implication of Taguchi, Elias, et al. (10) that clearly needs further investigation is the suggestion that proximal tubule dedifferentiation after AKI may be fundamentally different from dedifferentiation in CKD. While both acute and chronic injuries lead to the loss of brush border and differentiation markers, which superficially resemble equivalent dedifferentiation events, neither CG1 nor CDK5 was required for successful proximal tubule repair (10). By contrast, this signaling pathway plays a critical role in driving epithelial dedifferentiation and fibrosis in CKD. Similarly, what roles, if any, do CG1 and CDK5 play in the AKI-to-CKD transition? In mild or moderate AKI, the majority of proximal tubule cells successfully proliferate and redifferentiate after injury (11), processes that this work (10) shows do not require CG1 or CDK5. But left unresolved is whether the fraction of proximal tubule cells (~5%-10%) that take on a "failed repair" or "maladaptive" cell state after AKI do so as a consequence of activation of the CG1/CDK5 pathway. It stands to reason that they may well be related, since single-nucleus RNA sequencing of AKI-to-CKD models demonstrates that this minority cell population is not arrested at G2/M. These questions also await further experimental investigation.
That profibrotic cellular dedifferentiation in CKD can be targeted therapeutically by inhibition of CDK5 not only validates the kinase as a therapeutic target in fibrosis, but also suggests that other downstream pathways could represent additional antifibrotic targets. The signaling pathways either upstream of CG1 or downstream of CDK5 in proximal tubules remain undefined. Mitochondrial dynamics and dysfunction are increasingly recognized to play critical roles in driving both recovery from AKI and the progression of CKD (12). Mitochondrial dysfunction can lead to leakage of mitochondrial DNA into the cytosol, where it activates the cytosolic cGAS-stimulator of interferon genes (STING) DNA-sensing pathway that then drives proinflammatory cytokine expression and renal fibrosis (13). In the brain, CDK5 has established roles in promoting mitochondrial fission and dysfunction, and in some neuronal cell types this pathology leads to cell death (14). Given this context, it is intriguing to speculate that the CG1/CDK5 axis may link profibrotic epithelial dedifferentiation to mitochondrial dysfunction, inflammation, and fibrosis. For example, CDK5 phosphorylates the GTPase dynamin-related protein 1 (Drp1), and this phosphorylation at S616 increases Drp1 translocation to the mitochondria, accelerating fission (15,16). Proximal tubule-specific deletion of Drp1 promotes recovery after AKI (17), suggesting that CDK5-dependent phosphorylation of Drp1 in CKD may cause mitochondrial fission, mitochondrial dysfunction, and potentially renal inflammation and fibrosis through the STING pathway. All of these hypotheses require testing.
In summary, Taguchi, Elias, and colleagues uncouple G2/M arrest from dedifferentiation and progression of fibrosis. They implicate a CG1/CDK5 signaling axis in regulating proximal tubule dedifferentiation and fibrosis and validate these proteins as therapeutic targets in CKD (10).
Sero-prevalence of Peste des Petits Ruminant (PPR) Virus in Sheep and Goat Population of Gilgit Baltistan Province of Pakistan
Peste des petits ruminants (PPR), caused by PPR virus (PPRV), is a contagious disease of domestic and wild small ruminants. The disease is endemic in developing countries of Africa and Asia, including Pakistan, where clinical cases in small ruminants (sheep and goats) have been frequently reported. Although PPRV is endemic in Pakistan, information on serosurveillance of prevailing strains in the Gilgit-Baltistan (GB) territory is scarce. Therefore, the current study was designed to assess the seroprevalence of PPRV and to evaluate potential risk factors involved in the transmission of PPR in four distinct locations of GB province. We report the occurrence and risk factor analysis of PPR in small ruminants (n=1000) originating from different places in district Gilgit using the haemagglutination inhibition (HI) test, followed by risk analysis with the Open-Epi software. Serum samples (goats n=500, sheep n=500) were collected from herds situated at Naltar Lake, Tattovat, Fairy Meadows, and Bangle Naltar. Overall, a comparable prevalence was identified for goats and sheep (46% vs 44%, P > 0.05). Future studies are necessary to further ascertain these outcomes and elucidate the molecular epidemiology of the prevalent strains in these geographical locations for better disease control and management. Novelty Statement: This is the first report from Gilgit-Baltistan and supports necessary interventions, such as mass vaccination and animal movement control, for disease management in the future.
Besides domestic and wild small ruminant species, PPR also affects camels among large animals, and it is a highly contagious viral disease. The most important pathological findings of PPR are observed in the respiratory and digestive systems. Morbidity and mortality rates also give an idea of the pathogenicity of the virus. Vaccination is a way to control PPR and is required on a mass scale; losses can be limited by prevention, but this requires a deep knowledge of the pathogenesis of the disease. Pakistan has an agriculture-based economy. According to the Economic Survey of Pakistan (GOP, 2017), among meat-producing animals, goats and sheep are regarded as the "poor man's cow", and the demand for halal meat and its export has increased the importance of these animals. However, the biggest obstacles are uncontrolled disease and the lack of an appropriate vaccine. A high cost of rearing has been associated with the occurrence of PPR in a herd in China (Banyard et al., 2014), and the same could be considered true for similar settings in Pakistan, e.g., Gilgit-Baltistan, where a large number of farmers depend on small ruminants for their routine livelihood. Given the highly contagious nature of the disease, its burden in affected populations and the subsequent economic losses, studies elucidating the nature of infection and its prevalence are much needed to devise appropriate interventions.
Materials and Methods
This research was performed to evaluate the rate of occurrence of PPR virus in small ruminants from different areas of Gilgit-Baltistan, namely Tattovat, Fairy Meadows, Naltar Lake and Bangle Naltar. The study also analyzed potential associations with a number of risk factors that could predispose animals to the disease and its subsequent spread in the surrounding susceptible population; these risk factors included animal species, age, gender and season. Samples were collected over a period of one year (2016-2017) and were processed and analyzed at the Faisalabad Institute of Research Science and Technology (FIRST), Faisalabad.
Collection of blood samples
The study involved the aseptic collection of 1000 blood samples (500 from goats, 500 from sheep) through the jugular vein. Samples were collected from animals that were reared under an open grazing system and exhibited flu-like signs such as cough, nasal discharge, high temperature and pustules, and were therefore clinically suspected of PPR. Briefly, the test animal was properly restrained on the spot and the left side of the throat was pressed to raise the jugular vein; the neck was shaved to locate the vein (Forsyth and Barrett, 1995). Parameters such as animal age, breed, gender, geographical location and locality were also noted during sampling. Blood samples were transferred to EDTA vacutainers for plasma separation. All samples were properly labelled with the date and a sample ID on the vacutainers; the ID was traced from the data file containing information about the age, breed, gender, locality and geographical location of the collected sample. The samples were transferred to the Microbiology research laboratory of FIRST in ice packs.
Processing for plasma separation
Upon arrival at the laboratory, the blood samples were processed for plasma separation. The vacutainers were centrifuged at 4000 rpm for 10 minutes at 4 °C (Dhar et al., 2002). The plasma (supernatant) was separated and aliquoted into pre-labelled sterile plastic microfuge tubes of up to 2 ml capacity, after which all samples were stored at -20 °C until further use for serology. A brief herd health history was taken, according to which the animals were apparently healthy.
Hemagglutination inhibition assay
The haemagglutination inhibition (HI) test was performed to check the seropositivity of blood samples for antibodies against PPR, following the method of Anderson and McKay (1994), in the Microbiology laboratory of FIRST, Pakistan.
Statistical analysis
All collected data were entered into MS Excel (Microsoft Corp.) and the geometric mean titre (GMT) was calculated as suggested by Brugh (1978). Overall analysis was carried out by chi-square tests with 95% confidence intervals (CI).
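As a rough illustration of the analysis described above, the sketch below computes a geometric mean titre on the log2 scale and a chi-square comparison of goat versus sheep seropositivity; the titres are invented, while the 2x2 counts are back-calculated from the reported 46% and 44% prevalence in 500 animals of each species.

```python
import numpy as np
from scipy.stats import chi2_contingency

def geometric_mean_titre(titres):
    """GMT of reciprocal HI titres, computed on the log2 scale."""
    logs = np.log2(np.asarray(titres, dtype=float))
    return 2.0 ** logs.mean()

# Hypothetical reciprocal titres from a handful of positive sera.
print(geometric_mean_titre([16, 32, 8, 64, 16]))   # roughly 21

# 2x2 comparison of seropositivity between goats and sheep.
table = np.array([[230, 270],   # goats: positive, negative (46% of 500)
                  [220, 280]])  # sheep: positive, negative (44% of 500)
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
```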
Results and Discussion
Since the study employed antibody-based detection of PPR, its findings reflect current as well as previous exposure of animals to PPRV. For this purpose, the study used HI for detection of PPRV antibodies, with titres falling within the range of 4 log2 to 9 log2. Of the total samples processed, the prevalence was almost comparable in the two species, with a non-significant difference, although it was higher in goats than in sheep (46% vs 44%, P > 0.05). A brief summary of prevalence by location in district Gilgit is presented in Table 1. Sex-based prevalence of PPRV antibodies differed significantly (P < 0.05), being higher in females (69.96%) than in males (63.92%). Nevertheless, as evidenced by the calculated GMT, the antibody titre was higher in males (15.92) than in females (14.83), indicating their capacity to generate a strong immune response upon exposure to infection. Prevalence was also higher in adult animals older than one year (69.02%) than in young animals younger than one year (62.94%). Furthermore, when the influence of age was analyzed, a higher antibody titre was observed in animals older than one year (GMT 17.91) than in animals younger than one year (GMT 12.85). As far as the influence of season on disease dissemination is concerned, there was a non-significant difference (P > 0.05) in prevalence among seasons. The study shows a higher seroprevalence of PPR virus in the summer season (72.72%), followed by 65.09% in winter, 64.91% in autumn and 61.73% in the rainy season. The GMT was highest (20.18) in the summer season (Dec-Feb), 14.03 in the autumn season (Sep-Nov) and lowest (13.77) during the rainy season (Jun-Aug).
The main goal of the present study was to determine the seroprevalence of PPRV in domestic small ruminants in Gilgit-Baltistan, an area bordering neighbouring countries, as documentation of PPRV prevalence and seroprevalence was not available from these areas before the present study. The current study revealed a higher seroprevalence of antibodies to PPRV in goats than in sheep (Rehman et al., 2011). The findings of the present study are in agreement with previous epidemiological reports on PPRV in domestic small ruminants, in which antibodies to PPRV showed a higher occurrence in goats than in sheep using various techniques (Dhar et al., 2002; Ozkul et al., 2002). However, our results contrast with the findings of another study (Khan et al., 2011), which observed higher seropositivity in sheep (51.3%) than in goats (39%) using a monoclonal c-ELISA.
The present study also showed that a higher prevalence of antibodies was detected in animals older than one year of age compared with younger animals. These findings are in agreement with previous studies from Pakistan (Abubakar et al., 2017), in which PPRV antibody prevalence and titres were higher in adult animals of 1-2 years and > 2 years than in animals < 1 year of age. Similarly, a previous report (Aziz ul Rehman et al., 2016) described a high antibody prevalence of 69.9% in 1-2-year-old animals compared with other age groups. Passive immunity from vaccinated mothers may also have contributed to the protection of younger animals (up to 4 months of age) from PPRV outbreaks. Most outbreaks have been observed in humid conditions, perhaps due to virus survival in low-temperature environments; however, the seasonal pattern of PPRV antibody prevalence remains controversial because of the management practices and the environmental, nutritional and socio-economic conditions under which animals are kept.
In conclusion, the study provides a preliminary insight into the presence of PPR infection in small ruminants in the Gilgit-Baltistan area. Further studies with a larger dataset covering a wider geographical region are necessary to better elucidate disease control and management strategies.
Study on Horizontally Polarized Omnidirectional Microstrip Antenna
A horizontally polarized omnidirectional microstrip antenna is proposed in this paper. The designed antenna consists of two back-to-back horizontally polarized microstrip antenna elements. The gain variation on the main radiation plane of this new antenna is analyzed, the radiation theory is deduced, and a formula for the directivity on the main radiation plane is given. A better omnidirectional characteristic can be obtained by decreasing the patch physical length. Both simulated and measured results verify the omnidirectional radiation pattern and input impedance characteristics. Good omnidirectional radiation patterns (gain variation in the E-plane less than ±0.4 dBi) and input impedance characteristics are obtained; moreover, a cross-polarization level more than 20 dB below the main polarization is achieved.
Introduction
Omnidirectional antennas are applied in many communication systems. It is well known that an omnidirectional antenna can be easily realized with a vertical dipole, as in [1][2][3][4]. Microstrip antennas can also be designed with an omnidirectional radiation pattern; a microstrip antenna array mounted on a circular cylinder was presented in [5], which needed more than eleven patches to achieve a good omnidirectional pattern. The drawbacks of such cylindrical conformal microstrip antennas are that they are difficult to fabricate and need more than three patches to obtain an omnidirectional radiation pattern. The back-to-back omnidirectional microstrip antennas in [6][7][8][9] all had omnidirectional radiation patterns; however, they are either vertically polarized or circularly polarized, and none of those works were horizontally polarized. A horizontally polarized omnidirectional microstrip antenna was described in [10]; the antenna consists of three main components, a probe-fed main patch and two parasitic patches, placed conformally on a cylindrical structure, and its drawback is a narrow bandwidth of only 1.46%.
A horizontally polarized (HP) omnidirectional radiation pattern can also be achieved with printed antennas. In paper [11], four notches were cut out of the bottom conductor layer and fed respectively by four microstrip lines on the upper layer to obtain an HP omnidirectional radiation pattern. In paper [12], a horizontally polarized omnidirectional planar printed antenna for WLAN applications was presented; the printed Alford-loop-structure antenna consisted of two Z-shaped strips printed on the top and bottom planes of an FR-4 printed-circuit-board substrate. A horizontally polarized omnidirectional planar antenna developed for mobile communications was presented in [13]; the proposed antenna consists of four printed arc dipoles that form a circular loop for HP omnidirectional radiation, but it needs a complex feeding network, including four baluns and an impedance matching circuit, to excite the four printed arc dipoles.
Slot antennas or arrays may also obtain omnidirectional radiation, as in [14][15][16][17]. Paper [14] presented a three-element CPW-fed leaky-wave folded slot antenna array with an omnidirectional radiation pattern; by using a columnar structure, a gain variation of less than 1.1 dB was achieved in the azimuthal plane. A planar slot array antenna with an omnidirectional radiation pattern in the horizontal plane was proposed in [15]; the antenna, with eight back-to-back slots, was designed by employing a genetic algorithm implemented on a cluster system to achieve omnidirectional radiation characteristics. In paper [16], a dual-polarized diversity antenna with azimuthally omnidirectional patterns was designed inside a slender, low-profile columnar structure; the proposed antenna was composed of a cavity-backed notch for horizontal polarization and a folded slot for vertical polarization. Slot antennas or arrays can thus obtain a horizontally polarized omnidirectional radiation pattern, but the structures are large, heavy and difficult to fabricate. Some other special structures also obtain horizontally polarized omnidirectional radiation patterns, as in paper [18]: circular dipoles in the shape of a "C" produced a horizontally polarized omnidirectional radiation pattern, but the antenna was very large, hard to fabricate and not suitable for application. A traveling-wave antenna based on a tapered half-mode substrate integrated waveguide was introduced in paper [19]; the antenna used a direct transition from a coaxial connector and radiated from the open side of the waveguide, but its efficiency was only 80% over the whole bandwidth.
A new horizontally polarized omnidirectional microstrip antenna is proposed in this paper. The newly designed microstrip antenna is a pair of patches in a back-to-back structure, fed by a Wilkinson power divider. The patches are placed symmetrically about the vertical plane and are horizontally polarized, so the fields from the two patches interfere constructively (add) in the horizontal direction and destructively (cancel each other) in the vertical direction.
The radiation theory of the back-to-back structure is studied. The Ansoft HFSS software was used to design and simulate the antenna in order to optimize the omnidirectional characteristics and obtain the best design parameters. The proposed antenna was fabricated and measured, and the measured results are compared with the simulated ones. The advantages of the designed antenna are its compact size, ease of fabrication, good omnidirectional characteristics and good horizontal-polarization performance.
Antenna design
The designed antenna consists of four layers, as shown in Fig. 1(a); the two external layers are single-fed rectangular-patch microstrip antennas in a back-to-back structure, and the two internal layers form a classic stripline-type Wilkinson power divider, whose input port 1 is connected to an SMA connector and whose ports 2 and 3 are the two output ports. The two patches are connected to the output ports of the power divider by probes: port 2 feeds the right microstrip patch shown in Fig. 1(a) and port 3 feeds the left patch. The outline size of the antenna in Fig. 1(b) is length b = 32 mm, width a = 10 mm, height h = 65 mm. Fig. 1(c) is the front view of the designed antenna; the length of the patch is l, the width of the patch is w, c is the distance between the centre point of the patch and the upper edge of the substrate, and e is the distance of the feeding point from the edge of the patch. Fig. 1(d) gives the dimensions of the equal-split Wilkinson power divider, which is designed for a 50 Ω system impedance in the working band. The width of the 50 Ω transmission lines is w1 = 2.5 mm, and the width of the quarter-wave transmission lines, which should have a characteristic impedance of 70.7 Ω, is w2 = 1.2 mm. The dielectric constant of the external-layer substrate is 16 with a thickness of 4 mm, and the dielectric constant of the internal-layer substrate is 2.65 with a thickness of 1 mm.
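For context, the 70.7 Ω arm impedance quoted above is simply the standard equal-split Wilkinson value of √2 times the 50 Ω system impedance; the snippet below reproduces that textbook arithmetic (including the usual 2·Z0 isolation resistor, which is assumed here and not described in the paper) and is not taken from the paper's design files.

```python
import math

def wilkinson_equal_split(z0_ohm: float):
    """Standard design values for an equal-split Wilkinson power divider."""
    z_quarter_wave = z0_ohm * math.sqrt(2.0)   # impedance of each quarter-wave arm
    r_isolation = 2.0 * z0_ohm                 # resistor bridging the two output ports
    return z_quarter_wave, r_isolation

zq, r = wilkinson_equal_split(50.0)
print(f"quarter-wave arm: {zq:.1f} ohm, isolation resistor: {r:.0f} ohm")  # 70.7 ohm, 100 ohm
```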
Analysis of the antenna structure
For the main TM01 mode, the E-field radiation is given by the expression in [20], where w is the effective width, l is the effective length, and U01 is the voltage of mode TM01 at the patch corner point.
The designed antenna under investigation can be treated as an array of two horizontally polarized microstrip antenna elements (antenna element 1 and antenna element 2) positioned along the z-axis, as shown in Fig. 2. Antenna element 1 is placed on the positive z-axis while antenna element 2 is on the negative z-axis, so the main radiation field of element 1 covers the upper hemisphere and that of element 2 covers the lower hemisphere. The two antenna elements are fed by signals with the same amplitude and phase through the power divider. The total field radiated by the two elements, assuming no coupling and no difference in excitation between them, is equal to the sum of the fields of the two antenna elements, and the radiation field of element 1, placed on the positive z-axis, in the y-z plane follows from the element expression above. Since the effective length l is slightly longer than, and proportional to, the physical length l', it is tenable to use the physical length l' in place of the effective length l in formula (5). Thus, for the two-element array with constant amplitude, the approximate formula (6) for the directivity in the y-z plane of the designed antenna is obtained. Table I shows the roundness and gain variation calculated with this formula against the physical length l' (the case l' = λ/4 is worked as an example); the values for other physical lengths are obtained by the same procedure, as shown in the table. Roundness and gain variation against physical length l' are plotted in Fig. 3: as the physical length l' decreases, the roundness increases and the gain variation decreases. From Table I and Fig. 3 the following results are clear: (a) the back-to-back microstrip antenna radiates an omnidirectional pattern; (b) by decreasing the physical length l', the roundness increases and the gain variation decreases. In general, the physical length l' of a patch is less than λ/2, and it can be decreased by increasing the dielectric constant of the substrate. In this way the gain variation of the omnidirectional pattern is decreased and a better omnidirectional radiation pattern is achieved by using a higher-permittivity material, as shown in Fig. 4. However, it is well known that such patch size reduction (the patch here is as small as 1/20th of the free-space wavelength) often brings decreased bandwidth, increased losses (lower efficiency) and matching problems, so selecting a proper substrate dielectric constant and physical length l' is quite important for good antenna performance. In order to verify the accuracy of formula (6) for the directivity in the y-z plane of the proposed antenna, gain-variation comparisons between the formula-calculated and HFSS-simulated results were made. To simplify the simulation, the power divider is removed from the designed antenna and the two patches are fed by two ports with the same amplitude and phase, as shown in Fig. 5. By choosing different substrate dielectric constants, different physical lengths l' are obtained and different gain variations on the main radiation plane result.
Four different patch sizes are considered: the physical length l'1 of patch No. 1 is 61.5 mm with a substrate dielectric constant of 2; l'2 of patch No. 2 is 44.1 mm with a substrate dielectric constant of 4; l'3 of patch No. 3 is 31.4 mm with a substrate dielectric constant of 8; and l'4 of patch No. 4 is 25.2 mm with a substrate dielectric constant of 12. The four patch antennas, No. 1 to No. 4, operate at the same centre frequency of 2.35 GHz. The normalized simulated radiation patterns (Ansoft HFSS) in the y-z plane for the different patch sizes are shown in Fig. 6. It is clearly seen that as the physical length decreases, a better simulated omnidirectional radiation pattern is obtained, following the same trend as the results calculated from formula (6). Table II summarizes the numerical comparison of the gain variation calculated by formula (6) and by the HFSS simulator for the different patch physical lengths. The physical length l' of patch No. 1 is 61.5 mm, which is nearly 0.323λ; by using the same procedure, the gain variation for the other patch physical lengths can be calculated, as shown in Table II.
The difference in gain variation between the formula-calculated and HFSS-calculated results is shown in Fig. 6. The difference for patch No. 1 is 1.6 dBi, and that for patch No. 2, patch No. 3 and patch No. 4 is 0.6 dBi, 0.2 dBi and 0.03 dBi, respectively. As the physical length gets smaller, the difference between the formula-calculated and HFSS-calculated results becomes smaller. Both results show that better gain-variation performance is obtained as the patch physical length gets shorter.
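Because the closed-form directivity formula (6) is not reproduced in this copy of the text, the sketch below only shows how roundness and gain variation are typically extracted numerically from a sampled pattern cut, taking roundness as the max-minus-min ripple; both that reading of the metric and the gain samples themselves are assumptions for illustration.

```python
import numpy as np

def roundness_and_variation(gain_dbi_samples):
    """Return (ripple in dB, +/- gain variation) from gain samples around one pattern cut."""
    g = np.asarray(gain_dbi_samples, dtype=float)
    ripple = g.max() - g.min()
    return ripple, ripple / 2.0

# Hypothetical y-z-plane gain cut sampled every 10 degrees (values invented).
cut_dbi = [2.2, 2.1, 2.0, 1.9, 1.8, 1.7, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2,
           2.1, 2.0, 1.9, 1.8, 1.7, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.1,
           2.0, 1.9, 1.8, 1.7, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.1, 2.0]
ripple, pm = roundness_and_variation(cut_dbi)
print(f"ripple = {ripple:.2f} dB, gain variation = +/-{pm:.2f} dB")
```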
Fig. 8 shows the fabricated horizontally polarized omnidirectional microstrip antenna connected to an SMA connector. The simulated and measured S11 of the designed antenna are shown in Fig. 9; the bandwidth of the antenna is about 6.4% with a centre frequency of 1.575 GHz, and the measured result agrees well with the simulated one.
The simulated three-dimensional radiation pattern of the designed antenna is shown in Fig. 10; the peak gain of the designed antenna is more than 2 dBi. The radiation pattern is clearly omnidirectional in the y-z plane, and a gain variation in the y-z plane of less than ±0.5 dBi is achieved.
Fig. 11 shows the simulated and measured main-polarization and cross-polarization patterns in the E-plane and H-plane. The peak gain of the designed antenna is about 2.2 dBi, and the antenna efficiency is higher than 88% in the operating band. The gain variation of the HFSS-simulated E-plane pattern is about 0.58 dBi. The physical length of the designed antenna is l' = 22.1 mm = 0.116λ, and the formula-calculated gain variation in the E-plane pattern is 0.5 dBi, which coincides with the HFSS-simulated result. The measured gain variation is about 1 dBi and the cross-polarization level is about 20 dB lower than that of the main polarization.
Conclusion
A new horizontally polarized omnidirectional microstrip antenna was presented in this paper, and a formula for the directivity in the main-lobe radiation plane of the designed antenna was deduced. The measured results agreed well with the simulated ones. The bandwidth of the designed antenna was about 6.4%, the peak gain was about 2.2 dBi, the antenna efficiency was higher than 88% in the operating band, and the cross-polarization level was about 20 dB lower than that of the main polarization. The designed antenna has good omnidirectional radiation patterns; a gain variation in the E-plane of less than ±0.5 dBi was achieved in both the simulated and the measured results.
Fig. 1. Model of the designed antenna.
Fig. 2. Geometry of a two-element array positioned along the z-axis. Antenna element 2 is rotated 180° around the x-axis and faces the negative z-axis relative to antenna element 1; elements 1 and 2 share one ground plane, so in this case d = 0 and r1 = r2 = r, and the total field radiated by the pair of elements is the sum of the fields of the two elements.
TABLE I. Roundness and gain variation against physical length l'.
The global burden of disease study 2013: What does it mean for the NTDs?
The Global Burden of Disease Study is a landmark World Health Organization initiative that systematically quantifies the prevalence, morbidity, and mortality for hundreds of diseases, injuries, and risk factors of global health importance. In this article, the authors identify country-specific estimates of the prevalence or incidence of neglected tropical diseases, including cholera, typhoid and scabies.
The Global Burden of Disease Study (GBD) is a landmark initiative that systematically quantifies the prevalence, morbidity, and mortality for hundreds of diseases, injuries, and risk factors of global health importance. For the neglected tropical diseases (NTDs), the GBD 2010 confirmed a high disease burden for the 17 major NTDs prioritized by the World Health Organization (WHO) as well as for selected conditions also recognized as NTDs by PLOS Neglected Tropical Diseases, including amoebiasis, cholera, cryptosporidiosis, typhoid and paratyphoid fevers, trichomoniasis, venomous animal contact, and scabies (referred to here as "additional NTDs") [1]. The GBD 2013 is intended to be the first in a series of annual updates for the GBD studies, with its initial results published in 2015 in The Lancet [2][3][4]. Here, we review information on the NTDs published in the GBD 2013 capstone papers [2][3][4] and present new NTD data and updated burden estimates from the GBD 2013 study, together with new country-specific estimates. We show key outputs of GBD 2013 including country-specific estimates of prevalence or incidence and health-gap metrics for the aforementioned NTDs, as well as years lived with disability (YLDs) [3]. All data presented in Table 1 (except for rabies, cholera, cryptosporidiosis, and amoebiasis) are also available from the Institute for Health Metrics and Evaluation (IHME) website and were previously published in [3]. GBD 2013 estimated the global number of cases of the WHO-prioritized NTDs plus "other NTDs" in 2013, and at least 160 million cases of the additional neglected diseases. GBD 2013 reveals some major and notable changes in prevalence or incidence of these diseases since 1990. The most notable is a 610% increase in dengue fever incidence, consistent with the widespread emergence of this disease in Asia, Africa, and the Americas beyond what would be expected from changes in population demographics. Overall, the major Southeast Asian countries exhibit the highest incidence, as do selected countries in the Caribbean, Central America, and tropical areas of South America (Fig 1). South Asia and West African countries bordering the Gulf of Guinea also exhibit high incidence. In addition, there has been a nearly 175% increase in the estimated number of prevalent cases of cutaneous and mucocutaneous leishmaniasis, which is associated with major increases from 1990-2013 in the conflict areas of the Middle East and Central Asia (Afghanistan: 138%, Iraq: 1,293%, and Syria: 1,660%) and in East Africa (Sudan: 2,009%). The marked increases in these countries may be linked to conflict-associated collapse of health systems and/or increases in reporting rates over time [5][6][7][8]. The increase in prevalence of cutaneous and mucocutaneous leishmaniasis by country is shown in Fig 2. Marked increases of over 50% in the estimated number of prevalent cases were also noted for leprosy and foodborne trematodiases.
GBD 2013 found that there have been substantial reductions (approximately 30%-40%) in prevalent cases of trachoma-attributable vision impairment, lymphatic filariasis (LF), and onchocerciasis. There were also considerable reductions in ascariasis, which a previous analysis has associated with trends in China [9]. These changes relate to increases in mass drug administration (MDA) programs over the last decade [10], both school-based and community-based; the latter also reflect the scaling up of LF and onchocerciasis control and elimination efforts (ascaris is susceptible both to benzimidazoles and to ivermectin). Considering the progress of these control and elimination programs, we expect to see a further reduction in disease burden and possibly elimination of disease transmission in the coming years in many countries [11][12][13][14]. However, to date, there has been no substantial impact on the prevalence of schistosomiasis and only modest impact for 2 of the soil-transmitted helminth infections (STHs): hookworm and trichuriasis. Finally, other major trends noted in GBD 2013 include a 71% reduction in the number of cases of human African trypanosomiasis (HAT) infection. HAT is another NTD for which elimination may be plausible (especially the Gambian form) through case detection and treatment [15]. There have also been reductions in the number of cases of disease from rabies, cysticercosis, and cystic echinococcosis since 1990. However, cysticercosis and cystic echinococcosis still cause substantial morbidity (years lived with disability [YLDs]) and rabies continues to cause substantial mortality (see the section Death and DALY Trends for NTDs in 2013 below for more information on YLDs, years of life lost [YLLs], disability-adjusted life years [DALYs], and deaths).
Regional prevalence and incidence distribution in 2013
GBD 2013 further identified the regions most affected by NTDs. Figs 3 and 4 highlight the disease-endemic countries burdened with either the highest prevalence (prevalent cases per 100,000 population), incidence (incident cases per 100,000 person-years) (Fig 3), or absolute number of cases (Fig 4) of NTDs. As one would expect, the burden of disease in DALYs was closely correlated to the number of cases.
As shown in Fig 4, India has the greatest number of cases for at least 10 different NTDs, followed by China (3), the Democratic Republic of the Congo (DRC) (2), and Afghanistan, Brazil, and Nigeria (1 each). These numbers, while reflective of absolute burden, conflate disease prevalence and population size. By contrast, Fig 3 highlights areas where the prevalence of infection is highest, including less populous countries in sub-Saharan Africa, the Middle East and North Africa (MENA), and Southeast and Central Asia, as well as the surprisingly high prevalence of helminth infections in Oceania. We next provide specific observations for each of the major NTD categories.
Prevalence of helminth infections
The prevalence of the STH infections-trichuriasis, hookworm infection, and ascariasis-is especially high in Oceania, Southeast Asia, and South Asia. Large middle-income countries such as India and China, as well as Bangladesh, Indonesia, and the Philippines, stand out for having the largest numbers of cases of these infections.

(Fig 4 caption: Countries with the highest number of absolute cases for the diseases indicated and the estimated numbers in each country. These data are also available from the Institute for Health Metrics and Evaluation (IHME) website. Countries are color coded by Global Burden of Disease Study (GBD) regions. Abbreviations: CAR, Central African Republic; DRC, Democratic Republic of the Congo; STP, São Tomé and Principe. *Also includes mucocutaneous leishmaniasis; **incident cases rather than prevalent cases. As in Table 1, only symptomatic cases are estimated for dengue, trachoma, cystic echinococcosis, cysticercosis, and rabies.)
While India has the largest number of LF cases (with infections estimated at 7.1 million), LF prevalence is highest in Zambia and Eritrea. China and the Southeast Asian countries of Thailand and Laos exhibit the largest number of cases of foodborne trematodiases (FBT), with 67.3 million cases in China alone. Thailand and Laos also have the greatest prevalence of FBT. The largest numbers of cases of schistosomiasis infection are in Nigeria with 73 million cases, followed by Ethiopia, the DRC, and Kenya, while countries with the highest prevalence of schistosomiasis infection are Angola and Gabon of central sub-Saharan Africa followed by several countries in eastern sub-Saharan Africa. The DRC leads in the number of cases of onchocerciasis infection, estimated at 8.3 million, and also has the third highest prevalence of infection behind Liberia and South Sudan. India and China lead the world in cysticercosis and cystic echinococcosis disease cases with over 100,000 each, while Burkina Faso has the highest prevalence of disease from cysticercosis and Mongolia has the highest prevalence of disease from cystic echinococcosis.
Prevalence of protozoan infections
Among the 3 kinetoplastid infections, India has the largest number of visceral leishmaniasis (VL) cases at 62,000, although South Sudan and Sudan lead in prevalence. Brazil and Argentina have the largest number of Chagas disease infections with nearly 2 million cases each, while Bolivia has by far the highest prevalence-over 8,000 cases per 100,000 people. DRC has the largest number of absolute cases of prevalent and incident infections from HAT, estimated at 14,000 and 10,700, respectively. The Central African Republic had the highest prevalence and incidence of HAT infections in 2013.
Incidence and prevalence of viral and bacterial infections
India leads the world in number of dengue fever cases with 18.6 million followed by Indonesia with 11.1 million, with Oceania and Southeast Asian countries leading in terms of incidence. India has the largest number of trachoma cases at about 758,000, followed by Ethiopia, with the Sahelian nations of Ethiopia, South Sudan, and Mali leading in terms of prevalence. It is important to note here that GBD 2013 estimates for this disease only represent the prevalence of blindness and vision impairment due to trachoma. The largest numbers of rabies cases occur in India (with over 12,000), China, Pakistan, and Nigeria, while Myanmar and the Sahelian nations of Chad, Niger, and Somalia lead in terms of incidence. India, Brazil, and Indonesia have the largest number of prevalent cases of leprosy in the world at 333,000, 63,000, and 43,000, respectively, as well as incident cases. South Sudan and Madagascar lead in terms of disease prevalence, while the Oceanic countries of the Marshall Islands, Federated States of Micronesia, and Kiribati lead in terms of disease incidence.
Prevalence of other NTDs
Included in the group of "other NTDs" are a variety of diseases ranging from arthropod-borne viral infections to bacterial relapsing fevers to unspecified protozoan diseases and a variety of helminthic diseases for which limited disease burden data are available. Nevertheless, these diseases have an enormous impact. India has the highest number of cases of these diseases at 16.1 million, followed by China and Indonesia. Afghanistan and Yemen lead in terms of disease prevalence rates, followed by countries in western sub-Saharan Africa. Table 2 shows the estimated numbers of deaths and the age-standardized death rates in 2013 due to the 17 NTDs prioritized by WHO and other NTDs.
NTD-associated deaths in 2013
In all, it was estimated that 141,800 deaths could be attributable to the 17 NTDs prioritized by the WHO plus "other NTDs" in 2013. However, if the additional NTDs such as typhoid fever, cholera, paratyphoid fever, cryptosporidiosis, and amoebiasis are also included among the diseases in Table 2, together they are estimated to have caused over 500,000 deaths in 2013, roughly equivalent to the number of deaths from all motor vehicle road injuries or breast cancer and more than half of the malaria deaths [4]. The leading NTD killers in 2013 were VL, rabies, and Chagas disease. Among those neglected diseases not prioritized by the WHO in 2013, typhoid fever, cholera, and venomous animal contact were responsible for the largest number of deaths. A particularly disturbing trend was noted for VL, for which the numbers of cases, deaths, and YLDs have increased since 1990 (Tables 1-3). Rates, however, have been effectively static, suggesting that the increase in absolute numbers may also be due to demographic changes such as population growth and changes in population age structure. However, this finding also shows how little progress has been made in fighting this infection. When considered in terms of age and sex, the highest mortality caused by NTDs is also due to VL, primarily in the young, and Chagas disease, primarily in the elderly (Fig 5). "Other NTDs" also caused higher mortality with increased age. Rabies caused substantial mortality across all ages, and ascariasis was primarily a cause of mortality for children under 5 years of age. In general, for the other NTDs listed, mortality is highest in the youngest and oldest populations.
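As a toy illustration of the counts-versus-rates point made above for VL: absolute deaths can rise purely through population growth and aging even when age-specific (and hence age-standardized) death rates are unchanged. The sketch below uses entirely hypothetical numbers and the standard direct-standardization arithmetic; it is not derived from the GBD estimates.

```python
# Hypothetical two-age-group population observed in 1990 and 2013.
# Age-specific death rates (per 100,000 person-years) are held fixed,
# so the age-standardized rate is unchanged, yet total deaths grow
# because the 2013 population is larger and older.
standard_pop = {"0-49": 0.8, "50+": 0.2}   # reference age weights
rates = {"0-49": 2.0, "50+": 10.0}         # deaths per 100,000, same in both years

populations = {
    1990: {"0-49": 40_000_000, "50+": 5_000_000},
    2013: {"0-49": 55_000_000, "50+": 15_000_000},
}

for year, pop in populations.items():
    deaths = sum(rates[a] / 100_000 * pop[a] for a in pop)
    std_rate = sum(rates[a] * standard_pop[a] for a in rates)
    print(f"{year}: {deaths:,.0f} deaths, age-standardized rate {std_rate:.1f} per 100,000")
# Deaths rise from 1,300 to 2,600 while the standardized rate stays at 3.6.
```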
DALY trends for NTDs in 2013
As shown in Table 3, the GBD 2013 estimates for NTDs result in approximately 25 million DALYs, which is greater than the DALYs attributable to liver cancer, for instance [2]. The leading NTDs in terms of DALYs include VL, foodborne trematodiases, schistosomiasis, hookworm disease, and LF [2]. Among the additional neglected diseases, typhoid fever and cholera each cause more DALYs than VL. In addition, Table 3 shows the global burden of these diseases in terms of YLDs and YLLs. Onchocerciasis, the sixth highest cause of YLDs, was ranked highly in Liberia, Cameroon, and South Sudan in the top 10 leading causes of YLDs by country, predominantly due to onchocercal skin disease [3]. For most NTDs, YLDs account for a greater proportion of DALYs than do YLLs, and the most prevalent diseases (see Table 1) are also the ones that cause the most disability. In total, NTDs were responsible for 17 million YLDs and 8 million YLLs in 2013.
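The health-gap bookkeeping behind these figures is simply DALYs = YLLs + YLDs. A minimal sketch of that accounting is shown below; only the aggregate NTD totals (roughly 17 million YLDs and 8 million YLLs) come from the text, while the per-cause split is hypothetical and used only to illustrate how the YLD share of DALYs is computed.

```python
# DALY = YLL (fatal burden) + YLD (non-fatal burden).
# Per-cause numbers below are placeholders; only the NTD totals
# (~17 million YLDs, ~8 million YLLs) come from the text above.
burden = {                                  # cause: (YLDs, YLLs), in millions
    "visceral leishmaniasis": (0.1, 3.2),   # hypothetical split
    "hookworm disease":       (1.8, 0.0),   # hypothetical split
    "all other NTDs":         (15.1, 4.8),  # remainder so totals match the text
}

ylds = sum(v[0] for v in burden.values())
ylls = sum(v[1] for v in burden.values())
dalys = ylds + ylls
print(f"YLDs={ylds:.1f}M, YLLs={ylls:.1f}M, DALYs={dalys:.1f}M")
for cause, (yld, yll) in burden.items():
    print(f"{cause}: {100 * yld / (yld + yll):.0f}% of DALYs are non-fatal (YLD)")
```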
Evaluating the etiologic composition of DALYs by age, we see that VL dominates among the very young (<5 years), but ascariasis, dengue, rabies, and "other NTDs" are also important NTDs among pediatric age groups (Fig 6). For older school-aged children and adolescents, STH infections and schistosomiasis are the leading causes of DALYs. Among adults, foodborne trematodiases, LF (especially in males), and hookworm infection represent some of the highest disease burdens due to NTDs. Among adolescent and adult women, schistosomiasis and hookworm infection are also leading causes of DALYs. For hookworm infection, it is likely that the adult-onset DALYs are linked to its high prevalence among adults and the associated high risk of anemia in pregnant and lactating women [16]. Further, DALYs for schistosomiasis may have been even higher if the GBD 2013 considered female genital schistosomiasis, perhaps Africa's most common chronic gynecological disease, in these estimates [16]. Fig 7 shows that in many countries, especially those with the most DALYs from NTDs, DALYs were nearly halved from 1990 to 2013. While this clearly represents progress, Fig 7 also makes it clear that there is a lot of work to be done to reduce the substantial burden of these diseases, especially in sub-Saharan Africa and throughout Asia.
Considerations and limitations of the GBD 2013 results for NTDs
Our overall objective in this article is not to provide an in-depth critique of GBD 2013 methodology or data but rather to highlight findings that we consider of importance for the NTDs community. The GBD category of "other NTDs" includes a range of other neglected tropical diseases (relapsing fevers, typhus fever, spotted fever, Q fever, other rickettsioses, other mosquito-borne viral fevers, unspecified arthropod-borne viral fever, arenaviral haemorrhagic fever, toxoplasmosis, unspecified protozoal disease, taeniasis, diphyllobothriasis and sparganosis, other cestode infections, dracunculiasis, trichinellosis, strongyloidiasis, enterobiasis, and other helminthiases) but these are not modeled separately. No information from the GBD 2013 is currently available for Buruli ulcer, chikungunya virus (included under "other NTDs"), and yaws. Unless stated otherwise, estimates presented are for both symptomatic and asymptomatic cases.
Countries with the lowest NTD burden often lie within the high-income super region. However, we have noted previously that surprisingly high rates of NTDs also occur among the poorest residents of the world's largest economies: the Group of 20 nations (G20) plus Nigeria and other wealthy countries in the MENA, Asia, and the Americas (the concept of "blue marble health") [17]. The GBD 2013 confirms that high NTD burden occurs within the G20 nations and Nigeria [18]. However, gaps in the estimates remain. For example, GBD 2013 estimates for Chagas disease were restricted to endemic countries, and no estimates were made of imported cases in countries with large Latin American immigrant populations, such as the United States and Spain. However, the U.S. Centers for Disease Control and Prevention (CDC) estimates that there are at least 200,000 cases of Chagas disease in the US, which would place the US among the countries with the highest number of cases in the world [19]. In addition, there is evidence of triatomine insects infected with Trypanosoma cruzi and positive for human blood, as well as autochthonous transmission of Chagas disease in the US, especially in Texas [20]. Unfortunately, reporting of Chagas disease in the US is low, likely due to a lack of healthcare provider knowledge of the disease [21]. The exclusion of imported cases from nonendemic countries as well as underreporting of autochthonous transmission suggests that GBD 2013 is underestimating Chagas prevalence globally.
Also of interest is the impact of MDA for intestinal helminth infections, schistosomiasis, LF, onchocerciasis, and trachoma, which has been integrated and expanded on a global scale beginning in 2006 through financial support of the governments of the US (United States Agency for International Development's NTD Program) and the United Kingdom [22]. Control through MDA started at different time points for different diseases and countries, so progress has been heterogeneous [23]. For example, large-scale vector control for onchocerciasis (Onchocerciasis Control Programme in West Africa, OCP) started in West Africa in the mid-1970s. From 1995 onwards, the African Programme for Onchocerciasis Control (APOC) coordinated the gradual scale-up of MDA with ivermectin in the remaining endemic African countries, and the OCP used ivermectin to control any recrudescence. By now, a majority of areas in need of treatment for onchocerciasis are receiving MDA [24][25][26]. The Global Program to Eliminate Lymphatic Filariasis (GPELF) has been in place since 2000, while large-scale treatment for STH infections and schistosomiasis started later [27]. Following its successful large-scale use in Morocco, MDA of azithromycin has been included as part of WHO's surgery, antibiotics, facial cleanliness, and environmental improvement (SAFE) strategy for eliminating trachoma [28][29][30][31][32]. The trends we see for the major helminth infections may be explained by the relatively recent start of widespread schistosomiasis control programs and low single-dose drug efficacies for hookworm and trichuriasis [33,34], given that we are already seeing substantial reductions in ascariasis, as highlighted above.
We also point out that prevalence estimates for HAT and leprosy for the GBD 2013 are derived in part from reported incidence figures and literature-based assumptions about the natural history of these diseases.
It is important to note that the GBD 2013 cysticercosis estimates only include cases of neurocysticercosis (NCC)-associated epilepsy, although this brain infection may cause several other neurological disorders [35]. The GBD estimates are based on the estimated prevalence of secondary epilepsy and the prevalence of NCC among people living with secondary epilepsy. Because not all infections of cysticercosis result in NCC, it is likely that there are more cysticercosis cases than those estimated by GBD 2013. Likewise, not all cases of NCC are associated with epilepsy but rather with severe chronic headaches, stroke, focal deficit, and dementia, to name a few conditions. Because such cases were not included in the current estimates, this may have led to an underestimation of the burden of cysticercosis by the GBD 2013. Moreover, this means that part of the disability incurred by other neurological and mental health disorders caused by NCC increases the DALYs of these diseases, making cysticercosis itself look less important. Cysticercosis estimates are based on sparse literature data on NCC prevalence among people with epilepsy (from 12 countries: Bolivia, Brazil, Burkina Faso, Colombia, Ecuador, Guatemala, Honduras, India, Mexico, Peru, South Africa, and Tanzania) combined with country-level covariate data on access to sanitation and the proportion of the population that is Muslim. It has been pointed out appropriately by 1 of the reviewers for this manuscript that intermediate-host pig populations should be considered rather than the proportion of Muslim populations in order to adequately capture nations such as Chad, Ethiopia, and Sudan that have significant non-Muslim populations yet also have very few pigs. Unfortunately, none of these indicators measures precisely the risk factor of most interest for NCC, which is the exposure of humans to viable Taenia solium eggs in the environment. Such exposure, in turn, depends on the level of sanitation and the prevalence of human (T. solium) taeniasis in the population. The prevalence of taeniasis depends in turn on the consumption of undercooked pork meat. Therefore, although the presence of pigs may act as an indirect yet important indicator of NCC, it is not the only one. Other factors also play a role (people might eat well-cooked pork, which is common when the meat is consumed at home, or pigs may never be exposed to human feces in areas where their access to human feces is restricted). Better estimates of the burden of NCC will be feasible as more data on the actual prevalence of NCC-associated neurological disorders become available with the development of better diagnosis for infections of the brain [36]. Another challenge is that the very clustered nature of cysticercosis, NCC, and taeniasis makes it difficult to generalize data from small-scale studies to larger areas, making it difficult to evaluate the true burden of cysticercosis.
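A minimal sketch of the estimation chain described above (population × epilepsy prevalence × fraction of epilepsy attributable to NCC) is given below with entirely hypothetical inputs; the actual GBD models are far more elaborate, so this is only meant to make explicit why non-epilepsy NCC and non-NCC cysticercosis fall outside the estimate.

```python
# Cysticercosis burden in GBD-style accounting: only NCC-associated epilepsy is counted.
population = 30_000_000          # hypothetical endemic-country population
epilepsy_prevalence = 0.008      # hypothetical: 0.8% of people live with epilepsy
ncc_fraction_of_epilepsy = 0.29  # hypothetical: share of epilepsy attributable to NCC

ncc_epilepsy_cases = population * epilepsy_prevalence * ncc_fraction_of_epilepsy
print(f"NCC-associated epilepsy cases counted: {ncc_epilepsy_cases:,.0f}")

# Cases with NCC but other presentations (headache, stroke, focal deficit, dementia)
# and cysticercosis without brain involvement are not captured, so the true
# cysticercosis burden is larger than this figure.
```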
Similar to cysticercosis, the GBD 2013 cystic echinococcosis estimates relied heavily on modeling approaches to fill in data gaps. While this method does allow for a regional picture of where the condition is more prevalent, many individual country-level estimates will require additional verification and refinement. One example of where country-level estimates will need to be improved is in Asia. While portions of China and Central Asia are known to be highly endemic for cystic echinococcosis, most countries in Southeast Asia (e.g., Indonesia, Thailand, and Vietnam) are believed to be nonendemic for this disease. This observation is evidenced by both the lack of reports of autochthonous human cases from these countries and no reported animal infections. However, based on GBD country-level covariate information used to fill in data gaps, the numbers of estimated cases in these countries appear to be high, whereas current data indicate that there are few or no cases. The surprisingly high cystic echinococcosis case numbers predicted for Indonesia are an example of this phenomenon.
In 2010, the WHO Foodborne Disease Burden Epidemiology Reference Group (FERG) released their own calculations for the burden of foodborne diseases such as cysticercosis and cystic echinococcosis [37]. The differences in DALY estimates for some diseases between the FERG estimates and GBD 2010 were striking [2,38]. For cysticercosis, the GBD 2010 estimated 514,000 DALYs, whereas the FERG study estimated over 2.7 million DALYs. The GBD 2013 estimates 340,000 DALYs for cysticercosis, still far from the FERG estimates. The FERG study used some of the same input data as the GBD study, similar analytical methods, and many of the same weightings [39]. However, some different and important choices were made. In the case of cysticercosis, the FERG study allocated a much larger proportion of the epilepsy burden to cysticercosis, based on data from a systematic review [40]. Such differences may be considered important with respect to assessing changing burden over time and highlight the need to align methodologies in an open and transparent fashion.
For several NTDs, the number of deaths reported by the GBD 2013 is also likely to represent an underestimate. For example, urogenital schistosomiasis is a major cause of renal disease and bladder cancer in Africa and the Middle East [41], and yet only 5,500 deaths were ascribed to all of the world's 290 million schistosomiasis cases. It is likely that many of the schistosomiasis-related deaths are being classified in categories such as chronic kidney disease or bladder cancer, now linked to 956,200 deaths and 173,900 deaths, respectively [4]. Also, despite the official classification of Opisthorchis viverrini and Clonorchis sinensis as Group 1 carcinogens causing highly fatal cholangiocarcinoma [42], no deaths were attributed to foodborne trematodiases. Previous estimates resulted in 7,000-8,000 deaths annually due to cholangiocarcinoma caused by these 2 foodborne liver fluke species, and even these numbers were considered too low [38,43,44]. However, it is likely again that these deaths were classified as cancer deaths and not as deaths due to foodborne trematodiases. Similarly, there are 183,400 deaths ascribed to iron-deficiency anemia [4], of which hookworm disease is a major cause [45], and yet no deaths are attributed to this NTD in the GBD 2013. These and other factors may result in underreporting of deaths due to diseases such as schistosomiasis and hookworm infection. Estimates of dengue may also be too low. The GBD 2013 estimates for deaths from dengue virus range from 8,365 in 1995 to 10,394 in 2010, but in 2013, they decreased to 9,100. Considering the dramatic increase in dengue incidence and geographic spread of dengue transmission seen in recent years as well as recent evidence on the underreporting of dengue deaths in well-funded surveillance systems, the number of deaths due to dengue virus may be substantially higher than that estimated by the GBD study for the year 2013 [46][47][48][49]. A recent systematic review and meta-analysis suggests that estimates of mortality attributed to Chagas disease may also be low [50]. Similarly, the GBD 2013 estimates 23,500 rabies deaths, but another recent estimate also based on modeling and extensive literature review estimated 59,000 annual deaths from rabies [51].
For similar reasons, the GBD 2013 also likely underestimates the DALYs attributed to the NTDs, especially for schistosomiasis and hookworm disease [1]. The DALYs for scabies may also be an underestimate, as the indirect effects of streptococcal infection on renal and cardiovascular function may not be appreciated. Conversely, the DALY estimates for foodborne trematodiasis nearly doubled between GBD 2010 and GBD 2013, from 1.9 million to 3.6 million DALYs [1,2]. This surge was caused in part by a revised disability weight for paragonimiasis. Efforts are underway to harmonize these changes in the upcoming GBD 2015. The corrected burden estimates for foodborne trematodiases might be in the 2.0-2.5 million DALYs range, which would also be in line with recently published WHO estimates [38]. For rabies, the GBD 2013 estimated 1.24 million DALYs, but similar to the death estimates, a recent review has estimated 3.7 million DALYs from rabies [51].
While the GBD 2013 provides timely and relevant NTD burden data, critical caveats need to be clearly stated and considered in the interpretation and application of these estimates. Among the most frequently mentioned and also critical gaps is the lack of high-quality epidemiological data, which is not only an NTD-specific issue but of particular importance for this disease cluster. Another critical issue, again not exclusive to NTDs, is the correct modeling of pathways from infection to disease and death and a correct attribution of the resulting YLDs, YLLs, and DALYs. Both aforementioned issues call first and foremost for more primary data from NTD-specific research in order to strengthen the evidence base and the case for NTDs. However, there are also methodological decisions in the global burden of disease estimation which need to be carefully considered. For the sake of brevity, we would like to highlight just 2 of these methodological points, which cover the spectrum from (1) practical issues that the NTD community can address immediately to improve the next generation of GBD estimates to (2) very fundamental decisions in the design of the DALYs, which the NTD community cannot directly influence within the massive GBD collaboration but can at least carefully observe and comment on.
First, for many country estimates, the GBD disease modeling approach borrows strength from relevant data in neighboring/similar countries and from additional covariates from a massive covariate database. However, this may lead to some estimation errors, which are negligible at the global level but relevant at national scale. For instance, Australia is considered rabies free [52], but GBD reported 2 rabies deaths in Australia. GBD also reports 5 rabies deaths in the UK just in 2013. However, there have only been 4 rabies deaths in the UK since 2000 (all in individuals bitten by dogs when abroad). Identifying such inconsistencies and providing guidance to the GBD data analysis team (e.g., forcing certain country estimates to be strictly zero) would further improve the precision of GBD estimates. For example, as a potential strategy for other diseases, autochthonous human case reports on foodborne trematodiases have been reviewed and mapped by Fürst et al. In that work, countries were classified (1) as having suitable national data that should be directly applied in burden estimation, (2) as having no suitable national data but case reports and where, consequently, national estimates should be predicted based on the data from similar countries and relevant covariates as the best option, and (3) as having no suitable national data, no case reports, and as being not known to be endemic, where the models should therefore not predict any cases [53].
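The three-way classification used by Fürst et al. for foodborne trematodiases amounts to a simple decision rule; the sketch below is our paraphrase of that logic with hypothetical country inputs, not the actual GBD or Fürst et al. modeling code.

```python
from enum import Enum

class EstimateSource(Enum):
    NATIONAL_DATA = 1      # suitable national data: use it directly
    MODEL_PREDICTION = 2   # case reports only: predict from similar countries + covariates
    FORCE_ZERO = 3         # no data, no case reports, not known to be endemic

def classify(has_national_data: bool, has_case_reports: bool, known_endemic: bool) -> EstimateSource:
    if has_national_data:
        return EstimateSource.NATIONAL_DATA
    if has_case_reports or known_endemic:
        return EstimateSource.MODEL_PREDICTION
    return EstimateSource.FORCE_ZERO

# Hypothetical examples:
print(classify(True, True, True))     # EstimateSource.NATIONAL_DATA
print(classify(False, True, False))   # EstimateSource.MODEL_PREDICTION
print(classify(False, False, False))  # EstimateSource.FORCE_ZERO (e.g., rabies in Australia)
```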
Second, since GBD 2010, the GBD studies switched from incidence- to prevalence-based DALYs [54]. The exact effect of this fundamental decision on the burden estimation and the comparison of acute with chronic sequelae in populations experiencing varying dynamics is unclear. This is true in general and hence also for the NTD burden estimates. However, at least for some NTDs, readers can refer to the WHO FERG estimates, which provide incidence-based DALY estimates, in order to obtain a more complete picture of the respective burden estimates [37][38][39].
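For orientation, the distinction can be written out explicitly; the expressions below are the standard textbook forms of the two YLD conventions (DW denotes the disability weight) and are given as an assumption, not as a quotation of the GBD or FERG protocols.

```latex
\mathrm{YLD}_{\text{prevalence-based}} \;=\; \sum_{\text{sequelae}} \text{(prevalent cases)} \times \mathrm{DW},
\qquad
\mathrm{YLD}_{\text{incidence-based}} \;=\; \sum_{\text{sequelae}} \text{(incident cases)} \times \text{(average duration)} \times \mathrm{DW}.
```

In a population at steady state the two coincide, but when incidence, treatment coverage, or case fatality is changing, as for several of the NTDs discussed above, prevalence- and incidence-based DALYs can diverge appreciably, which is why the FERG figures provide a useful complementary view.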
Finally, terms such as prevalence or cases are not always clearly defined in GBD models for individual diseases, which can create some confusion when interpreting the meaning of the results. Future iterations of the GBD study will likely be more transparent, making interpretation simpler and comparison between estimates from other sources more clear.
Concluding remarks
The GBD 2013 highlights reductions in the global prevalence of some specific NTDs such as LF, onchocerciasis, trachoma, and ascariasis, likely due to MDA, water, sanitation, and hygiene (WASH), and other control measures, as well as reductions in HAT, likely due to expanded case detection and treatment efforts within elimination strategies, especially for the Gambian form of the disease [10,15]. In contrast, we have not seen meaningful declines in diseases such as hookworm infection, trichuriasis, and schistosomiasis, while for dengue, leishmaniasis, and foodborne trematodiases, we have seen substantial increases [3]. Therefore, we need to consider adopting public health policies to address these trends and adapt our current approaches to specifically guide better disease surveillance, improved water quality and sanitation, affordable diagnostic tests, access to healthcare and medications, and further investments in new preventive and disease-control technologies. We also need to look at shaping NTD control policies in the countries where the burden of NTDs is highest, which include large middle-income countries such as India, China, and Brazil, where income inequality and the resulting inequality in access to healthcare, safe housing, clean water, and sanitation have allowed these diseases to persist despite economic growth [17]. However, GBD 2013 also highlights the high prevalence of NTDs in some of the smaller conflict-ridden nations and nations in a postconflict period. For example, Liberia, Central African Republic, South Sudan, and Afghanistan lead the world in several NTD categories. Creating new strategies to fight NTDs in such countries, which often have highly fractured health infrastructure and struggle to keep hospitals open, poses a different, perhaps more difficult policy challenge, but it is one that should not be ignored. High NTD prevalence was also noted in several Oceanic and Southeast Asian countries and should also be addressed.
While there are some concerning trends revealed by the GBD 2013, we should not overlook or downplay the major achievements so far. During the 23 years from 1990 to 2013, much progress has been made in reducing the prevalence and burden of several NTDs. In the year 2000, the United Nations Millennium Development Goals (MDGs) spurred action against human immunodeficiency virus, tuberculosis, and malaria. Those actions have paid off in ensuing years as we are making continued progress in fighting "the big three." However, goals specifically targeting NTDs were notably missing from the MDGs beyond a mention of the "other diseases." This omission sparked a response from a small group of dedicated NTD activists to raise the profile of NTDs in the global health community. Since then, we have seen the creation of a new Department of NTDs at WHO, a Global Network for NTDs, the creation of NTD research and support centers, and the establishment of programs to support MDA at the United States Agency for International Development and the British Department for International Development [22]. In addition, several product development partnerships (PDPs) have formed to develop new NTD drugs, diagnostics, and vaccines, and moreover, major pharmaceutical investment in a dengue vaccine has resulted in the first dengue vaccine currently approved in 3 countries as of January 2016. An open access scientific journal dedicated solely to NTDs began publication in 2007. The first WHO report on NTDs in 2010 was followed by a roadmap action plan in 2012, the launch of the END Fund and the London Declaration the same year, and a specific resolution for NTDs from the World Health Assembly in 2013. Such efforts have continued beyond 2013 with agreements such as the Addis Ababa NTD Commitment signed at the end of 2014, establishment of the NTD Modeling Consortium, and the recent inclusion of NTDs in the new UN Sustainable Development Goals. As a result of these joint efforts, some countries have successfully eliminated certain endemic NTDs. For example, Mexico was declared to have eliminated onchocerciasis in 2015 following its elimination in Colombia and Ecuador in 2013 and 2014, respectively [12].
Overall, the results presented here indicate that, despite significant gains, much work remains in the fight against NTDs. There are still approximately 2.3 billion cases of NTDs, which cause a substantial global disease burden. It is critical that we as a global community continue our efforts to help end the suffering caused by NTDs. Helping nations to achieve health for the poorest of their citizens will be a step forward in achieving their Sustainable Development Goals. Finally, most of the NTDs are still underreported, and the quantification of their burden is limited by the data that are available. Therefore, screening and notification efforts for the NTDs should be increased in order to capture the true burden of these diseases. Understanding the true burden of NTDs is essential to track health progress, assess the impact of public health interventions, and inform evidence-based policy decisions.
Quasi-solutions of genuinely nonlinear forward-backward ultra-parabolic equations
In the present paper we have proved the existence of quasi-solutions of genuinely nonlinear forward-backward ultra-parabolic equations. Quasi-solutions are obtained with the help of the vanishing anisotropic temporal diffusion method. Moreover, at the present stage of our research we assume that various choices of temporal artificial diffusion coefficients lead to entropy solutions or to quasi-solutions. The latter assumption is the subject of our further scientific research.
Introduction
Ultra-parabolic equations are used in the theory of boundary layers and in the mathematical models of Brownian motion [1][2][3]. Entropy solutions of nonlinear ultra-parabolic equations were studied in [4][5][6][7]; see references therein. It is important to note that entropy solutions were first obtained in [8] for hyperbolic differential equations and later extended to various types of partial differential equations.
The research on the well-posedness of nonlinear forward-backward parabolic equations was started in [9][10][11][12][13][14]. Moreover, the history of the research on the linear case was described in [15].
Here we apply the vanishing anisotropic temporal diffusion (viscosity) method [16]. The technique of elliptic regularization was invented in [17] during the study of degenerate parabolic equations, and subsequently adapted for the Navier-Stokes equations [18] and for forward-backward parabolic equations [9,11]. Independently, the elliptic regularization of hyperbolic nonlinear equations was studied in [19,20]. Furthermore, singular limits of anisotropic elliptic perturbations were obtained in [21,22]. Also, for hyperbolic equations the vanishing viscosity method with gradient dependent viscosity was applied in [23,24].
We have obtained a quasi-solution with the help of a corresponding sequence of weak solutions {u_ε}_{ε>0} to problem Π_ε as ε → 0+. The main difference from previous results on quasi-solutions is that the right-hand sides of kinetic boundary conditions (4.7b)-(4.7e) differ from those in [32]. In the present paper we have used the vanishing diffusion method with the help of the anisotropic p-Laplacian, p = (2, ..., 2, p_1, p_2) ∈ R^{d+2}, p_1, p_2 > 1. The physical meaning of the vanishing anisotropic diffusion method is that we take into account fast and slow diffusive regimes when |p_1 − 2| + |p_2 − 2| ≠ 0; see [33, Chapter 5]. This paper is organized as follows. In section 2 we formulate the non-homogeneous Dirichlet problem Π_0. Since this problem is ill-posed, in section 3 we formulate problem Π_ε. Quasi-solutions to problem Π_0 are obtained as singular limits of weak solutions to problem Π_ε as ε → 0+; see sections 4 and 5. We have not yet proved that singular limits of weak solutions to problem Π_ε are also entropy solutions to problem Π_0 even if |p_1 − 2| + |p_2 − 2| ≠ 0, so this is still an open question (see Remark 4).
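For orientation only, and not as a quotation of the paper's displayed formulas: with the anisotropy vector p = (2, ..., 2, p_1, p_2) given above, the standard anisotropic p-Laplacian acting on u(x, t, s) reads

```latex
\Delta_{\mathbf{p}} u \;=\; \Delta_x u
  \;+\; \partial_t\!\left(|\partial_t u|^{\,p_1-2}\,\partial_t u\right)
  \;+\; \partial_s\!\left(|\partial_s u|^{\,p_2-2}\,\partial_s u\right),
```

so the exponents p_1 and p_2 act only on the two temporal directions t and s; "vanishing anisotropic temporal diffusion" then corresponds to weighting this operator by ε and letting ε → 0+.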
Genuinely nonlinear forward-backward ultra-parabolic equation
Let scalar functions a(z), b(z) and vector function φ(z) satisfy the following conditions.
Conditions on a, b & φ

Function a is non-monotonic. Moreover, a′ and b′ satisfy the genuine nonlinearity condition. Under Conditions on a, b & φ, we formulate the boundary value problem Π_0. Here we deal with anisotropic Sobolev spaces; see, for example, [34]. The anisotropic Sobolev space W^{1,p}_0(G_{T,S}) is equipped with the corresponding norm.
Anisotropic elliptic regularization
We are going to construct a quasi-solution as a singular limit of weak solutions u ε to the nonhomogeneous Dirichlet problem Π ε as ε → 0+.
Problem Π_ε. For arbitrary initial and final conditions, a solution is sought in a weak sense; see Definition 1. We assume here that ε ∈ (0, 1]. A function u is called a weak solution to problem Π_ε if the following demands hold for every ϕ ∈ L^∞(G_{T,S}) ∩ W^{1,p}_0(G_{T,S}). Remark 2. We can reformulate (3.4a) in an equivalent way. Remark 3. We assume that an extension of u into G_{T,S} exists if u_{Γ_0}, u_{Γ_T}, u_{Ξ_0} and u_{Ξ_S} belong to C^{1,α}_0, 0 < α < 1. In the case p_1 = p_2 = 2 we deal with entropy solutions (see Remark 4), and with the help of [25, Theorem 1] we can decrease the smoothness of the initial and final data.
Proposition 1. Under Conditions on a, b & φ, problem Π_ε has at least one weak solution satisfying (3.5) and the energy estimate.
Kinetic formulation of forward-backward ultra-parabolic equations
In this section we deal with the kinetic formulation of forward-backward ultra-parabolic equations. Here we use methods developed in [25-27, 31, 36-41].
Consider the function χ, which is defined in the following way.
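A classical form of this definition, used throughout the kinetic-formulation references cited above, is the following; we state it here as an assumption about the intended definition rather than as a quotation:

```latex
\chi(\lambda, u) \;=\;
\begin{cases}
 +1, & 0 < \lambda < u,\\[2pt]
 -1, & u < \lambda < 0,\\[2pt]
 \;\;0, & \text{elsewhere},
\end{cases}
\qquad\text{so that}\qquad
\int_{\mathbb{R}} \chi(\lambda, u)\,\mathrm{d}\lambda \;=\; u.
```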
Definition 2. Let N be a positive integer and let O be an open set of R^N.

The following lemma, formulated and proved in [40], guarantees the link between sequences of χ-functions and their limits.
The main result of the present paper is the following theorem.
We need to introduce the convex entropy flux pair (η, q). The entropy inequality is valid for every convex entropy flux pair (η, q); we get this inequality by testing with γ, an arbitrary nonnegative finite test function in G_{T,S}. Using the kinetic formulation, from (5.8) we obtain (5.9). Kinetic equation (4.7e) follows from (5.9) due to the arbitrariness of η′. Furthermore, kinetic boundary conditions (4.7b)-(4.7e) are deduced from the corresponding equalities.
Conclusion
In the present paper we have enriched the results presented in [25] in the case when the temporal artificial diffusion coefficients depend on the partial derivatives of u_ε in the t and s variables. Namely, we have shown that various choices of temporal artificial diffusion coefficients lead to quasi-solutions or to entropy solutions of problem Π_0. The latter assumption is still under discussion.
Motion-Blur-Free High-Speed Video Shooting Using a Resonant Mirror
This study proposes a novel concept of actuator-driven frame-by-frame intermittent tracking for motion-blur-free video shooting of fast-moving objects. The camera frame and shutter timings are controlled for motion blur reduction in synchronization with a free-vibration-type actuator vibrating with a large amplitude at hundreds of hertz so that motion blur can be significantly reduced in free-viewpoint high-frame-rate video shooting for fast-moving objects by deriving the maximum performance of the actuator. We develop a prototype of a motion-blur-free video shooting system by implementing our frame-by-frame intermittent tracking algorithm on a high-speed video camera system with a resonant mirror vibrating at 750 Hz. It can capture 1024 × 1024 images of fast-moving objects at 750 fps with an exposure time of 0.33 ms without motion blur. Several experimental results for fast-moving objects verify that our proposed method can reduce image degradation from motion blur without decreasing the camera exposure time.
Introduction
High-speed cameras are widely used in high-frame-rate (HFR) video shooting for fast-moving scenes in various applications such as factory inspection, biomedicine, multimedia and civil engineering. In HFR video shooting, the camera's exposure time should be lowered as the apparent speeds of the target scenes increase to reduce motion blur. Image degradation due to motion blur is affected by the camera's exposure time, as well as the target speeds. However, the HFR images captured with a lower exposure time become too dark for observation when fast-moving scenes are shot in low light because of insufficient intensity of the light projected on the image sensor. When video shooting fast-moving scenes with high magnification, such as precise product inspection on a conveyor line, road surface and tunnel wall inspection from a fast-moving car and flowing cells in microscopic fields, the trade-off between brightness and motion blur in video shooting is distinctly aggravated. This is because the light intensity projected on the image sensor is lowered and the apparent speed increases with increasing magnification enabled for precise observation.
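To make the exposure-time/motion-blur trade-off concrete: the blur length on the image sensor is simply the apparent image-plane speed multiplied by the exposure time. The sketch below uses hypothetical numbers for object speed, magnification, and pixel pitch; only the proportional relationship itself is implied by the discussion above.

```python
# Motion blur on the image sensor: blur = apparent speed x exposure time.
object_speed_mps = 2.0        # hypothetical: object moves at 2 m/s
magnification = 0.5           # hypothetical optical magnification
pixel_pitch_um = 10.0         # hypothetical pixel size (micrometres)

apparent_speed_umps = object_speed_mps * magnification * 1e6   # um/s on the sensor

for exposure_ms in (0.33, 1.0, 4.0):
    blur_um = apparent_speed_umps * exposure_ms * 1e-3
    blur_px = blur_um / pixel_pitch_um
    print(f"exposure {exposure_ms:4.2f} ms -> blur {blur_px:6.0f} pixels")
# Longer exposures gather more light but smear the object over proportionally more
# pixels, which is the trade-off that frame-by-frame intermittent tracking addresses.
```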
In order to reduce image degradation due to motion blur when observing moving scenes, many motion deblurring methods [1,2] have been proposed to restore the blurred images by deconvolution using the estimated blur kernels that express the degrees and distributions of motion blur in the images. Blind deconvolution methodologies were used to estimate the blur kernels from a single image using parametric models for maximum a posteriori estimation [3][4][5]. In addition, various types of single-image motion deblurring methods have been proposed for correct prediction of true edges using filters [6][7][8], multi-scale coarse-to-fine approaches [9,10] and reduction of ill-posed image priors in the deblurred images such as normalized sparsity priors [11], color priors [12], patch priors [13], dark channel priors [14] and smoothness priors [15]. Multi-image motion deblurring methods have been used to accurately estimate the blur kernels from multiple images such as super-resolution for consecutively captured images [16][17][18], a high-resolution still camera with a video camera [19,20] and image deblurring with blurred image pairs [21,22]. Considering a camera motion model such as a perspective motion model [23] and simplified three-DOF models [24,25], several studies have reported motion deblurring systems by estimating the camera's egomotion with gyro sensors and accelerometers [26] or the camera's geometric location [27]. Most of these motion deblurring methods dealt with image restoration of input images degraded due to motion blur, and they did not consider the acquisition of non-blurred input images. There were limitations to the extent to which the images could be improved, and it was difficult to completely eliminate motion blur in the input images when significant changes with large displacement occurred in the images.
In this study, we propose a novel concept for motion-blur-free video shooting that can capture non-blurred images of fast-moving objects without lowering the camera's exposure time. Building on the camera-driven frame-by-frame intermittent tracking method [28] in which the actuators are simultaneously controlled for tracking in synchronization with the camera's frame timings, we extend the method to the actuator-driven frame-by-frame intermittent tracking method so that the camera's frame timings are controlled for motion-blur-free video shooting in synchronization with the large amplitude vibration of a free-vibration-type actuator such as a resonant mirror vibrating at a high frequency corresponding to its natural frequency. The proposed method can derive the maximum performance of the free-vibration-type actuator that enables motion-blur-free shooting of faster-moving objects at a higher frame rate. The remainder of this paper is organized as follows. Related works on image stabilization for blur reduction, high-speed vision and camera-driven frame-by-frame intermittent tracking method are presented in greater detail in Section 2. In Section 3, we propose the actuator-driven frame-by-frame intermittent tracking method and describe how to determine the parameters in frame-by-frame intermittent tracking. Section 4 provides an outline of the configuration of our motion-blur-free video shooting system with a resonant mirror vibrating at 750 Hz and describes the algorithm implemented for motion-blur-free video shooting of 1024 × 1024 images at 750 fps. The parameters are verified in the preliminary experiments described in Section 5. In Section 6, the effectiveness of the method is verified by demonstrating the results of HFR video shooting experiments performed for fast-moving scenes.
Image Stabilization
To reduce undesirable motion resulting from shaking or jiggling of the camera, a large number of image stabilization techniques has been developed. These techniques can be categorized into: (1) optical image stabilization (OIS) and (2) digital image stabilization (DIS). The lens-shift OIS systems have been designed to shift their optical path using optomechatronic devices such as shift-mechanisms for lens barrels [29,30], a fluidic prism [31], a three-DOF lens platform with magnetic actuation [32] and a deformable mirror [33]. For small systems such as mobile phones, the sensor-shift OIS systems have been compactly designed to shift their image sensors using voice coil actuators [34][35][36][37][38][39]. Many types of multi-DOF gimbal control systems [40][41][42][43][44] have been also used in OIS systems in handheld shooting and drone-based aerial videography with ready-made commercial digital cameras. These OIS systems can stabilize input images for reducing motion blur resulting from camera shake by controlling the optical path with the camera's internal sensors such as gyro sensors. However, these systems are not suitable for shooting blur-free images of fast-moving scenes when the camera is fixed. This is because the internal sensors cannot detect any apparent motion in the captured images. DIS systems can stabilize input images by compensating the residual fluctuation motion using an image processing technique that estimates the local motion vectors such as block matching [45][46][47], bit-plane matching [48,49], feature point matching [50][51][52][53][54][55] and optical flow estimation [56][57][58][59]. Most of these DIS systems do not need any additional mechanical or optical device, and this feature makes them suitable for low-cost electronics. However, these systems are not suitable for capturing non-blurred input images because they cannot address existing motion blur in the captured images. Since its origination in [60], many high-speed photography methods and systems with strobe lights [61][62][63][64] have been developed. They can shoot videos of fast-moving objects without motion blur with very short strobe pulses, whereas they cannot shoot videos of fast-moving objects at distant places under daylight conditions because ambient light becomes dominant.
High-Speed Vision
In order to track fast-moving objects with visual feedback, many real-time high-speed vision systems operating at 1000 fps or more have been developed [65][66][67][68]. Various types of image processing algorithms such as optical-flow [69], camshift tracking [70], multi-object tracking [71], feature point tracking [72] and face-tracking [73] have been implemented for HFR visual tracking accelerated by field-programmable gate arrays (FPGAs) and graphic processing units (GPUs) on high-speed vision systems. The effectiveness of high-speed vision has been demonstrated in tracking applications such as robot manipulation [74][75][76][77], multicopter tracking [78,79], microscopic cell analysis [80][81][82][83] and vibration analysis [84]. The tracking performances of most of these tracking systems are limited by the time delay of dozens of frames for convergence in tracking control, because the responsive speed of the actuator is much slower than those in the accelerated video capturing and processing in high-speed vision systems. Recently, the 1-ms auto pan-tilt system [85] using galvano-mirrors with accelerated pan-tilt actuators has achieved dynamic image control for ultrafast tracking of moving objects, and such galvano-mirror-based active vision systems can function as virtual multiple tracking cameras that can observe hundreds of different views in a second [86]. By tracking an object to be observed in the center of the camera view with visual feedback, such high-speed tracking systems can reduce motion blur without decreasing their exposure time because the apparent motion of the object to be observed can be canceled in the camera view when the tracking control works correctly. However, motion-blur-free video shooting in such systems is limited to a single target object because the viewpoints cannot be freely changed for observing other objects when the target object is tracked in the camera view.
Camera-Driven Frame-By-Frame Intermittent Tracking
For viewpoint-free video shooting of fast-moving objects without increasing motion blur, Inoue et al. proposed a frame-by-frame intermittent tracking method [87] that can reduce motion blur by alternating different gaze control methods on an ultrafast active vision system from tracking control to back-to-home control at every frame. In synchronization with the camera shutter timings, the tracking control is executed to maintain the apparent velocity of the object to be observed on the image sensor at zero for motion blur reduction when the camera shutter is open. The back-to-home control is executed to reset the optical path of the camera to its home position without degrading the image quality when the camera shutter is closed, because the image sensor is blind to any apparent motion while no light is incident on it. Figure 1 shows the control scheme of the camera-driven frame-by-frame intermittent tracking, in which the actuator for the alternative gaze control is synchronized with the fixed frame cycles of the camera. Based on the concept of the camera-driven frame-by-frame intermittent tracking, high-speed mirror-drive tracking systems using high-frequency response actuators such as piezo-mirrors [87,88] and galvano-mirrors [89,90] have been reported for motion-blur-free video shooting of fast-moving objects at hundreds of frames per second. In [87], a mirror-drive two-degrees-of-freedom (DOF) piezo-actuator-based tracking system that can capture 512 × 512 images of fast-moving objects at 125 fps with an exposure time of 4 ms without motion blur was proposed. Two piezo-mirrors with 30 mm × 30 mm mirror surfaces were used, and their movable ranges in the pan and tilt directions were very narrow: 0.17 and 0.14 degrees, respectively. Additionally, the trajectory of the piezo-mirror had considerable ripples at its natural frequency of approximately 800 Hz once the motor command was provided to the piezo-mirror, and it took 4 ms or more for the ripples to decay. This system could not perform frame-by-frame intermittent tracking at a frame rate larger than 125 fps, and the maximum angular speeds for the pan and tilt angles were limited to 67.1 and 49.7 °/s, respectively. Thus, the following constraints of a high-frequency response actuator remained in the camera-driven frame-by-frame intermittent tracking:

(1) Limited movable range

A high-frequency response actuator has to perform a trade-off between its frequency response and movable range. The amplitude of the repetitive motion at a high frequency is limited because the movable range of a high-frequency response actuator gets narrower as its mechanical time constant gets smaller. In frame-by-frame intermittent tracking with the camera exposure time, the high-frequency response actuator should continuously track a target object whenever the camera shutter is open. However, the motion blur cannot be completely eliminated when the distance traveled by the object while the camera shutter is open is larger than the movable range of the actuator. The admissible speed of the target object is therefore limited in motion-blur-free video shooting with a large camera exposure time.
(2) Limited controllability in the high-frequency range

A high-frequency response actuator requires a certain time to attenuate its ringing response with resonant vibration, because it achieves its high-frequency drive with a low damping ratio by reducing viscous effects such as friction. The trajectory of a high-frequency response actuator should be linearly controlled whenever the camera shutter is open so as to cancel the apparent speed of the target object, assuming that it moves at a fixed speed as long as the camera shutter is open. However, it is difficult to completely eliminate ripples in the actuator's trajectory in frame-by-frame intermittent tracking at hundreds of hertz or more, because the frame interval is not larger than the damping time of the resonant vibration, and motion blur is still retained in the images.
Concept
According to the constraints stated in the previous section, the camera-driven frame-by-frame intermittent tracking method cannot always derive the maximum performance of a high-frequency response actuator, and the frame rate of a high-speed vision system should be lowered so as to maintain the linear trajectory of the actuator during the time the camera shutter is open. The very flexible controllability of the high-speed vision system, whose frequency response is much higher than that of the actuator, was not fully utilized in the frame-by-frame intermittent tracking.
Thus, in this study, we propose an improved frame-by-frame intermittent tracking method that can reduce motion blur in video shooting by controlling the camera shutter timings in synchronization with the resonant vibration of a free-vibration-type actuator such as a resonant mirror. Its high-frequency vibration with a large amplitude enables the ultrafast gaze control to track fast-moving objects during the time the camera shutter is open. Figure 2 shows the concept of our proposed actuator-driven frame-by-frame intermittent tracking method. When the camera's viewpoint moves unidirectionally, the viewpoint's position x(t) at time t vibrates at a cycle time of T = 1/f_0 on the following sinusoid trajectory,

x(t) = A(t) sin((2π/T)t),
where f_0 is the resonant frequency of the free-vibration-type actuator and A(t) is the amplitude of the vibration at time t, assuming x(t) = 0 when t = 0. In the actuator-driven tracking approach, the exposure start and end times to capture the image at frame k, which are expressed as t^O_k and t^C_k, respectively, are controlled so that the camera shutter is open when the viewpoint is located in the highly linear range within the sinusoid trajectory. In parallel with the shutter timing control, the slope of the approximate line to the sinusoid trajectory when the camera shutter is open, which indicates the speed of the camera's viewpoint, is controlled for motion blur reduction so as to coincide with the apparent speed of the target object on the image sensor. In the frame-by-frame intermittent tracking with a free-vibration-type actuator, the resonant frequency, which is a fixed value peculiar to the actuator, is not controllable, and the speed of the camera's viewpoint can be controlled with the amplitude of the vibration, as well as the exposure start and end times, which determine the time range for the linear approximation to the sinusoid trajectory. Figure 3 shows the control scheme of our actuator-driven tracking approach. Compared to the performance-limited mechanical actuator control in the camera-driven tracking approach, the actuator-driven tracking approach can derive the maximum mechanical performance of a free-vibration-type actuator, which enables motion-blur-free video shooting of faster-moving objects at a higher frame rate, whereas a free-vibration-type actuator is plagued by the following limitations:

(1) Unresponsive amplitude control in resonant vibration

A free-vibration-type actuator tends to move on a periodic trajectory with a certain hysteresis caused by friction, and it deviates substantially from the ideal sinusoid trajectory in the case of resonant vibration with a small amplitude. In the actuator-driven tracking approach, such properties may degrade the tracking performance when video shooting a target object whose speed is either very low or varies with time.
(2) Limited time aperture ratio

In the camera-driven tracking approach, the time aperture ratio, which is the ratio of the frame interval and the exposure time in video shooting, can be programmably determined by designing the target trajectory of the camera's viewpoint freely, whereas the high-frequency response actuator cannot move on the target trajectory with a large amplitude, due to its limited movable range and speed. On the other hand, the time aperture ratio in the actuator-driven tracking approach is limited due to the sinusoid trajectory with resonant vibration. This is because the camera shutter timings are automatically determined so as to guarantee the linear motion of the camera's viewpoint when the camera shutter is open, whereas the percentage of the linear range on the sinusoid trajectory decreases as the exposure time increases.
Camera Shutter Timings and Vibration Amplitude
In motion-blur-free video shooting with actuator-driven frame-by-frame intermittent tracking, the nonlinear sinusoid trajectory with resonant vibration of a free-vibration-type actuator, which is segmented in the time range when the camera shutter is open, deviates from its approximate straight line more extensively as the camera exposure time increases. For motion-blur-free video shooting without lowering the incident light, it is important to determine a larger camera exposure time with consideration of the permissible deviation error in straight-line approximation, which corresponds to the degree of motion blur. In this subsection, we discuss how to determine parameters for camera shutter timings in actuator-driven frame-by-frame intermittent tracking on the basis of the numerical relationship between the segmented sinusoid trajectory and its approximate straight line.
As illustrated in Figure 4, the input image is captured at frame k with an exposure time τ by opening and closing the camera shutter at times t_k^O = t_k − τ/2 and t_k^C = t_k + τ/2, respectively. In this study, we assume that the center time of the camera exposure satisfies 2πt_k/T = 2nπ (n: integer), i.e., t_k = nT, so that it synchronizes with the sinusoid trajectory x(t) = A sin(2πt/T) and the slope of the tangent to the sinusoid trajectory is maximum at time t_k. To track a target object moving at a speed of v in images when the camera shutter is open, we assume that the amplitude A of the sinusoid trajectory is controlled so that the straight line y(t) = vt approximates the segmented sinusoid trajectory in the range from time t_k^O to time t_k^C. Here, we set the center time to t_k = 0 for simplification, so that the open and close times for camera exposure are t_k^O = −τ/2 and t_k^C = τ/2, respectively; the y-intercept of the approximate line is zero because the segmented sinusoid trajectory in this range is point-symmetric about the origin. The deviation between the segmented sinusoid trajectory and the approximate line is evaluated with the squared-error loss function

E(A) = ∫_{−τ/2}^{τ/2} (A sin(2πt/T) − vt)² dt.    (3)

Solving the equation obtained by setting the partial derivative of E(A) with respect to A to zero, the amplitude A_min that minimizes E(A) can be derived as follows:

A_min = (2vT/π) · (sin πr − πr cos πr)/(2πr − sin 2πr),    (4)

where r = τ/T is the temporal aperture ratio that indicates the ratio of the exposure time τ to the cycle time T of the sinusoid trajectory. Figure 5 shows the relationship between the temporal aperture ratio r and the amplitude ratio of A_min to A_0; A_0 is the amplitude in the limit where the exposure time τ approaches zero, that is, the amplitude for which the slope of the tangent line of the sinusoid trajectory at time t_k equals v:

A_0 = vT/(2π).

Thus, the minimum value E_min of the squared-error loss function is obtained as follows:

E_min = E(A_min) = (v²τ³/12) · [1 − 12(sin πr − πr cos πr)²/((πr)³(2πr − sin 2πr))].

Without actuator-driven frame-by-frame intermittent tracking, the squared-error loss E_NT in the range of time t_k^O to time t_k^C when observing a target object moving at speed v can be described as the value of the loss function when the amplitude of the sinusoid trajectory is A = 0, corresponding to no camera motion for tracking, as follows:

E_NT = E(0) = v²τ³/12.

Considering the square roots of the squared-error losses E_min and E_NT, the relative error ratio ε is defined as follows:

ε = √(E_min/E_NT) = √(1 − 12(sin πr − πr cos πr)²/((πr)³(2πr − sin 2πr))) = f(r),

where ε indicates the degree of motion blur reduction in video shooting with frame-by-frame intermittent tracking, compared with the deviation error in video shooting without tracking; ε = f(r) is a monotonically increasing function of the temporal aperture ratio r, and motion blur is largely canceled when ε approaches zero. Thus, the temporal aperture ratio r can be expressed as a monotonically increasing function r = f⁻¹(ε) of the relative error ratio ε, which is independent of the cycle time T of the sinusoid trajectory and the target speed v. Figure 6 shows the relationship between the temporal aperture ratio r and the relative error ratio ε. Using the relationship between r and ε in Figure 6 as a look-up table, the camera shutter timings can be automatically determined in actuator-driven frame-by-frame intermittent tracking when the permissible degree of motion blur is initially given. For example, when the relative error ratio ε is permissible up to 1%, 5% and 10%, the upper-limit values of the allowable temporal aperture ratio are r(0.01) = 0.151, r(0.05) = 0.333, and r(0.1) = 0.460, respectively.
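To make the look-up-table construction concrete, the following Python sketch numerically evaluates ε = f(r), inverts it by bisection to obtain the largest admissible temporal aperture ratio for a given blur tolerance, and evaluates the amplitude of Equation (4). It is a minimal illustration written for this text; the function names and the command-line example are our own assumptions rather than the authors' implementation.

```python
import numpy as np

def relative_error_ratio(r):
    """epsilon = f(r): residual blur ratio for temporal aperture ratio r (0 < r < 0.5)."""
    x = np.pi * r
    num = 12.0 * (np.sin(x) - x * np.cos(x)) ** 2
    den = x ** 3 * (2.0 * x - np.sin(2.0 * x))
    return np.sqrt(max(1.0 - num / den, 0.0))

def max_aperture_ratio(eps_max, r_lo=1e-4, r_hi=0.499, tol=1e-6):
    """Invert f(r) by bisection: largest r whose residual blur ratio stays below eps_max."""
    while r_hi - r_lo > tol:
        r_mid = 0.5 * (r_lo + r_hi)
        if relative_error_ratio(r_mid) <= eps_max:
            r_lo = r_mid
        else:
            r_hi = r_mid
    return r_lo

def amplitude_min(v, T, r):
    """Amplitude of the sinusoid trajectory minimizing the squared-error loss (Equation (4))."""
    x = np.pi * r
    return (2.0 * v * T / np.pi) * (np.sin(x) - x * np.cos(x)) / (2.0 * x - np.sin(2.0 * x))

if __name__ == "__main__":
    T = 1.0 / 750.0  # cycle time of a 750-Hz resonant vibration [s]
    for eps in (0.01, 0.05, 0.10):
        r = max_aperture_ratio(eps)
        print(f"eps = {eps:.2f} -> r = {r:.3f}, exposure = {r * T * 1e3:.3f} ms")
    # Viewpoint displacement amplitude needed to track a 5 m/s object with r = 0.25
    print(f"A_min = {amplitude_min(5.0, T, 0.25) * 1e3:.2f} mm on the target plane")
```

Running this reproduces the tabulated values r(0.01) = 0.151, r(0.05) = 0.333 and r(0.1) = 0.460 quoted above.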
Especially when the exposure time is constant, the open and close times for camera exposure can be determined independently of the time-varying amplitude A of the sinusoid trajectory, which is controlled so as to cancel the apparent speed of the target object while the camera shutter is open; these timing signals are generated in synchronization with the external synchronization signal from the free-vibration-type actuator.
Motion-Blur-Free HFR Video Shooting System
In order to verify the effectiveness of actuator-driven frame-by-frame intermittent tracking, we developed a test-bed system for motion-blur-free HFR video shooting. Figure 7 shows the overview of the test-bed system. The test-bed system consists of (1) a motion-blur-free HFR camera system with a resonant mirror and (2) a high-speed belt-conveyor system that can convey target objects to be observed at various speeds. The function generator AFG1022 was used to adjust the delay time in the external TTL signal so as to synchronize the center time of the open exposure with the vibration center of the sinusoid trajectory in this study. The PC was mainly used to control the vibration amplitude of the resonant mirror system for motion-blur-free video shooting.
The high-speed belt-conveyor system was installed 625 mm below a planar mirror of 100 mm × 100 mm size; the planar mirror was located 135 mm on the right side of the resonant mirror to change the direction of the camera view to the vertical direction for target objects horizontally-moving on the belt-conveyor system; they were observed under the lighting with an LED illuminator (VLP-10500XP, LPL, Saitama, Japan), which was installed 400 mm diagonally upward from the belt-conveyor system. On the belt-conveyor system, target objects attached to a 500-mm-width rubber belt can move forward with rotations of 80 mm-diameter pulleys, one of which was the drive pulley powered with a three-phase induction motor (SF-PRV-3.7KW-4P-200V, Mitsubishi Electric, Tokyo, Japan). The length between pulleys was set to 1.5 m. The induction motor was controlled by an inverter (WJ200-037LF, Hitachi Industrial Equipment Systems, Tokyo, Japan), and the conveying speed of the belt-conveyor system can be set in the range of 0 to 7.55 m/s by providing a voltage command in the range of 0 to 10 V to the inverter. The rotation speed of the induction motor was measured by a high-speed vision system IDP Express [67]; the rotation speed was computed by extracting the position of an 8 mm-diameter marker attached on a rotational axis with real-time video processing of 512 × 512 images at 2000 fps on IDP Express. Thus, the conveying speed of the belt-conveyor system can be simultaneously estimated for motion-blur-free video shooting at 2000 Hz on the PC, on which IDP Express was mounted for controlling the vibration amplitude of the resonant mirror.
With this test-bed system, 1024 × 1024 input images were captured at 750 fps, with a frame interval of 1.333 ms and an exposure time of 0.33 ms, in synchronization with the external trigger signal from the resonant mirror, which was dependent on its resonant frequency. The intensity of incident light on the image sensor was almost the same as that in video shooting without a resonant mirror; we confirmed that only 2.2% of the intensity of incident light was lost owing to the 23 mm × 23 mm mirror in the presented setup. The temporal aperture ratio in frame-by-frame intermittent tracking was r = 0.25, and the relative error ratio was ε = 0.028; this corresponded to the relationship between r and ε in Figure 6. A target object on the belt plane was at an apparent distance of R = 760 mm from the center of the resonant mirror. Considering that the deflection of the viewing direction is twice the mirror angle, the sinusoid trajectory of the mirror angle θ(t) = A_θ sin(2πt/T) was projected on the belt plane as

x(t) = 2Rθ(t) = 2RA_θ sin(2πt/T),

where it is assumed that θ(t) is small. From Equation (4), the vibration amplitude A_θ of the resonant mirror that minimizes the squared-error loss function described in Equation (3) can be expressed as a function of the speed v of a target object by substituting T = 1.333 ms, r = 0.25 and R = 760 mm as follows:

A_θ = vT(sin πr − πr cos πr)/(πR(2πr − sin 2πr)) = c_min v,    (12)

where c_min = 1.485 × 10⁻⁴ (rad·s/m) = 8.506 × 10⁻³ (°·s/m). The vibration amplitude of the resonant mirror was controlled with sensor feedback so that the apparent scan speed with the resonant mirror on target objects on the belt was always matched with the conveying speed measured by the high-speed vision system IDP Express. An image region of 1024 × 1024 pixels corresponded to a 104 mm × 104 mm area on the belt of the belt-conveyor system, and one pixel corresponded to 0.10 mm. The maximum permissible speed for target objects that can guarantee the efficiency of frame-by-frame intermittent tracking was determined theoretically from the maximum movable angle of the resonant mirror (0.5 deg) and the exposure time (0.33 ms); the maximum angular speed of the viewing direction was 2.34 × 10³ °/s, considering that the variation of the view angle via the mirror corresponds to twice that of the mirror angle. Thus, a displacement of 95.6 pixels in the x direction on the image sensor was permissible during the open exposure of 0.33 ms, and the maximum permissible apparent speed on the image sensor was 2.90 × 10⁵ pixel/s. This value corresponded to a maximum permissible speed of 29.4 m/s for objects observed on the belt of the belt-conveyor system, whereas the maximum conveying speed of the belt-conveyor system was 7.55 m/s.
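As an illustration of how the amplitude command of Equation (12) can be computed from the measured conveying speed, the sketch below (our own, not the authors' control code) evaluates c_min from the system constants T, r and R and converts a belt speed into a mirror amplitude in degrees; the printed values reproduce the amplitudes quoted in Section 6.1.

```python
import math

# System constants from the test-bed setup (Section 4)
T = 1.333e-3   # cycle time of the resonant vibration [s]
R = 0.760      # apparent distance from mirror to belt plane [m]
r = 0.25       # temporal aperture ratio used in the experiments

def mirror_amplitude_deg(v_belt):
    """Mirror vibration amplitude [deg] that cancels the apparent motion of an
    object moving at v_belt [m/s] on the belt (Equation (12), small-angle case)."""
    x = math.pi * r
    c_min = T * (math.sin(x) - x * math.cos(x)) / (math.pi * R * (2 * x - math.sin(2 * x)))
    return math.degrees(c_min * v_belt)

if __name__ == "__main__":
    for v in (1.0, 2.5, 4.5, 6.5, 7.55):
        print(f"v = {v:4.2f} m/s -> A_theta = {mirror_amplitude_deg(v):.4f} deg")
```

For example, v = 2.5 m/s yields about 0.0213°, and v = 4.5 m/s about 0.0383°, matching the amplitudes listed in the constant-amplitude experiments below.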
Relationship between Drive Voltage and Vibration Amplitude
Firstly, we conducted a preliminary experiment to verify the relationship between the drive voltage to the resonant mirror and its angular displacement. To measure the angular displacement, a laser beam spot was redirected by the resonant mirror, and the locations of the beam spots projected on a screen at a distance of 1375 mm from the resonant mirror were extracted offline by capturing an HFR video of 384 × 56 pixels at 100,000 fps with an exposure time of 8.98 × 10⁻³ ms. Figure 8 shows the angular displacement for 4 ms when the drive voltage to the resonant mirror was set to 0.0, 1.0, 2.0, 3.0, 4.0 and 5.0 V. The angular displacement changed sinusoidally at a frequency of 750 Hz, and its amplitude increased in proportion to the drive voltage. Figure 9 shows the relationship between the drive voltage and the vibration amplitude of the angular displacement averaged over 1 s, corresponding to 750 cycle times of the 750-Hz vibration, when the drive voltage was varied in the range of 0.0 to 5.0 V in steps of 0.2 V. The vibration amplitude varied linearly with the amplitude of the drive voltage, although there was a slight offset around 0 V; the relationship between the drive voltage V (V) and the vibration amplitude A (°) can be linearly approximated as A = 0.0368V + 0.0026. Figure 10 shows the relationship between the drive voltage and the standard deviation of the vibration amplitude over the 1-s duration. In the figure, the relative ratio of the standard deviation to the averaged vibration amplitude is also plotted. When the drive voltage was 5.0 V, the averaged vibration amplitude and its standard deviation were 0.188° and 3.35 × 10⁻⁴°, respectively. The standard deviations had similar values, around 3 × 10⁻⁴°, at all drive voltages, and the relative ratio therefore decreased in inverse proportion to the drive voltage; the relative ratio was 0.18% when the drive voltage was 5 V.
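Assuming the fitted linear relation A = 0.0368V + 0.0026 reported above, a desired amplitude can be mapped back to a drive-voltage command as in the following sketch; the clipping range and function name are illustrative assumptions, not part of the authors' driver software.

```python
def drive_voltage_for_amplitude(a_deg, v_max=5.0):
    """Invert the fitted relation A[deg] = 0.0368*V + 0.0026 (Section 5.1) and
    clip the command to the usable 0-5 V range of the mirror driver."""
    v = (a_deg - 0.0026) / 0.0368
    return min(max(v, 0.0), v_max)

# Example: amplitude needed to track a 4.5 m/s object (about 0.0383 deg, cf. Section 6.1)
print(drive_voltage_for_amplitude(0.0383))  # ~0.97 V
```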
Step Responses of Vibration Amplitude
Next, we conducted an experiment to verify the response time of the vibration amplitude of the resonant mirror when a step drive voltage was commanded to the resonant mirror. In an environment similar to that of the previous subsection, the vibration amplitude of the resonant mirror was measured by capturing an HFR video of the laser beam spots projected on a screen; 384 × 265 images were captured for 5 s at 750 Hz with an exposure time of 0.05 ms in synchronization with the timing when the angular displacement of the resonant mirror was at its maximum. Figure 11 shows the step responses of the vibration amplitude when the drive voltage was switched at time t = 0 from (a) 1 V and (b) 3 V to different target voltages in the range of 0 to 5 V. Figure 12 shows the rise time (from 10 to 90%), the delay time (to 50%) and the settling time (within 5%) of the vibration amplitude obtained by analyzing the step responses in Figure 11. For all target voltages except 0 V in both (a) 1 V and (b) 3 V, the rise times and the delay times had similar values of 0.16 s and 0.12 s, respectively. However, the settling times were much larger than these values. There was a distinct hysteresis tendency: the settling time was around 0.50 s in all cases when the drive voltage increased, whereas it became larger when the drive voltage decreased by a large amount. Compared with the 750-Hz free vibration of the resonant mirror, the dynamic response of the vibration amplitude is so slow and hysteretic that the amplitude cannot be controlled quickly enough for motion blur reduction when the speed of the tracked object varies rapidly; however, vibration amplitude control functions well in motion-blur-free video shooting of a target object moving at a large but only slightly time-varying speed, as in many applications such as product inspection on a factory line and road inspection from a moving car.
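The step-response metrics reported in Figure 12 can be extracted from a sampled amplitude trace, for example, as in the following sketch; the 10-90% rise, 50% delay and ±5% settling definitions follow the text, while the function name and the handling of the final value are our own assumptions.

```python
import numpy as np

def step_metrics(t, a, a_final=None):
    """Rise time (10-90%), delay time (to 50%) and settling time (within +/-5%)
    of a sampled step response a(t); a_final defaults to the mean of the last 10% of samples."""
    a = np.asarray(a, dtype=float)
    t = np.asarray(t, dtype=float)
    a0 = a[0]
    if a_final is None:
        a_final = a[-max(len(a) // 10, 1):].mean()
    span = a_final - a0

    def first_cross(level):
        # first sample where the normalized response reaches `level`
        idx = int(np.argmax((a - a0) / span >= level)) if span != 0 else 0
        return t[idx]

    rise = first_cross(0.9) - first_cross(0.1)
    delay = first_cross(0.5)
    # settling time: last instant the response leaves the +/-5% band around a_final
    outside = np.abs(a - a_final) > 0.05 * abs(span)
    settling = t[np.nonzero(outside)[0][-1]] if outside.any() else t[0]
    return rise, delay, settling
```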
Video Shooting without Amplitude Control for Circle-Dots Moving at Constant Speeds
Next, we conducted video shooting experiments for a patterned object attached on the moving belt of the test-bed system to verify the relationship between the speed of a target object and its motion blur when the vibration amplitude of the resonant mirror was set to a constant value. Figure 13 shows the patterned object to be observed: a circle-dot pattern on which 4 mm-diameter circle-dots were black-printed at vertical and horizontal intervals of 11 mm and 7 mm, respectively. In the captured images, the image degradation with motion blur in the horizontal direction became larger as the object speed deviated from the desired speed for motion blur reduction, which was determined by the vibration amplitude of the resonant mirror. This tendency corresponded to the fact that the squared-error loss functions for vibration amplitudes of 0.0000°, 0.0085°, 0.0213°, 0.0383° and 0.0553° are minimized when observing a target object moving with the motor command of 0.0, 1.0, 2.5, 4.5 and 6.5 m/s, respectively, according to Equation (12). For a circle-dot pattern, the blur index λ_dot = λ_x − λ_x^0 was introduced; λ_x represents the length of the x-axis of the ellipse approximating a circle-dot in the image, and λ_x^0 is the value of λ_x when observing a circle-dot at a fixed location with no vibration of the resonant mirror. The index λ_dot increases as the motion blur in the horizontal direction becomes larger in the image, and it becomes zero when the circle-dot has no motion. λ_x was estimated offline by computing the zero-, first- and second-order moment features of the circle-dot region in the 424 × 424 image cropped from the input image; a single circle-dot was located at the center of the cropped image. The circle-dot region was extracted by binarization with a threshold of 2600. Figure 15 shows the relationship between the speed of a circle-dot and its blur index λ_dot when the target objects moved with the motor command to the conveyor system in the range of 0.0 to 7.5 m/s at intervals of 0.5 m/s. In the figure, the blur indexes λ_dot were averaged over 25 selected dots in two images, and they were plotted for vibration amplitudes of the resonant mirror of 0.0000°, 0.0063°, 0.0202°, 0.0391°, 0.0572° and 0.0776°. When the vibration amplitude was 0.0000°, 0.0063°, 0.0202°, 0.0391° and 0.0572°, the blur index λ_dot had a minimum value when the motor command was 0.0, 1.0, 2.5, 4.5 and 6.5 m/s, respectively; these motor commands were around the desired speeds for motion blur reduction, 0.00, 0.74, 2.38, 4.60 and 6.72 m/s, which were determined by the vibration amplitudes 0.0000°, 0.0063°, 0.0202°, 0.0391° and 0.0572°, respectively. The desired speed for motion blur reduction when the vibration amplitude of the resonant mirror was 0.0776°, corresponding to the voltage command of 2.0 V to its control driver, was 9.12 m/s, which was larger than the maximum conveying speed of 7.55 m/s of the belt-conveyor system. Thus, there were no local maximum/minimum values when the vibration amplitude was 0.0776° in Figure 15. These experimental results indicate that motion blur in video shooting can be reduced when the object speed corresponds to the desired speed for motion blur reduction determined by the vibration amplitude of the resonant mirror.
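The moment-based blur index λ_dot can be computed, for example, as in the following Python sketch. It assumes dark dots on a brighter background and uses the standard moment-based ellipse fit; the helper names and the thresholding direction are illustrative assumptions rather than the authors' exact processing.

```python
import numpy as np

def dot_x_axis_length(gray, threshold=2600):
    """Estimate the x-axis length of the ellipse approximating the (dark) circle-dot
    region, using zero-, first- and second-order moments of the binarized region.
    `gray` is a 2-D array of raw sensor intensities; pixels below `threshold`
    are assumed to belong to the dot, as in the binarization of Section 6.1."""
    mask = gray < threshold
    if not mask.any():
        return 0.0
    ys, xs = np.nonzero(mask)
    cx = xs.mean()                      # first-order moments give the centroid
    mu20 = ((xs - cx) ** 2).mean()      # normalized second-order central moment in x
    return 4.0 * np.sqrt(mu20)          # full x-axis length of the equivalent ellipse

def blur_index(gray_moving, gray_static, threshold=2600):
    """lambda_dot = lambda_x - lambda_x^0 (larger means more horizontal motion blur)."""
    return dot_x_axis_length(gray_moving, threshold) - dot_x_axis_length(gray_static, threshold)
```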
Video Shooting with Amplitude Control for Circle-Dots Moving at Constant Speeds
Next, we conducted video shooting experiments for a fast-moving patterned object when the vibration amplitude of the resonant mirror was controlled in proportion to the object speed on the belt, which was estimated at 2000 fps by IDP Express, so that motion blurs in the input images were reduced by frame-by-frame intermittent tracking. The circle-dot pattern identical to that used in Section 6.1 was observed in the experiments. Figure 16 shows the 215 × 215 images cropped from the 1024 × 1024 input images of the circle-dot pattern when the patterned objects moved with the motor command to the conveyor system of 0.0, 1.5, 3.0, 4.5, 6.0 and 7.5 m/s. The input images captured when the vibration amplitude of the resonant mirror was controlled with sensor feedback (IT, with tracking) were compared with those captured when there was no vibration of the resonant mirror (NT, without tracking). As observed in Figure 16, the NT images became increasingly blurred in the horizontal direction as the object speed increased, whereas the IT images were non-blurred at all the speeds. Figure 17 shows the relationship between the speed of a circle-dot and its blur index λ_dot in video shooting the IT images with amplitude control and the NT images without amplitude control when the target objects moved with the motor command to the conveyor system in the range of 0 to 7.5 m/s at intervals of 0.5 m/s. In the figure, the vibration amplitudes of the resonant mirror in video shooting the IT images with amplitude control are also plotted. The index λ_dot was computed offline in a similar manner as in the previous section. In Figure 17, the vibration amplitudes of the resonant mirror were controlled for motion blur reduction in proportion to the object speed in video shooting with amplitude control, although a slight non-zero offset remained in the vibration amplitude when the motor command to the conveyor system was 0 m/s. In Figure 17, the blur index λ_dot for the IT images was remarkably low compared with that for the NT images. The blur index λ_dot for the IT images when the motor command was 0.0, 1.5, 3.0, 4.5, 6.0 and 7.5 m/s was 1.50, 0.07, 0.00, 0.02, 0.09 and 0.33 pixel, respectively, whereas that for the NT images was 0.00, 1.57, 4.45, 7.25, 10.36 and 13.42 pixel, respectively. In the experiments, the object speed was 7.55 m/s or less, which was smaller than the maximum permissible motion-blur-free speed of 29.4 m/s in the horizontal direction, and our actuator-driven frame-by-frame intermittent tracking method remarkably reduced the motion blur of the circle-dot pattern moving at high speed. When the motor command to the conveyor system was 1.0 m/s or less, the blur index λ_dot in video shooting with actuator control was slightly larger than that in video shooting the object moving with a motor command of 1.5 m/s or more. This is because the resonant mirror could not control its vibration amplitude around 0° due to friction hysteresis, and a small vibration remained, consistent with the relationship between the drive voltage and the vibration amplitude described in Section 5.1.
Video Shooting with Amplitude Control for Patterned Objects at Variable Speeds
Next, we show the experimental results in video shooting with amplitude control for a checkered pattern when the object speed varied in the range of 0 to 7.55 m/s. In the experiments, the motor command for the belt-conveyor system was set to the following trajectory: the motor command that determines the object speed started to increase from 0 m/s at time t = 4.3 s and reached the maximum conveying speed of 7.55 m/s at time t = 8.0 s. After the maximum speed was maintained for t = 8.0 to 16.2 s, the command started to decrease and reached 0 m/s at time t = 26.0 s. Figure 18 shows a checkered pattern on which 2 mm × 2 mm squares of alternating black and white were printed. Figure 19 shows (a) the measured object speed and the vibration amplitude of the resonant mirror and (b) the blur index λ_edge when the IT images for t = 0.0 to 30.0 s were captured with frame-by-frame intermittent tracking. The blur index λ_edge = E_ave/I_ave was introduced; I_ave and E_ave are the averaged values of the image intensities I(x, y) and the edge intensities E(x, y), respectively, over the 824 × 824 image center-cropped from a 1024 × 1024 input image. The index λ_edge decreases as the motion blur becomes larger in the image, because the edge intensities in the moving direction are degraded by motion blur. The edge intensities were computed as follows:

E(x, y) = |I(x + 1, y) − I(x, y)|² + |I(x, y + 1) − I(x, y)|².    (13)
For comparison, Figure 19b also shows the blur index λ_edge when the NT images were captured with no vibration of the resonant mirror; the checkered pattern moved in a manner similar to that for the captured IT images. Figure 19b shows that the value of λ_edge for the IT images was almost constant, in the range of 11.8 to 13.0%, when the speed of the checkered pattern varied in time, whereas the value of λ_edge for the NT images varied considerably, in the range of 9.5 to 13.0%, depending on the speed of the checkered pattern. The blur indexes λ_edge for the NT images were larger than those for the IT images when the measured object speed was 0 m/s, due to the non-zero offset in the vibration amplitude of the resonant mirror illustrated in Figure 19a. There were fluctuations in the blur indexes λ_edge for both the IT and NT images when the checkered pattern was moving, because the average values of the edge intensities varied slightly depending on the apparent location of the checkered pattern in the images. The dynamic response of the vibration amplitude of the resonant mirror was not very quick compared with its 750-Hz free vibration, as described in Section 5.2; nevertheless, the vibration amplitude control functioned well for motion blur reduction with frame-by-frame intermittent tracking when video shooting a target object moving at a large, but only slightly time-varying, speed on the belt-conveyor system, because the dynamic response of the conveyor's speed was slower than that of the vibration amplitude control of the resonant mirror.
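A direct implementation of the edge-based blur index as printed in Equation (13) is straightforward; the sketch below (our own illustration) computes λ_edge over the center-cropped region of an input frame, with boundary pixels contributing zero edge intensity.

```python
import numpy as np

def lambda_edge(image, crop=824):
    """Edge-based blur index lambda_edge = E_ave / I_ave (Equation (13));
    it decreases as horizontal motion blur washes out the edges."""
    h, w = image.shape
    y0, x0 = (h - crop) // 2, (w - crop) // 2
    roi = image[y0:y0 + crop, x0:x0 + crop].astype(np.float64)
    # Squared forward differences in x and y (Equation (13)); last row/column left as zero
    ex = np.zeros_like(roi)
    ey = np.zeros_like(roi)
    ex[:, :-1] = (roi[:, 1:] - roi[:, :-1]) ** 2
    ey[:-1, :] = (roi[1:, :] - roi[:-1, :]) ** 2
    edge = ex + ey
    return edge.mean() / roi.mean()
```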
To verify our actuator-driven frame-by-frame intermittent tracking with amplitude control for complex patterned objects, we experimented with (a) the printed pattern of an electronic board of 54 mm × 84 mm in size with 0.25 mm-width wiring patterns and (b) the printed pattern of a book page with many 2-mm letters as illustrated in Figure 20; these patterns were attached on the belt of the belt-conveyor system and moved at variable speeds in a manner similar to the checkered pattern. Figures 21 and 22 show (a) the 323 × 323 IT images cropped from the 1024 × 1024 input images of the electronic board pattern and (b) the 323 × 323 IT images of the book page pattern, when the object speed was 0.0, 2.5, 5.0 and 7.5 m/s, compared with the NT images captured when there was no vibration of the resonant mirror. The IT images at all the speeds resembled the images of unmoving patterns, which corresponded to the NT images when the object speed was 0.0 m/s, whereas the motion blur for the NT images became larger in the horizontal direction as the object speed increased. When the electronic board pattern was moving at 0.0, 2.5, 5.0 and 7.5 m/s, the blur indexes λ edge of the IT images were 11.5%, 14.8%, 15.7% and 13.4%, respectively, whereas those of the NT images were 19.8%, 9.45%, 8.13% and 7.68%, respectively. When the book page pattern was moving at 0.0, 2.5, 5.0 and 7.5 m/s, the blur indexes λ edge of the IT images were 5.22%, 7.19%, 7.59% and 6.76%, respectively, whereas those of the NT images were 8.65, 4.09, 3.62 and 3.51%, respectively. Thus, fast-moving complex patterned objects such as the wiring patterns of 0.25-mm width printed on the electronic board pattern and the 2-mm alphabet letters printed on the book page pattern were observable without noticeable blurring by applying the actuator-driven frame-by-frame intermittent tracking method with amplitude control.
Conclusions
In this study, we developed a motion-blur-free video shooting system based on the concept of actuator-driven frame-by-frame intermittent tracking. In this system, the camera frame timings are controlled for video shooting with a larger camera exposure time in synchronization with the large-amplitude, high-frequency free vibration of a resonant mirror, which enables ultrafast gaze control to track fast-moving objects while the camera shutter is open. Our system can capture 1024 × 1024 images of fast-moving objects at 750 fps with an exposure time of 0.33 ms without motion blur, and its performance was verified by conducting several video shooting experiments for fast-moving patterned objects on a high-speed belt-conveyor system. The following issues remain to be solved in the future. The proposed one-DOF mirror system with a low-frequency response of amplitude control has the limitation that it cannot shoot motion-blur-free videos of an object moving at a rapidly time-varying speed in variable directions, and the efficiency in gathering the incident light entering the camera is also practically limited by the size of the mirror when a very large aperture lens is used for zooming. On the basis of these experimental results and considerations, we plan to extend our motion-blur-free video shooting system to a two-DOF mirror system that can independently control the directions of the pan and tilt mirrors, and to improve it by feedback-controlling the camera frame timings as well as the vibration amplitudes of the resonant mirrors, with high-speed real-time video processing to estimate the object speed and compute the edge-based feature for the blur index. We also plan to apply our system to various applications such as precise product inspection on a high-speed factory automation line and infrastructure inspection from a fast-moving vehicle, where video shooting with high magnification is strongly required for unidirectionally fast-moving scenes and the apparent speed of the target scene can be given as the speed of the automation line or the vehicle.
The Clinical Management of Electrical Stimulation Therapies in the Rehabilitation of Individuals with Spinal Cord Injuries
Background: People with spinal cord injuries (SCIs) often have trouble remaining active because of paralysis. In the past, exercise recommendations focused on the non-paralyzed muscles in the arms, which provides limited benefits. However, recent studies show that electrical stimulation can help engage the paralyzed extremities, expanding the available muscle mass for exercise. Methods: The authors provide an evidence-based approach using expertise from diverse fields, supplemented by evidence from key studies toward the management of electrical stimulation therapies in individuals with SCIs. Literature searches were performed separately using the PubMed, Medline, and Google Scholar search engines. The keywords used for the searches included functional electrical stimulation cycling, hybrid cycling, neuromuscular electrical stimulation exercise, spinal cord injury, cardiovascular health, metabolic health, muscle strength, muscle mass, bone mass, upper limb treatment, diagnostic and prognostic use of functional electrical stimulation, tetraplegic hands, and hand deformities after SCI. The authors recently presented this information in a workshop at a major rehabilitation conference. Additional information beyond what was presented at the workshop was added for the writing of this paper. Results: Functional electrical stimulation (FES) cycling can improve aerobic fitness and reduce the risk of cardiovascular and metabolic diseases. The evidence indicates that while both FES leg cycling and neuromuscular electrical stimulation (NMES) resistance training can increase muscle strength and mass, NMES resistance training has been shown to be more effective for producing muscle hypertrophy in individual muscle groups. The response to the electrical stimulation of muscles can also help in the diagnosis and prognosis of hand dysfunction after tetraplegia. Conclusions: Electrical stimulation activities are safe and effective methods for exercise and testing for motor neuron lesions in individuals with SCIs and other paralytic or paretic conditions. They should be considered part of a comprehensive rehabilitation program in diagnosing, prognosing, and treating individuals with SCIs to improve function, physical activity, and overall health.
Introduction
Individuals with spinal cord injuries (SCIs) and other paralytic or paretic conditions often face challenges in maintaining their health and mobility due to reduced physical activity [1][2][3][4][5]. A host of comorbidities develop from a combination of the neuropathology of the injury and the decreased physical activity levels associated with the injury [6]. Common comorbidities include cardiometabolic conditions such as neurogenic obesity [7][8][9], metabolic syndrome [10,11], cardiovascular complications [12][13][14] including orthostatic hypotension [15,16] and autonomic dysreflexia [17,18]. Early recommendations for exercise after SCI suggested voluntary exercise with the non-paralyzed muscles of the arms, which limited the activity workload due to the reduced amount of available active skeletal muscle [19][20][21]. However, recent scientific research has demonstrated the benefits of electrical stimulation-evoked exercise, leading to the recommendation of neuromuscular electrical stimulation (NMES) resistance training and functional electrical stimulation (FES) cycling for individuals with SCIs [22,23].
NMES involves using electrical impulses to stimulate the paralyzed muscles, inducing muscle contractions, and increasing the range of physical activities that can be performed. This includes resistance training, which can enhance muscle strength, endurance, and power [24][25][26][27][28][29]. FES exercises, such as cycling, also use electrical impulses to stimulate the affected muscles, enabling the individual to engage in physical activities that would otherwise be impossible [30][31][32][33].
The review sought to summarize important advancements in NMES and FES interventions for individuals with SCIs. Through an analysis of studies, this review showcases evidence supporting the use of these interventions for enhancing lean mass volume; improving cardiovascular and metabolic outcomes; potentially reducing bone loss; and diagnosing, prognosing, and treating hand dysfunction in this population.
Methods
The authors used the evidence-based process of combining their expertise from diverse fields, supplemented by separate scientific literature searches for key evidence related to the management of electrical stimulation therapies in the rehabilitation of individuals with SCIs. The search engines used for the literature searches included PubMed, Medline, and Google Scholar. The keywords used for the separate searches included functional electrical stimulation cycling, neuromuscular electrical stimulation resistance training, spinal cord injury, cardiovascular health, metabolic health, muscle strength, muscle mass, bone mass, upper limb treatment, and diagnostic and prognostic use of functional electrical stimulation for the hands of those with tetraplegia. The inclusion criteria included articles involving individuals with SCIs; the use of electrical stimulation for treatment, diagnosis, or prognosis; and outcomes related to cardiovascular health, metabolic health, muscle strength, muscle mass, bone mass, and upper limb function. The exclusion criteria included articles published 20 or more years ago and those that did not match the inclusion criteria. The authors recently presented this information in a workshop at a major rehabilitation conference. Additional information beyond what was presented at the workshop was added for the writing of this paper. Individuals who suffer a traumatic SCI undergo an initial rapid decline in muscle mass, muscle strength, and bone mass. For this reason, we focused on research that attempted to regain lost muscle and bone a year or more post-injury, after muscle atrophy and bone demineralization had slowed. Thus, for the topics of muscle, bone, and cardiometabolic health, we focused on chronically injured individuals (>1 year post injury). For the diagnostic and prognostic evaluations for individuals with tetraplegia, the paper focused on more acute SCIs.
Cardiovascular and Metabolic Health (Table 1)
A systematic review of research by van der Scheer and colleagues [34] found that 30 out of 36 peer-reviewed studies provided moderate to high evidence supporting the effectiveness of FES cycling in improving muscle health if performed for 30 min, three times a week for 16 or more weeks. These studies applied electrical stimulation settings to maximize power output at 30-50 revolutions per minute cycling cadence. However, there was weaker evidence of whether FES leg cycling activities could provide sufficient 'dose potency' to increase power output and aerobic fitness, and the authors gave those health outcomes a 'low certainty' GRADE rating (Figure 1). One randomized controlled trial found that voluntary arm crank exercise (ACE) significantly outperformed FES leg cycling for improvements in peak oxygen utilization (VO2peak) [36]. Specifically, FES leg cycling only resulted in a 2.5% increase in VO2peak, compared to an over 20% increase achieved through ACE. Similarly, a separate study found that FES leg cycling was less effective than ACE, hybrid cycling (FES leg cycling plus ACE), and outdoor arm and leg cycling in reaching training levels to improve VO2peak [37]. However, upon the re-analysis and speculation of the exercise intensity required to achieve a cardiovascular training effect for low-aerobic-fitness-conditioned individuals (such as individuals with tetraplegia, elderly individuals, or morbidly obese individuals), it was hypothesized that it is possible that FES leg cycling could lead to improvements in cardiovascular fitness in these low-fitness clinical populations [38]. Nonetheless, the authors concluded that hybrid FES cycling usually led to greater cardiovascular fitness improvement due to the higher cardiovascular demand during submaximal exercise.
The aerobic fitness benefits of FES leg cycling were highlighted by Johnston and associates [39] in 30 5-to-13-year-old children with SCIs after performing 40 min of FES leg cycling, passive cycling, or NMES therapy three times per week for six months. They discovered a significant increase in VO2peak (16%) with FES leg cycling, while no improvements were observed in VO2peak in the passive cycling or NMES therapy groups. However, the NMES therapy group was the only group to show decreased blood cholesterol levels (17%).
Aerobic fitness improvements are typically dependent on workload intensity, so it is reasonable to conclude that hybrid cycling, which combines FES leg cycling with ACE, may provide greater aerobic and cardiovascular health benefits than either FES leg cycling or ACE alone due to the larger muscle mass involved in such exercise.Brurok et al. [40] investigated the effects of hybrid FES cycling thrice weekly for eight weeks.A high- One randomized controlled trial found that voluntary arm crank exercise (ACE) significantly outperformed FES leg cycling for improvements in peak oxygen utilization (VO 2 peak) [36].Specifically, FES leg cycling only resulted in a 2.5% increase in VO 2 peak, compared to an over 20% increase achieved through ACE.Similarly, a separate study found that FES leg cycling was less effective than ACE, hybrid cycling (FES legs cycling plus ACE), and outdoor arm and leg cycling in reaching training levels to improve VO 2 peak [37].However, upon the re-analysis and speculation of the exercise intensity required to achieve a cardiovascular training effect for low-aerobic-fitness-conditioned individuals (such as individuals with tetraplegia, elderly individuals, or morbidly obese individuals), it was hypothesized that it is possible that FES leg cycling could lead to improvements in cardiovascular fitness in these low-fitness clinical populations [38].Nonetheless, the authors concluded that hybrid FES cycling usually led to greater cardiovascular fitness improvement due to the higher cardiovascular demand during submaximal exercise.
The aerobic fitness benefits of FES leg cycling were highlighted by Johnston and associates [39] in 30 5-to-13-year-old children with SCIs after performing 40 min of FES leg cycling, passive cycling, or NMES therapy three times per week for six months.They discovered a significant increase in VO 2 peak (16%) with FES leg cycling, while no improvements were observed in VO 2 peak in the passive cycling or NMES therapy groups.However, the NMES therapy group was the only group to show decreased blood cholesterol levels (17%).
Aerobic fitness improvements are typically dependent on workload intensity, so it is reasonable to conclude that hybrid cycling, which combines FES leg cycling with ACE, may provide greater aerobic and cardiovascular health benefits than either FES leg cycling or ACE alone due to the larger muscle mass involved in such exercise. Brurok et al. [40] investigated the effects of hybrid FES cycling thrice weekly for eight weeks. A high-intensity interval training (HIIT) protocol utilized four exercise bouts at 85-90% of maximal workload for ACE and 80% of 140 mA electrical stimulation amplitude for the legs during the four-minute high-intensity exercise bouts. Three minutes of low-intensity exercise (70% of maximal workload for ACE and assisted leg cycling without electrical stimulation) was interspersed with the high-intensity bouts. After eight weeks of hybrid HIIT-FES cycling, the participants realized a 33% increase in stroke volume, a 27% increase in cardiac output, and a 28% increase in VO2peak over the exercise-free control period. Similarly, in a separate study, six weeks of hybrid HIIT-FES cycling with virtual-reality feedback produced a 33% increase in power output and a 20% increase in VO2peak [41]. However, because blood lipid and glucose levels were unchanged, the authors contemplated whether more than six weeks of hybrid HIIT-FES cycling might be required to show benefits in cardiovascular health blood markers. In this study, eight adults with SCI exercised for 32 min three times per week or 48 min twice weekly, totaling 96 min of hybrid HIIT-FES cycling per week.
A study that combined NMES resistance training with FES leg cycling resulted in higher VO2peak levels and reduced visceral adipose tissue. Twelve weeks of NMES resistance training plus twelve weeks of FES leg cycling was compared to twelve weeks of passive leg movement plus twelve weeks of FES leg cycling. The results showed that NMES resistance training plus twelve weeks of FES leg cycling was more effective than passive leg movement therapy followed by FES leg cycling in improving VO2peak levels, with respective increases of 29% and 16% [42].
In a separate study, Gorgey et al. [43] demonstrated improvements in cardiovascular blood markers with positive lipid changes after 12 weeks of twice-weekly NMES resistance training. Free fatty acid levels decreased by 24%, triglyceride levels decreased by 38%, and the cholesterol/high-density lipoprotein ratio also decreased.
Regarding potential metabolic benefits, Sanchez and associates [44] performed a meta-analysis on nine studies investigating evidence that NMES effectively improves glycemic control predominantly in a middle-aged and elderly population with type-2 diabetes, obesity, and SCI. The meta-analysis showed that NMES resistance training in the legs significantly lowered fasting blood glucose. Likewise, Griffin and colleagues [45] deployed 30 min of FES leg cycling during two to three weekly sessions for ten weeks on 18 individuals with chronic SCI. They found an improvement in glycemic response during oral glucose tolerance testing and reduced levels of inflammatory markers, c-reactive protein (CRP), interleukin-6 (IL-6), and tumor necrosis factor-α (TNF-α) [45].
Summary
FES-LEC and ACE activities have been shown to provide cardiometabolic benefits; however, hybrid FES cycling activities, which combine both FES-LEC and ACE, have been found to be more beneficial for cardiometabolic health due to the engagement of more muscle activity and increased levels of exercise intensity. Eight weeks of thrice-weekly hybrid HIIT-FES cycling sessions showed increased stroke volume, cardiac output, and VO2peak levels. Combining NMES-RT and FES-LEC twice weekly has also been demonstrated to improve VO2peak levels, lower fasting blood glucose and improve cardiovascular blood markers. FES-LEC and NMES-RT have also been found to reduce inflammatory markers and improve glycemic control in middle-aged and elderly populations with type-2 diabetes, obesity, and SCI. More large-scale randomized control trials are needed to help confirm the findings of the current available evidence and to optimize the dose-response relative to the level of injury and the goals of individuals.
Muscle Strength and Mass (Table 2)
Roxley and colleagues [46] demonstrated the muscle-strengthening benefits of progressive resistance exercise combined with FES leg cycling. A 12-week randomized controlled trial on 28 individuals with incomplete SCIs combined 12 progressive resistance training sessions (knee extension and flexion, ankle dorsiflexion, and plantarflexion) and 24 FES leg cycling sessions, resulting in significantly greater quadricep and hamstring peak torque than in a control group performing FES leg cycling without progressive resistance training. Moreover, the group that combined FES leg cycling with progressive resistance exercise demonstrated a more significant increase in muscle mass than the FES leg cycling-only group, 7% versus 3%, respectively.
Gorgey et al. [42] also combined exercise protocols to optimize muscle hypertrophy. Twelve weeks of NMES resistance training twice weekly increased the cross-sectional area of the proximal, middle, and distal knee extensor muscle regions by 30-33%, 29-32%, and 26-28%, respectively. Furthermore, increases in knee extensor muscle hypertrophy were maintained by an additional twelve weeks of FES leg cycling. Dolbow and associates [47] used HIIT-FES leg cycling to elicit positive body composition changes, including an increased leg lean mass of 7% and a decreased total body fat percentage of 2.5%. Five individuals with chronic SCIs performed HIIT-FES leg cycling thrice weekly for eight weeks with nutritional counseling one time per week and showed significantly greater improvements than the five-person control group that received nutritional counseling only.
While Farkas and colleagues [36] found only minimal, non-significant increases in VO2peak after FES leg cycling five times per week for 16 weeks, there were greater body composition enhancements than in ACE participants, with a 4% increase in total body lean mass, a 7% increase in leg lean mass, and a 4% decrease in total body fat percentage.
Cycling cadence has also been shown to affect gains in muscle mass. Seventeen individuals with SCIs were divided into a low-cadence, high-torque FES leg cycling group (20 revolutions per minute at 2.8 Nm) and a high-cadence, low-torque group (50 revolutions per minute at 0.8 Nm) for cycling sessions three times per week for six months. Both groups increased in muscle volume, with the low-cadence group having a significantly greater increase, 19% versus 10%, respectively [48].
Gorgey and associates [27] combined NMES resistance training with dietary recommendations to demonstrate increases in thigh muscle mass. After 12 weeks of thrice-weekly NMES resistance training and diet, individuals with chronic SCIs showed increases of 28% in the whole-thigh cross-sectional area, 35% in the knee extensor cross-sectional area, and 16% in the knee flexor muscle cross-sectional area. In a separate study, Gorgey et al. [49] combined NMES resistance training twice weekly for 16 weeks with low-dose testosterone patches (2-6 mg per day). They again found significant increases in skeletal muscle cross-sectional area in the legs. Results from magnetic resonance images revealed a more than 20 cm² increase in the whole-thigh muscle cross-sectional area and a 34% increase in the proximal region of the knee extensor muscle group, with a 32% increase for the middle knee extensor region and a 30% increase in the lower knee extensor region. After accounting for intramuscular fat (IMF), the percentages increased to 43%, 34%, and 33%, respectively. Although the NMES resistance training concentrated on the knee extensors, the hip adductors and hamstring muscle groups also showed gains in cross-sectional areas. These gains were also accompanied by an increased basal metabolic rate, decreased visceral adipose tissue, and reduced inflammatory biomarkers [49].
NMES resistance training combined with testosterone has also been associated with a 29% increase in fiber cross-sectional area and with increases in citrate synthase and succinate dehydrogenase. Surprisingly, the number of myonuclei increased following NMES resistance training and testosterone, although fiber-type changes could not be demonstrated in the histochemical analysis of muscle biopsies [50,51].
The above findings suggested that the use of NMES resistance training with and without testosterone may promote health benefits and attenuate comorbidities in persons with SCIs. Furthermore, using NMES resistance training with relatively inexpensive, commercially available ankle weights may be as effective as using expensive FES leg cycling bikes for home use.
The evidence indicates that while both FES leg cycling and NMES resistance training can increase muscle mass, NMES resistance training outperforms FES leg cycling for producing muscle hypertrophy in individual muscle groups.
A recent systematic review indicated that there is conclusive evidence of the effects of electrical stimulation exercise on muscle size and lean mass. However, there is limited evidence to support the effects on percentage fat mass, regional fat mass, or ectopic adiposity following electrical stimulation exercise in persons with SCIs [52].
Summary
NMES-RT and FES-LEC have both been shown to be safe and effective ways to increase muscle mass and reduce body fat, with NMES-RT demonstrating a greater ability to increase the skeletal muscle cross-sectional area in the targeted muscles. Adding testosterone patches may also enhance the benefits. Twice-weekly sessions of NMES-RT for eight to twelve weeks have been found to be a successful regime, while thrice-weekly FES-LEC has also been successful. Adding progressive resistance exercise to FES-LEC has been shown to elevate benefits. HIIT-FES leg cycling, combined with nutritional counseling, has demonstrated potential for reducing body fat percentage. More research is required to determine optimal protocols regarding the type of electrical stimulation exercise to optimize the goals of those with SCIs and to determine at what stage the various protocols should be initiated in SCI recovery.
Bone Mass (Table 3)
While evidence supports the concept that skeletal muscle hypertrophy can result from several weeks of FES exercise, slower bone metabolism typically requires at least six months to a year to produce improvements in bone health. Furthermore, positive bone health sequelae have not been consistent based on evidence. FES leg cycling and NMES resistance training provide only modest recovery or slowing of the rate of bone loss after an SCI [53].
Holman and associates [54] studied the effects of sixteen weeks of NMES resistance training on the legs combined with testosterone administration. Twenty men with SCIs were randomly placed in the NMES resistance training and testosterone group or the testosterone-only group. The effect sizes of changes in trabecular bone were estimated to be moderate in the proximal tibia and small in the distal femur. The authors speculated that these changes could increase significantly with a more extended duration of NMES resistance training and testosterone.
Frotzler et al. [55] had eleven individuals with SCIs perform FES leg cycling 3-4 times per week for a year, resulting in a 14% greater trabecular bone mineral density and a 7% increase in total bone mineral density in the distal femur. Similarly, Johnston and colleagues [19] demonstrated that using low-cadence FES leg cycling (20 revolutions per minute) three times per week for six months produced a 7% increase in trabecular bone. The largest positive impact on bone resulted from electrical stimulation at 1.5 times the body weight five times per week for two years, resulting in a 31% increase in bone mineral density in the distal tibia of individuals with SCIs [56].
Another study used the stimulation amplitude and the number of leg extension repetitions to highlight muscle and bone qualities in persons with SCIs. The authors noted that an arbitrary current of less than 100 mA and a leg extension repetition number greater than 70 out of 80 repetitions may suggest that persons with SCIs had greater muscle and bone qualities. The authors were able to derive several regression equations to predict muscle size and knee bone mineral densities in persons with SCIs [57].
Available evidence suggests that the best results have been attained with FES or NMES leg exercises at least three times per week for several months to two years, with high-resistance exercises also necessary.
Summary
Changes in bone mass are much slower than changes in muscle mass due to the relatively slow metabolic rate in skeletal bone. FES and NMES activities have been shown to provide a limited recovery of bone mass or to decelerate the bone loss rate after an SCI. The current evidence shows that FES-LEC and NMES-RT programs require high-volume and high-intensity exercise to produce benefits in bone tissue. High-intensity exercise three to five times per week provides the best opportunity to slow bone loss or improve bone mineral density in individuals with SCIs. Training for at least six months to over a year may be required to achieve meaningful benefits. More research is needed to provide conclusive exercise guidelines for bone health after an SCI. Because of the limited benefits of electrical stimulation activities on bone health, future studies should focus on combining electrical stimulation exercises with bone maintenance medications or nutrition.
Diagnosis, Prognosis, and Treatment for Upper Limbs (Table 4)
A further aspect of the application of electrical stimulation demonstrates the variety of its use, taking the upper extremities in people with tetraplegia as an example. Here, the application consists of a systematic diagnosis, prognosis, and treatment sequence. As previously published, the integrity of the lower motor neuron (LMN) can be tested by selectively assessing the upper limb muscles [58,59]. For this purpose, the muscles that are decisive for grasping and releasing objects are tested using a standardized measurement procedure employing electrical stimulation via a nerve, i.e., with a short pulse width. As the electrical excitability of nerve fibers (from 50 µs = 0.05 ms) occurs at much shorter pulse widths than that of muscle fibers (from 10 ms), the targeted stimulation of the motor points in the corresponding muscle can be used to determine whether an LMN lesion is present. This requires a reliable 2-channel stimulator that guarantees the output of the displayed intensity (amplitude in mA) at a pulse width of 250-300 µs (0.25-0.30 ms) with a frequency of 35 Hz. A pen electrode is recommended as the active electrode for higher precision (Figure 2). The question of why this is ultimately important in treating the hands of people with tetraplegia is based on the fact that developing the tenodesis effect is still an essential aspect of upper-limb rehabilitation [60,61]. The tenodesis effect enables the affected person to grasp and release objects tentatively. Active dorsiflexion of the wrist leads to closure of the fist, which is achieved by passive insufficiency of the long finger flexors, which are positioned in approximation to provoke shortening. The hand is opened passively by relaxing the dorsiflexion, which consecutively leads to finger extension with volar flexion.
Clinical observations have shown that achieving this tenodesis effect is rarely successful in ensuring everyday functionality of the hand despite standardized positioning and appropriate splinting, including physio- and occupational therapy. Factors like edema, pre-existing contractures, and spasticity can influence the desired result. Another reason that should be considered is damage to the LMN on critical muscles that determine grasp and release. The key actuators are the extensor digitorum communis (EDC), the extensor pollicis longus (EPL), and the abductor pollicis longus (APL) for finger and thumb extension and the flexor digitorum profundus (FDP) and flexor pollicis longus (FPL) for flexion.
In a study involving 86 individuals with tetraplegia, it was shown that four different scenarios of hand forms develop, which have different innervation patterns regarding the LMN integrity of the critical muscles for hand opening and closing [62]. A subsequent investigation of the differently developing thumb positions, which also contribute significantly to the functionality of grasping and releasing, confirmed the findings previously obtained for the finger extensors and flexors [63].
In terms of hand form, the following four scenarios were identified:
1. The open flat hand, in which both the EDC and the FDP show LMN damage.
2. The hand that shows an incomplete tenodesis effect but with few functional limitations. In this case, the integrity of the LMN is preserved on both the EDC and the FDP.
3. The classic hand with the well-functioning tenodesis effect, in which the EDC typically has a damaged LMN and the FDP an intact LMN.
4. The undesired claw hand, which is functionally unsuitable for manipulating objects. This is characterized by an intact LMN on the extensor side (EDC) and a damaged LMN on the flexor side (FDP).
This finding has implications for the treatment of the tetraplegic hand in rehabilitation. The use of electrical stimulation can be targeted based on knowledge of the type of damage. In scenario 1, for example, where both the EDC and the FDP are denervated, long-pulse stimulation is indicated to prevent denervation atrophy, which results in the alteration of the muscle into connective and fatty tissue [64]. The likelihood of contractures developing is high.
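The four hand-form scenarios can be summarized as a simple lookup from EDC/FDP LMN integrity to the expected hand form, as sketched below. The scenario numbers follow the list above; the treatment note for scenario 1 paraphrases the long-pulse recommendation in the text, and everything else is an illustrative assumption.

```python
# Map LMN integrity of the finger extensors (EDC) and flexors (FDP) to the four
# hand-form scenarios described in the text (illustrative sketch only).
def hand_form_scenario(edc_intact: bool, fdp_intact: bool) -> dict:
    if not edc_intact and not fdp_intact:
        return {"scenario": 1, "hand_form": "open flat hand",
                "note": "both denervated; long-pulse stimulation indicated against denervation atrophy"}
    if edc_intact and fdp_intact:
        return {"scenario": 2, "hand_form": "incomplete tenodesis effect, few functional limitations",
                "note": "LMN preserved for both EDC and FDP"}
    if not edc_intact and fdp_intact:
        return {"scenario": 3, "hand_form": "classic hand with well-functioning tenodesis effect",
                "note": "EDC denervated, FDP intact"}
    return {"scenario": 4, "hand_form": "claw hand, unsuitable for manipulating objects",
            "note": "EDC intact, FDP denervated"}

print(hand_form_scenario(edc_intact=False, fdp_intact=True))
```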
In the case of the claw hand described in scenario 4, the consequence for treatment is that classic taping of the hand to support the development of the tenodesis effect should preferably be avoided (Figure 3). Applying the stimulus via tape to the dorsal side of the fingers activates the muscle spindles. Muscle spindles are sensitive longitudinal stretch receptors in skeletal muscle. Stretch-induced activation excites the Ia and II afferents in the spindle. The discharge of the muscle spindle's afferents depends on the muscle's resting length. It can be increased by applying pressure to the muscle belly or tendon or by moving the joint in a direction that increases the stretch of the muscle.
In other words, taping the hand is counterproductive to developing a tenodesis effect [65].
The effective and efficient electrical stimulation of the various neurologically damaged muscles of the upper limb is essential for successful treatment. Electrical stimulation can be used as a diagnostic tool to determine the damage. Applied and used promptly following an SCI, it allows a prediction about the development of hand function [32].
Summary
The electrical stimulation testing of upper-extremity muscles can provide diagnostic information regarding upper or lower motor neuron injury to muscles that are key to upper-extremity function. This information can also be used to determine the prognosis of possible future deformities of the hands and how best to approach rehabilitation to achieve the tenodesis effect for grasping and overall functional recovery, as well as reconstructive surgery, including muscle-tendon and nerve transfers. The research in this area is extensive and detailed, with guidelines that can help provide targeted electrical stimulation exercises to decrease the risk of contractures and improve the recovery of hand function. Further research is required to determine the optimal dose-response effects of electrical stimulation training for injuries of varying levels and degrees of completeness.
Conclusions
Overall, electrical stimulation activities are safe and effective methods for exercise (NMES and FES) and for testing for motor neuron lesions in individuals with SCIs and other paralytic or paretic conditions. They should be considered part of a comprehensive rehabilitation program in diagnosing, prognosing, and treating individuals with SCIs to improve function, physical activity, and overall health.
Figure 2. Nerve stimulator including pen electrode (active electrode) and self-adhesive electrode (reference electrode) for motor point mapping.
Figure 3. The impact on finger extensors by taping in the case of an intact LMN.
Table 1. Effects of electrical stimulation exercise on cardiovascular and metabolic health.
Table 2. Effects of electrical stimulation exercise on muscle strength and mass.
Table 3. Effects of electrical stimulation exercise on bone.
Table 4. Use of electrical stimulation for diagnosis, prognosis, and treatment for upper limbs.
The Application of Targeted Nanodrugs with Dual Responsiveness of pH and ROS in Preventing and Treating Vascular Restenosis
In order to study the application of pH- and ROS-responsive targeted nanodrugs in preventing and treating vascular restenosis, a method based on the pH-responsive and reactive oxygen species (ROS)-responsive carrier materials synthesized in the early stage, with rapamycin as a model drug, was proposed. This method evaluated the therapeutic advantages of pH/ROS dual-responsive nanoparticles and the effect of dual-responsive, actively targeted drug delivery nanoparticles on vascular restenosis in vivo by comparison with nonresponsive nanotherapy and with pH-only or ROS-only single-responsive nanotherapy. By optimizing the feed mass ratio of the pH-responsive material (ACD) and the ROS-responsive material (OCD), the best pH- and ROS-responsive nanoparticles were prepared. Nanoparticles have an ultrasmall size (10-1000 nm), can easily pass through the blood vessel wall without causing damage, and have targeting and sustained-release characteristics, so they are an ideal carrier for local administration. Nanoparticles used as gene vectors have also achieved good results.
Introduction
In recent years, targeted drug delivery based on nanoparticles has been considered a promising strategy for the specific delivery of different imaging and therapeutic agents, providing a new approach for the diagnosis and treatment of cancer and of immune, cardiovascular, and other diseases. Cardiovascular disease (CVD) is the main cause of morbidity and mortality in the world. It is estimated that cardiovascular diseases cause 17.9 million deaths every year, accounting for about 31% of total global deaths. Studies have shown that vascular inflammation is closely related to the pathogenesis of many cardiovascular diseases, such as atherosclerosis, myocardial infarction, restenosis, intracranial aneurysm and aortic aneurysm, stroke, and peripheral artery disease. By regulating different molecular and cellular processes involved in inflammatory reactions, a large number of therapeutic methods have been studied to prevent and treat cardiovascular diseases [1]. Although great achievements have been made in preclinical research, the ideal efficacy of most tested anti-inflammatory drugs has not been fully proved in clinical practice. To a great extent, this may be related to the inefficient delivery of therapeutic molecules to vascular inflammation sites, which is caused by their nonspecific distribution and rapid elimination from the circulation. Even when drugs are absorbed locally in inflamed blood vessels, their residence time is very short because of uncontrolled diffusion. In recent years, targeted therapy based on nanoparticles has been considered a promising strategy that can specifically deliver different therapeutic and imaging agents for detecting or treating vascular inflammation [2]. In particular, a wide range of nanoparticles have been designed as targeted carriers to treat atherosclerosis, myocardial infarction, heart failure, ischemia-reperfusion injury, severe limb ischemia, restenosis, abdominal aortic aneurysm, and ischemic stroke. In these examples, polymerized lipid nanoparticles, liposomes, recombinant high-density lipoproteins, cell-derived vesicles, inorganic nanoparticles, metal nanoparticles, and mixed nanoparticles are used to deliver therapeutic drugs [3]. In addition to passively targeting vascular lesions through a damaged endothelial cell layer or the EPR effect, nanoparticles can further improve vascular targeting efficiency through adjustment of their physical properties (such as geometric structure and compliance), modification with molecular groups, or functionalization with specific cell membranes. Studies have shown that, according to the abnormally changed biochemical signals at vascular inflammation sites, the components of nanoparticles can be designed and synthesized so as to release the drug load in a targeted manner. However, the clinical translation of these vascular-targeted nanotherapies is still challenging [4]. Siracuse J. et al. believe that, although direct regulation of antigen-specific affinity can enhance the aggregation of nanoparticles in blood vessels, the targeting efficiency of nanoparticles modified only with molecular groups is still very limited. For nanodrug therapy derived from the biomimetic strategy based on cell membranes, the relatively complex formulation process and unclear components may hinder subsequent large-scale production and clinical research [5]. Cf A.
thought that, in view of the slightly acidic environment of inflammation and the increased oxidative stress after vascular endothelial injury, reactive oxygen- and pH-responsive nanoparticles could be constructed by integrating and optimizing the ratio of reactive oxygen- and pH-responsive cyclodextrin materials, and that these could be used as a new, effective, and safe nanoplatform for drug delivery at vascular inflammatory sites [6].
Treatment of Vascular Restenosis
According to the characteristics of vascular restenosis, researchers have put forward some treatment schemes to inhibit vascular restenosis, but there is no completely successful treatment method in the clinic.
Because systemic therapy cannot successfully inhibit restenosis, researchers put forward the concept of local administration. The main advantage of local administration is that the dosage is reduced, thus alleviating the toxic and side effects of drugs, and the local action time can be prolonged through physical or chemical connection between drugs and carriers. In addition, for short-half-life drugs such as recombinant proteins, polypeptides, and other unstable biological macromolecules, local administration can reduce the damage caused by systemic administration. Local administration can also avoid individual differences [6][7][8][9]. Therefore, from a therapeutic perspective, the development of drug vectors that can effectively deliver bioactive agents and selectively target regions of oxidative stress and acidic microenvironment may play a positive role in the diagnosis and treatment of inflammation-related diseases.
Peripheral Administration of Blood Vessels.
The principle of peripheral administration of blood vessels is to provide a drug reservoir on the outer surface of blood vessels, from which the drug is released into tissue fluid and passively diffuses into the blood vessel wall, driven by the concentration gradient. Ethylene-vinyl acetate copolymer (EVA) is a nontoxic biodegradable polymer. It has been approved by the FDA as a carrier for peripheral drug delivery. It can carry lipophilic or hydrophilic drugs and has good mechanical properties.
Researchers systematically studied the antiproliferative and anti-restenosis effects of heparin carried by EVA in the rat carotid artery model. Experiments show that local extravascular administration of heparin is an effective method to prevent restenosis. EVA can also serve as a vector for antisense oligonucleotides [6].
Intravascular Administration.
Intracavitary administration can be performed simultaneously with percutaneous vascular intervention. Once administered, the drug is first distributed in the deepest part of the intima and the middle layer of the blood vessel, so it can treat intravascular processes such as thrombosis. Then, some drugs are redistributed through the vasa vasorum and reach the adventitia. Because redistribution occurs within a few hours, treatment can address processes in the adventitia, such as the aggregation of myofibroblasts and changes in connective tissue protein expression [10].
Intraluminal Stent.
In recent years, with the in-depth study of vascular deformation after interventional angioplasty, endovascular stents have been widely used.
The mechanical force of the stent can neutralize the early elastic recoil of blood vessels and the late contractile deformation of blood vessels, but the intimal hyperplasia caused by stents is more serious than that caused by interventional angioplasty alone. In order to prevent restenosis caused by stents, researchers have proposed two new drug delivery systems based on stents. One is to wrap a drug-containing polymer film around the stent.
The other is made of a biodegradable polymer containing drugs [11,12].
Commonly used vectors for gene therapy of vascular restenosis mainly include adenovirus vectors, retrovirus vectors, liposomes, cationic polymers, etc. The adenovirus vector is the most widely used gene vector at present. Adenovirus transfection efficiency is very high, and the vector does not integrate into the DNA of host cells, so there is no risk of carcinogenesis. There are some problems with the use of viral vectors in gene therapy, such as their immunogenicity, proto-oncogene characteristics, and some unknown long-term effects. Nonviral vectors do not have these problems. However, the transfection efficiency of such vectors as liposomes is relatively low.
The transfection efficiency of cationic polymers is relatively high, but their clinical applicability needs further research. Stimulus-responsive nanoparticles have attracted wide attention as intelligent drug delivery systems and have been used in the diagnosis and treatment of various diseases.
Application of Nanoparticles in the Treatment of Vascular Restenosis
Nanoparticles are new controlled-release drug carriers made of polymers with a diameter of 10-1000 nm. Drugs can be wrapped inside nanoparticles, adsorbed on their surface, or combined with nanoparticles through chemical bonding. Nanoparticles can carry different drugs [13], and their release time ranges from several minutes to several months. The mechanism of drug release from nanoparticles can be drug diffusion, polymer degradation, or detachment of adsorbed drug from the nanoparticles. As a drug delivery system, nanoparticles have certain advantages for local drug delivery in blood vessels. Because of their very small volume, they can easily pass through the blood vessel wall and enter it without causing damage. Depending on the physical and chemical properties of the drug, the drug loading of nanoparticles can reach up to 30%. By adjusting the parameters of the preparation process and the composition of the polymer, the release time of the coated drug can range from several days to several months. Nanoparticles can be made into a uniform and stable suspension dispersed in salt solution or buffer solution and can also exist in tissue culture medium.
Therefore, they are suitable for injection. In addition, due to their small size, nanoparticles hardly cause an immune response when they enter the body. Studies on organ distribution confirmed that, after tail-vein injection, the dual-responsive nanoparticles were mainly concentrated in the liver and spleen, with relatively less in the lungs and almost no fluorescence distribution in the heart and kidney.
Studies have shown that local administration of dexamethasone nanoparticles made of poly(lactic acid-polyglycolic acid) copolymer can effectively inhibit restenosis in rat carotid artery and pig coronary artery models [3], with dexamethasone persisting in the blood vessel wall for more than two weeks. U-86983 (an antiproliferative agent) nanoparticles have an obvious antiproliferative effect in tissue culture, and continuous application in the rat model for two weeks can inhibit the occurrence of restenosis. Heparin-loaded poly(lactic acid-polyglycolic acid) copolymer nanoparticles can reduce platelet deposition by nearly 40 times in the pig vascular model without changing the coagulation time, which indicates that heparin does not exert a systemic effect. Therefore, local administration of heparin nanoparticles can effectively inhibit thrombosis and restenosis of blood vessels [1]. By contrast, vascular injury leads to endothelial dysfunction, macrophage activation, and release of cytokines and growth factors, promotes the proliferation and migration of vascular smooth muscle cells, and ultimately leads to the formation of neointima and lumen stenosis.
Experimental Analyses
4.1. Materials and Methods. In this experiment, PLGA nanoparticles containing RPM were prepared by ultrasonic re-emulsification and solvent evaporation. RPM is added into a methylene chloride solution of PLGA of a certain concentration and volume, and ultrasonic emulsification is carried out with a probe ultrasonic instrument under ice-bath conditions to form a uniform suspension; then PVA aqueous solution is added, and stirring is continued with the ultrasonic emulsification instrument to prepare a uniform milk-like compound emulsion. It is placed in a beaker, and the organic solvent is evaporated with a magnetic stirrer at normal pressure in a fume hood. After the solvent is completely evaporated, the suspension is centrifuged in a high-speed centrifuge at 23,000 rpm for 20 minutes and the supernatant is discarded. The collected precipitate is washed with distilled water three times to remove free RPM and PVA and then freeze-dried for preservation. The level of H2O2 was significantly reduced after treatment with the dual-responsive RAP-loaded nanoparticles, indicating that they can effectively reduce the oxidative stress of damaged local tissues. The process flow is shown in Figure 1.
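The drug embedding (encapsulation) rate discussed later is usually derived from the free RPM recovered in the supernatant and washings relative to the total drug fed into the formulation. The sketch below shows that arithmetic with placeholder numbers; none of the values are data from this study.

```python
# Encapsulation efficiency and drug loading for RPM-PLGA nanoparticles
# (placeholder values, not data from the study).
def encapsulation_efficiency(total_drug_mg: float, free_drug_mg: float) -> float:
    """Fraction of the fed drug retained in the nanoparticles."""
    return (total_drug_mg - free_drug_mg) / total_drug_mg

def drug_loading(encapsulated_drug_mg: float, nanoparticle_mass_mg: float) -> float:
    """Encapsulated drug as a fraction of the total nanoparticle mass."""
    return encapsulated_drug_mg / nanoparticle_mass_mg

total_rpm, free_rpm, np_mass = 10.0, 1.8, 50.0   # mg, assumed values
ee = encapsulation_efficiency(total_rpm, free_rpm)
print(f"encapsulation efficiency: {ee:.1%}")
print(f"drug loading: {drug_loading(total_rpm - free_rpm, np_mass):.1%}")
```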
The animals were euthanized at 1 and 3 hours after the operation, the hearts were quickly removed, and the coronary arteries were washed with normal saline.
The administration position was determined with reference to CAG, and the administration vessel was dissected free. In the drug administration group, a total of 4 cm of blood vessel was taken from 1 cm above and below the estimated position; in the dye group, 2 cm of blood vessel from the dyeing position was placed in a labeled Eppendorf tube and then stored in a liquid nitrogen tank. The samples for blood concentration measurement were processed promptly. Tissue homogenization was performed in a tissue homogenizer. After centrifugation, the supernatant was taken and stored at low temperature [14].
RPM detection conditions by HPLC are shown in Table 1. The RPM content of the arterial wall was measured by the internal standard method and the external standard method, respectively.
Pretreatment before injection: the sample was dissolved in 0.5 ml of mobile phase and centrifuged at 10,000 rpm for 3 min, and 50 μl of the sample solution was drawn up and injected.
Selection of Internal Standard.
N,N-diethyl-m-toluamide (DEET) was selected as the internal standard for detecting RPM by HPLC. The internal standard retention time after injection was interpreted with reference to the injection of the internal standard alone.
(1) Selecting the internal standard method for drug extraction from vascular tissue.
(2) Taking the vascular tissue out of the Eppendorf tube and accurately weighing the wet weight of the tissue.
(3) Cutting the tissue evenly into pieces, putting them into a tissue homogenizer, and adding 50 μl of internal standard solution, 1 ml of homogenizing normal saline, and a proper amount of rapamycin standard solution.
(4) Transferring the suspension into a centrifuge tube, adding 1 ml of chloroform solution at the same time, then shaking for 15 min and centrifuging at 3000 rpm for 10 min.
(5) Drawing off the lower organic phase and putting it in a clean test tube.
(6) Adding another 1 ml of chloroform to the upper aqueous phase, repeating the abovementioned steps, and adding the organic phase to the storage test tube.
(7) Storing the test tube of organic phase in an oven at 50 °C overnight and drying it down the next day.
(8) Adding the mobile phase to the sample for HPLC detection [15].
(9) The external standard method is the same as the internal standard method except that the internal standard solution is not added in the corresponding step.
(10) Testing the standard curve and reproducibility.
(11) Taking 6 abdominal aortas of New Zealand white rabbits with a diameter of about 2.5 mm and a length of about 40 mm and accurately weighing the wet weight of the tissues.
(12) Cutting the tissue blocks evenly, putting them into a tissue homogenizer, and adding 1 ml of normal saline and a proper amount of rapamycin standard solution for homogenization.
The recovery rate was determined using RPM standard solutions of 6.25, 25, and 50 μg/ml. For each concentration, 1 ml of the 6.25, 25, or 50 μg/ml standard solution was added to the cut arterial tissue, extracted according to the abovementioned conditions and methods, and measured by HPLC; 20 μl samples were injected each time, and each concentration was determined 4 times. For reproducibility, 1 ml of 2.5 μg/ml RPM standard solution was added to the cut arterial tissue, extracted according to the abovementioned conditions and methods, and measured by HPLC; 20 μl samples were injected each time, and the determination was repeated 5 times. At the same time, in the microenvironment of inflammation and oxidative stress at the site of vascular endothelial injury, the pH- and ROS-dual-responsive drug-loaded nanoparticles can be triggered simultaneously by the slightly acidic local tissue and the elevated ROS environment to release the drug rapamycin and inhibit the proliferation and migration of VSMCs, thus more effectively inhibiting the formation of neointima.
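A minimal sketch of the internal-standard quantification and the recovery-rate arithmetic implied by the procedure above. The calibration slope and peak areas are invented for illustration; the actual calibration comes from the standard-curve experiments described in the text.

```python
# Internal-standard HPLC quantification and recovery rate (illustrative numbers).
def rpm_concentration(area_rpm: float, area_is: float, slope: float, intercept: float = 0.0) -> float:
    """Concentration (ug/ml) from the RPM/internal-standard peak-area ratio and a linear calibration."""
    ratio = area_rpm / area_is
    return (ratio - intercept) / slope

def recovery_rate(measured_ug_ml: float, spiked_ug_ml: float) -> float:
    """Measured concentration relative to the spiked (nominal) concentration."""
    return measured_ug_ml / spiked_ug_ml

slope = 0.042                                    # assumed calibration slope (area ratio per ug/ml)
measured = rpm_concentration(area_rpm=1.02e5, area_is=1.00e5, slope=slope)
print(f"measured RPM: {measured:.1f} ug/ml")
print(f"recovery at a 25 ug/ml spike: {recovery_rate(measured, 25.0):.1%}")
```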
Statistical Treatment.
All data were calculated with SPSS 10.0. The statistical results are expressed as mean and standard deviation (measurement data) or percentage (count data). The data were tested for normality by the Kolmogorov-Smirnov method.
Parameters among groups in normally distributed data were compared by the t test or the Newman-Keuls test of one-way analysis of variance. The Kruskal-Wallis rank test was used for non-normally distributed data. The chi-square test and Fisher's exact probability test were used for count data. The confidence coefficient was taken as α = 0.05.
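The workflow just described (Kolmogorov-Smirnov normality check, then a parametric or rank-based comparison, and chi-square / Fisher tests for count data) can be reproduced with standard tools. The sketch below uses scipy.stats on invented sample data purely as an illustration; the original analysis was run in SPSS.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(5.0, 1.0, 20)   # invented measurement data
group_b = rng.normal(5.8, 1.0, 20)

def compare_groups(a, b, alpha=0.05):
    """KS normality check on standardized data, then t test (normal) or Kruskal-Wallis (non-normal)."""
    def looks_normal(x):
        z = (x - x.mean()) / x.std(ddof=1)
        return stats.kstest(z, "norm").pvalue > alpha
    if looks_normal(a) and looks_normal(b):
        return "t test", stats.ttest_ind(a, b).pvalue
    return "Kruskal-Wallis", stats.kruskal(a, b).pvalue

print(compare_groups(group_a, group_b))

# Count data: chi-square or Fisher's exact test on a 2x2 table (invented counts).
table = [[12, 8], [5, 15]]
chi2, p_chi, _, _ = stats.chi2_contingency(table)
odds, p_fisher = stats.fisher_exact(table)
print(f"chi-square p = {p_chi:.3f}, Fisher exact p = {p_fisher:.3f}")
```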
Results
The particle size distribution of the RPM-PLGA nanoparticles prepared by ultrasonic emulsification can be seen from the light-scattering particle size distribution diagram. RPM-PLGA nanoparticles prepared with PLGA at an LA:GA ratio of 50:50 have an average particle size of 246.8 nm, and the particle size distribution is concentrated in the range of 208-294 nm, showing a narrow distribution [16].
The sample of RPM-PLGA nanoparticles prepared by ultrasonic emulsification was observed with a scanning electron microscope, as shown in Figure 2. The nanoparticles are spherical, smooth in appearance, and well encapsulated. The particle size observed by SEM is within the expected range, which is consistent with the light-scattering results. A flaky lump shadow can be seen in the photo because the Hitachi X-650 scanning electron microscope cannot cool the sample during imaging, and the high temperature under the microscope melts the RPM-PLGA nanoparticle sample. We intend to further construct dual-responsive nanodrugs targeting subendothelial type IV collagen by modifying the pH- and ROS-dual-responsive nanoparticles with targeting units and to characterize their physical and chemical properties.
The free RPM was extracted according to the abovementioned method, and the experimental results showed that the average extraction rate reached more than 99% (Table 2), so the reliability of this method for determining the drug embedding rate of RPM-PLGA nanoparticles can be affirmed.
The research shows that type IV collagen (Col-IV) is the main component of the subendothelial basement membrane in blood vessels, and the KLWVLPKGGGC polypeptide has high affinity for type IV collagen and good targeting of inflamed and/or injured vascular lesions. In view of the above, we covalently bound the polypeptide to DSPE-PEG-maleimide by chemically bonding the sulfhydryl group to the maleimide. DSPE-PEG-KLWVLPKGGGC was successfully synthesized, and then pH/ROS dual-responsive nanoparticles (TAOCD NPs) targeting subendothelial type IV collagen were successfully constructed by a similar, improved nanoprecipitation/self-assembly method. The antiproliferative drug RAP can be effectively encapsulated into TAOCD NPs to form targeted, dual-responsive drug-loaded nanoparticles (RAP/TAOCD NPs). In vitro studies show that, similar to nontargeted nanoparticles (AOCD NPs), Cy5/TAOCD NPs can be effectively taken up by rVSMCs in a dose- and time-dependent manner. Cy7.5/TAOCD NPs can specifically bind to type IV collagen in vitro. After modification with the type IV collagen-targeting polypeptide, the dual-responsive nanoparticles showed a significantly enhanced ability to accumulate in injured carotid artery tissue. The dual-responsive targeted RAP nanoparticles significantly inhibited the abnormal proliferation of the neointima after balloon injury of the carotid artery in rats, further improving the effect of the pH/ROS dual-responsive nanodrug in preventing and treating vascular restenosis in vivo.
Conclusions
Based on the abovementioned research results, we believe that the pH- and ROS-dual-responsive nanodrug delivery system constructed on the basis of beta-cyclodextrins can effectively respond to the low pH and the high level of reactive species caused by endothelial injury and effectively release the drug-carrying molecules in the microacidic and oxidative stress environment, thereby avoiding systemic drug toxicity. Moreover, the targeting ability of the AOCD NPs was significantly improved by modifying the dual-responsive nanocarriers with the targeting unit. From what has been discussed above, we can see that nanoparticles, with their ultrasmall volume and strong drug-polymer interactions, can easily penetrate the vascular wall without damaging the vascular endothelium, can reduce the side effects of drugs, and can protect the drug from enzymatic degradation, so nanoparticles have a unique advantage in the treatment of vascular restenosis. However, there are also some problems in the application of nanoparticles, such as the biocompatibility and biodegradability of the materials, the stability and integrity of the biological macromolecules carried by nanoparticles, and the content and release time of drugs in nanoparticles, all of which need to be carefully investigated. Therefore, we can envisage that using nanoparticles as gene carriers for gene therapy is a promising research direction [17].
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Table 1: Conditions for detecting RPM by HPLC.
Regular spatial structures in arrays of Bose-Einstein condensates induced by modulational instability
We show that the phenomenon of modulational instability in arrays of Bose-Einstein condensates confined to optical lattices gives rise to coherent spatial structures of localized excitations. These excitations represent thin disks in 1D, narrow tubes in 2D, and small hollows in 3D arrays, filled in with condensed atoms of much greater density compared to surrounding array sites. Aspects of the developed pattern depend on the initial distribution function of the condensate over the optical lattice, corresponding to particular points of the Brillouin zone. The long-time behavior of the spatial structures emerging due to modulational instability is characterized by the periodic recurrence to the initial low-density state in a finite optical lattice. We propose a simple way to retain the localized spatial structures with high atomic concentration, which may be of interest for applications. Theoretical model, based on the multiple scale expansion, describes the basic features of the phenomenon. Results of numerical simulations confirm the analytical predictions.
Introduction
Optical lattices formed by laser waves are the media where Bose-Einstein condensates (BEC) exhibit remarkable properties. Over the last few years there has been a significant progress in manipulation with BEC confined to optical lattices which resulted in observation of diverse new phenomena, such as coherent emission of Bose-condensed atoms [1], Bloch oscillations and Landau-Zener tunneling [2], atomic Josephson effect [3], Mott insulator -superfluid transition [4]. Realization of BEC arrays in 2D and 3D optical lattices [4,5] has opened new perspectives for investigation of fundamental properties of quantum gases in lower dimensions. Theoretical studies on the dynamics of BEC in a periodic potential critically rely on the concepts of band structure and Bloch states, acquired from the solid state physics. Recent developments in the field show that the above concepts, originally constructed for linear periodic systems, also play the crucial role in the physics of nonlinear periodic systems, such as BEC arrays in optical lattices [6,7,8]. A good correlation between predictions of the band theory for the BEC dynamics in a periodic potential and experimental results with static, moving, and accelerating 1D optical lattices was demonstrated in [9]. Wide range of spatiotemporal behaviour of a BEC in 1D linear-and circular-chain optical lattices in the tight-binding limit, when the dynamics is described by the discrete nonlinear Schrödinger equation, was numerically investigated in Ref. [10]. Transmission of matter wave pulses incident in a 1D optical lattice, including their collision dynamics, was theoretically considered in [11].
A subject, which is interesting both from the viewpoints of the theory and applications of BEC, is the origin and dynamics of spatially localized nonlinear excitations in the condensate confined to a periodic potential. A stimulating discovery was the proof of the existence of bright solitons in the effectively 1D BEC arrays with repulsive interaction between atoms [8,12,13]. In view of the fact that a continuous BEC with repulsive interatomic forces does not support spatially localized humps of atomic concentration, BEC arrays in optical lattices are considered to be the most inviting media for the creation and manipulation of soliton-like structures with ultracold atoms. The physical mechanism by which this possibility arises is similar to that of electrons in a periodic potential, in specific cases acquiring negative effective mass. The presence of the optical lattice can invert the sign of the dispersive term, which then balances the action of the nonlinearity. Therefore, bright solitons in BEC arrays with repulsive interaction between atoms are possible in the presence of the periodic potential of the optical lattice. It is worth to mention here that although a continuous BEC with attractive interaction between atoms can bear bright solitons, its other property leading to a collapse of the condensate at some critical atomic concentration (for review see e.g. [14]), makes it less appealing for the above purpose. In recent papers [15,16], reporting on the first experimental observation of matter-wave bright solitons in a continuous BEC with attractive interatomic forces ( 7 Li), the macroscopic quantum bound state of Bose-condensed atoms (bright soliton) was shown to exist in a narrow window of atomic numbers (around N ∼ 5000). Beyond that window atomic wavepackets undergo collapse or explosion. Moreover, the bright soliton with attractive forces between atoms subject to expulsive potential, as applied in [15,16], appears to be of limited lifetime due to the effect of quantum tunneling [17] (termed by authors as quantum evaporation), which leads to eventual explosion of the soliton. Therefore, matter-wave bright solitons composed of repulsive atoms in optical lattices, which are free of the above constraints, seem to be advantageous for applications.
Out of existing studies on localized excitations in BEC arrays, little attention has been devoted to methods of creation of such structures, so far. It has recently been suggested to employ the modulational instability, which constitutes one of the most important phenomena associated with the evolution of nonlinear waves, to create bright BEC solitons in a 1D optical lattice [8]. A variety of localized solutions are found to the one-dimensional nonlinear Schrödinger equation with a periodic potential, some of which are spatially and temporally stable [12]. Interesting consequence of the modulational instability in a continuous BEC was reported in [18]. The authors have shown that the modulational instability leads to fragmentation of the ferromagnetic phase in a spinor Bose-condensate. Another manifestation of the modulational instability as leading to dynamical superfluid-insulator transition in a BEC confined to an optical lattice and magnetic potential has been studied in [19].
It is appropriate to mention, that the phenomenon of modulational instability is well studied in different areas of nonlinear physics, since initiated in the 1960s, by predictions in hydrodynamics [20], plasmas [21,22], and nonlinear optics [23,24]. For later reviews on modulational instability in Hamiltonian systems the reader is addressed to references [25,26,27]. In view of the existence of many features of ultracold atomic gases similar to those observed in the above mentioned systems, there is a solid ground to expect rich dynamics induced by the modulational instability in such a nonlinear system as Bose-Einstein condensates.
In the present paper we study the dynamical processes in BEC arrays confined to one-, two-, and three-dimensional optical lattices, which are due to the modulational instability. Particularly we focus on the coherent spatial structures in 2D and 3D BEC arrays, which originate from the modulational instability.
The paper is organized as follows: Section 2 contains the derivation of main equations, as well as brief exposition of the modulational instability. In section 3 we present the energy band structure for a BEC distributed over the periodic potential. In section 4 we analyze the features of spatial structures in arrays of BEC, originated form the modulational instability of the initial waveforms, corresponding to different points of the Brillouin zone. Section 5 summarizes the results of this study.
The multiscale analysis and modulational instability
To develop the model we consider a dimensionless 3D Gross-Pitaevskii (GP) equation where r = (r x , r y , r z ). In (1) the spatial coordinates are normalized to ℓ, ℓ being a characteristic size of the potential (say, the smallest of its periods), the time is mesured in units of 2mℓ 2 / , and the amplitude of the order parameter is normalized to the total number of atoms per unit volume N /ℓ 3 . Then the nonlinearity coefficient χ is given by χ = 8πN a s /ℓ, where a s is the s-wave scattering length. The potential V (r) is assumed, for the sake of simplicity, to be separable, i.e. of the form V (r) = j V j (r j ), j = x, y, z (which corresponds to the majority of experimental settings), and periodic in each of the spatial directions: V j (r j ) = V j (r j + a j ), with a j the period in the direction r j (in accordence with the accepted scaling a j 1). For convenience, the equation (1) is considered subject to periodic boundary conditions ψ(r) = ψ(r x + L x , r y , r z ), etc., where L j = N j a j with N j and L j respectively, the number of primitive cells and the length of the system in the direction r j . The theory is developed for the small amplitude limit, when the multiscale analysis is applicable. Hence, we look for a solution to equation (1) in the form where the ψ j are functions of the scaled independent variables τ p = ǫ p t, ξ p = ǫ p r, p = 0, 1, 2, ..., with ǫ a small parameter to be specified later. Denoting with ω α j (q j ), and Φ α j (r j ) ≡ |α j , q j , the eigenvalues and eigenfunctions of the periodic operators L r j = −∂ 2 r j + V j (r j ), we have that the solution to a linear part of the equation (1), Lψ = 0, with L = i∂ t − j L r j , can be written in the form |m x m y m z = j Φ m j (r j )e iωα j (q j )t , with Φ m j (r j ) Bloch states of the corresponding 1D linear problems. Here m j denotes the couple of quantum numbers {α j , q j }, with α j the band index and q j the component of the wave vector in the j direction (note that the imposed boundary conditions obviously imply that q j ≡ q j,n = 2π L j n so that the extension of the Brillouin zone (BZ) in the j direction is [−π/a j , π/a j ]).
Substituting the equation (2) into (1), and collecting terms of equal powers in ǫ, one arrives at the set of equations Lψ n = M n , with where ∇ p denotes the gradient with respect to ξ p . Since we are interested in instabilities of the condensate wavefunction, we investigate the influence of the nonlinear term in the equation (1) on the Bloch states of the underlying linear problem. To this end we take as starting point in the expansion (2), a modulated state of the form with ω 0 ≡ j ω α 0,j (q j ) (the subscript zero refers to the chosen band, below we consider the two lowest ones). Then the first order equation is automatically satisfied by ψ 1 , while the equation of the second order can be solved in the form where the prime denotes α = α 0 in the sum and have taken into account that the terms with q = q 0 give zero contribution. Analysis similar to that of reference [8] shows that A = A(R; ξ 2 ...; τ 2 , ...) with R = ξ 1 − vτ 1 and v = − α 0x α 0y α 0z |2i∇|α 0x α 0y α 0z is the group velocity of the carrier wave. The coefficients B α are found as where x , q 0,x δ αy,α 0,y δ αz,α 0,z (other coefficients Γ are obtained by cyclic permutations of x, y, z). Finally, considering the orthogonality of M 3 to |m 0x m 0y m 0z we obtain the following 3D NLS for the slowly varying envelope where we assumed A not depending on ξ 2 , and introduced the inverse of the effective mass tensor and the effective nonlinearitỹ Now we are at the point to discuss the small parameter. To simplify, we consider a cubic box with a j = ℓ and L j = L. On the one hand, as it was mentined above, the physical order parameter is normalized to the total number of atoms N , while the formal wave function ψ must be normalized to one: |ψ| 2 dr = 1. On the other hand all parameters in the equation (6) must be of order one. Next we notice the oscillatory character of the Bloch functions, in a general situation leads to the fact that the integrals in the last expression (8) forχ has a numerical smallness (see e.g. the examples below). Then, taking into account that χ = 8πN a s /ℓ one can define ǫ = N a s ℓ 2 /L 3 . Consider now a condensate with N = 10 5 of 87 Rb atoms (a s ≈ 5.5 nm) homogeneously distributed over a cubic box with L = 100 µm having N i = 100 cells in each direction (respectively ℓ = 1 µm). Then we compute ǫ ≈ 0.014. A physical situation when ǫ is not too small (say in experiments [5] it can be identified as ǫ ≈ 0.257) the multiple scale expansion, strictly speaking, is not valid. For this reason below we employ the numerical simulations, which, however, clearly illustrate that the small amplitude limit gives remarkably good estimates for the characteristic scales of the problem and allows one to understand the symmetry of the developed patterns.
Let us analyze the stability problem within the framework of equation (6), i.e. look for a solution of the form (3) where the modulational instability is understood in the sence of a plain wave instability with respect to small modulations of its amplitude. Below in section 4 we consider the modulational instability of solutions to equation (1) in the form of Bloch states induced by periodic small amplitude and long-wavelength perturbation.
Energy band structures
In the previous section we assumed the knowledge of the Bloch states and energy bands of the underlying linear Schrödinger problem. For generic multidimensional potentials this could be a quite difficult problem to solve. To avoid difficulties we shall restrict here to the case of separable potentials of trigonometric form i.e. we take Here A is a constant fixing the depth of the lattice and 2π/κ j the periodicity in the r j directions. In the following we fix κ j = 2 for all j so that the potential will be a superposition of identical one dimensional Mathieu potentials, and the corresponding Bravais lattice will have simple cubic symmetry. The band structure and the Bloch states of the linear Schrödinger equation are then obtained as where α denotes the band index and ǫ α (k j ) and ψ α (k j , r j ) are the eigenvalues and eigenfunctions of the Mathieu equation This equation can be transformed, making use of the expansion of the wave function into momentum eigenfunctions (Fourier expansion), into a tridiagonal problem whose solutions can be obtained with high accuracy by means of continued fractions. As an example of these calculations we report in figure 1 the first two energy bands of the 2D separable Mathieu potential, in the case A = 1 (note that with the choice κ = 2 the potential has periodicity π in both directions, so that the BZ is a square of size 2). In figure 2 we also show the sections of constant energy for the bands depicted in figure 1. By changing the amplitude of the potential the band structure will change, and the bands become more flat and more separated as the amplitude of the potential is increased. In figures 3,4 we show the contour plots of Bloch states at different points in the BZ. We remark that, due to the separability of the potential, both the derivative and the curvature of the band are independent on k y (respectively k x ) for fixed values of k x (respectively k y ) in the BZ. This is seen in figure 5, where the group velocity and the components of the reciprocal mass tensor are reported for the bands in figure 1. Similar calculations can be easily done for the 3D separable Mathieu potentials. The analysis can be extended to lattices with rectangular or tetragonal symmetry, as well as, to more general separable potentials such as the ones considered in [28].
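As a small numerical illustration of the plane-wave (tridiagonal) formulation mentioned above: for a separable potential with V(x) = A cos(2x) (κ = 2, lattice period π), the Bloch Hamiltonian in the basis exp(i(k + 2n)x) has diagonal entries (k + 2n)² and off-diagonal couplings A/2, and its lowest eigenvalues give the 1D bands whose sums build the separable 2D/3D band structure. The sketch below simply diagonalizes the truncated tridiagonal matrix rather than using the continued-fraction technique named in the text; the truncation size and A = 1 are our own choices.

```python
import numpy as np

def mathieu_bands(A=1.0, n_g=15, n_k=101):
    """Lowest bands of -psi'' + A*cos(2x)*psi = E*psi via a truncated plane-wave basis.

    Basis exp(i(k + 2n)x), n = -n_g..n_g; cos(2x) couples n with n +/- 1 by A/2.
    """
    ns = np.arange(-n_g, n_g + 1)
    ks = np.linspace(-1.0, 1.0, n_k)          # Brillouin zone [-pi/a, pi/a] with a = pi
    bands = np.empty((n_k, ns.size))
    off_diag = np.full(ns.size - 1, A / 2.0)
    for i, k in enumerate(ks):
        H = np.diag((k + 2.0 * ns) ** 2) + np.diag(off_diag, 1) + np.diag(off_diag, -1)
        bands[i] = np.linalg.eigvalsh(H)      # sorted eigenvalues = band energies at this k
    return ks, bands

ks, bands = mathieu_bands()
print("band 1 spans", round(bands[:, 0].min(), 3), "to", round(bands[:, 0].max(), 3))
print("band 2 spans", round(bands[:, 1].min(), 3), "to", round(bands[:, 1].max(), 3))
```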
In the following sections we shall use these results to compare numerical studies of the instability of Bloch states in the presence of nonlinear interactions with the prediction (equation (9)) of the previous section.
Figure 5. The group velocity (a) and the reciprocal of the effective mass ω xx (respectively ω yy ) (b), as a function of k x (respectively k y ), for arbitrary k y (respectively k x ) in the BZ. The continuous and broken lines refer, correspondingly, to the first and second band of figure 1 (notice from (a) that there is no singularity for ω xx (or ω yy ) at the origin).
Numerical simulations
The linear stability analysis, described in section 2 gives the growth rates and spectra for the modulational instability. This information appears to be sufficient to predict the spatial arrangement and symmetry of the emerging soliton-like structures. However, to gain more insight into the development of modulational instability one has to recourse to numerical simulations.
For the numerical study we have used the potential V j (r j ) = A cos(κr j ) for j = x, y, z, which is motivated by the recent experiments [4,5]. Then in the above formulas the terms corresponding to j = x, j = x, y, and j = x, y, z are retained, respectively for 1D, 2D and 3D optical lattices. Also in the case at hand M −1 α,xx , M −1 α,yy , and M −1 α,zz have the same functional dependence on the arguments, which means that they coincide when q x = q y = q z . Numerical solution of the equation (1) has been performed by the operator splitting procedure using multi-dimensional fast Fourier transform [32]. The spatial domain x, y, z ∈ [−L/2, L/2] (i.e. L x = L y = L z = L) was represented by an array of 128 × 128 × 128 points. For the 1D and 2D cases the results were checked by increasing the number of grid points (512 and 256 × 256, respectively), which showed no qualitative difference. The time step was δt = 0.001. To be specific, we concentrate on the case of positive scattering length, χ = 1.0, choose κ = 2.0 (i.e. a x = a y = a z = π), and ρ = 0.5.
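For orientation only, the following is a compact 1D sketch of the operator-splitting (split-step Fourier) scheme named above, applied to the dimensionless equation i ψ_t = -ψ_xx + A cos(κx) ψ + χ|ψ|²ψ with the quoted parameters χ = 1, κ = 2, amplitude ρ = 0.5, and a Bloch-like initial state ρ sin(x). It is our own illustration, not the authors' 3D code, and the grid size and evolution time are arbitrary choices.

```python
import numpy as np

# 1D split-step Fourier sketch for i psi_t = -psi_xx + V(x) psi + chi |psi|^2 psi.
L, N, dt, steps = 12 * np.pi, 512, 1e-3, 30000
chi, A, kappa, rho = 1.0, 1.0, 2.0, 0.5

x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
V = A * np.cos(kappa * x)
psi = rho * np.sin(x).astype(complex)            # Bloch-like initial state at the band edge

half_kinetic = np.exp(-1j * (k ** 2) * dt / 2)   # exact half-step for the -d^2/dx^2 part in k-space
for _ in range(steps):
    psi = np.fft.ifft(half_kinetic * np.fft.fft(psi))
    psi *= np.exp(-1j * (V + chi * np.abs(psi) ** 2) * dt)   # potential + nonlinear step
    psi = np.fft.ifft(half_kinetic * np.fft.fft(psi))

print("peak density max|psi|^2 =", np.abs(psi).max() ** 2)   # growth signals modulational instability
```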
1D optical lattice
Basic features of the development of modulational instability and formation of solitonlike excitations in effectively 1D optical lattice was described in [8,12]. Below we extend the parameter values, which can lead to formation of qualitatively different types of localized excitations.
The coefficient of nonlinearity χ in equation (1) is an important parameter which determines such a property of the BEC as macroscopic quantum self-trapping [29,30,31]. At strong nonlinearity the tunneling of atoms between adjacent wells of the optical lattice is suppressed, despite the significant population imbalance, due to the macroscopic quantum self-trapping effect. This property affects the development and further evolution of spatially localized excitations in BEC arrays. In figure 6 we report two types of soliton-like excitations developed in 1D BEC arrays at weak and strong nonlinearities. All remaining parameters, except χ, are similar in these two cases. Envelope soliton-like modes, which occupy a few lattice sites, are formed at weak nonlinearity, while intrinsic localized modes, occupying a single lattice site, are formed at strong nonlinearity. The time required for the development of these excitations is also different. The localized excitations represented in figure 6 are thin disks filled with BEC atoms, where the atomic density is much greater than in neighbouring array sites. The dynamics of these excitations is governed by the 1D nonlinear Schrödinger equation [8], and for particular parameter settings they can be stable [12] or have very long (relative to the duration of experiments) recurrence times. The separability of the periodic lattice potential in equation (1) leads to similar scenarios of the development of modulational instability also in the 2D and 3D cases considered below.
2D optical lattice
Now let us consider in more detail the development of soliton-like excitations and the possibility to stabilize them in a 2D optical lattice. A 2D optical lattice is formed by overlapping two laser standing waves along the x and y axes, superimposed on a continuous BEC in a magnetic trap. The condensate is then fragmented and confined in many narrow tubes centered at lattice potential minima and directed along the z axis. As a result of modulational instability, the initial distribution of the atomic density over the tubes in the optical lattice is changed. In order to analyze the instability of initial waveforms we consider Bloch states corresponding to different points of the BZ. Let us consider the points q 0 = (±1, ±1) at the boundary of the BZ (figure 1). Then, restricting consideration to the two lowest bands (α = 1, 2), one can distinguish three different cases: < 0 and the wave is unstable. The BEC population dynamics in this case is reported in figure 7. The most interesting feature of the modulational instability developed is that it evolves in a regular structure which represents symmetrically spaced localized in space (we call them soliton-like) distributions (see figure 7b). Each of the humps shown in the figure represents a tightly confined tube along the z-direction. The number of tubes is proportional to the size of the box. In order to illustrate the last statement we performed calculations (see figure 8) with parameter settings similar to those of figure 7 with the exception of domain size L = 28π.
In order to understand this behavior we notice that from the equation (9) we get that the excitations with characteristic scales λ > λ min = 2π 1 is the inverse of the effective mass tensor, are unstable. The largest increment (i.e. the large Im|Ω|) is achieved for λ 0 = √ 2λ min . This has two consequences. First, the symmetry group of the developed structure must be of C n type with the symmetry axis coinciding with that of the condensate, and second, an effective scale λ ef f ∼ λ 0 must be a characteristic scale of the most excitation which at the beginning of the evolution. One can estimate the value of λ 0 taking into account that for a chosen point of BZ the inverse of the effective mass tensor is |M −1 1 | ≈ 6 (see figure 5) and for the solutions studied numerically the effective nonlinearity isχ ≈ 0.1935 (the respective normalized eigenfunctions are approximated by 2 π sin(x) [8], which gives λ 0 ≈ 20.053. This result corroborates with the distances between the humps along the radial direction measured from the direct numerical simulations: λ ef f ≈ 23.0 ( figure 9). Next we have to take into account that the carrier wave mode is chosen at the point q = (±1, ±1) placed at the corner of the BZ which corresponds to waves whose phases propagate in the directions x = ±y. This immediately specifies the symmetry C 4 . In other words, one can specify the points where the humps (confined tubes) should appear: in the plane (x, y) these are intersections of lines x = ±y with the circles of radii 1 2 + p λ ef f where p = 0, 1, .... In a square box of size L one will observe L 2 /λ 2 ef f humps. This estimate being rather rough (it does not take into account boundary effects) was confirmed by our numerical simulations. Also one can predict that the characteristic diameters of the humps should be less than λ min (≈ 7.1 in our case). This gives an estimate for the BEC density in a tube n t versus the initial density n 0 : n t = LxLy λ 2 min n 0 , which in our case gives n t ≈ 38n 0 . In order to evaluate the increase of the BEC atomic density in a soliton-like excitation, we have numerically integrated |ψ(x, y, t)| 2 at t = 85 (figure 7b) over individual lattice sites. The result is that 65 % of the BEC matter, initially uniformly distributed over the optical lattice, are collected in four sites due to the modulational instability. Therefore, the increase of the atomic density in a localized excitation is n t = 196 4 · 0.65 · n 0 ≃ 32n 0 , wich is close to the above analytical estimation. For the 3D case considered in the next subsection this last estimate for the BEC density in hollows n h is modified as: n h = LxLyLz λ 3 min n 0 . As we have seen, the modulational instability results in formation of regular pattern of soliton-like excitations in arrays of BEC (figures 7b, 8). However, they eventually decay in accordance with equation (6), which does not support stable solitonic solutions in 2D and 3D. A simple way to retain these excitations would be the increasing of the strength of the periodic trap potential, when excitations are formed. High potential barrier between lattice sites then suppresses the atomic tunneling, providing strong confinement. This idea is illustrated in figure 10, where we show the evolution of the BEC atomic distribution. Until t = 85 the dynamics is guided by the modulational instability, resulting in formation of regular spatial structures with BEC. 
As the soliton- To understand the stabilization phenomenon we notice that λ min is of order of 2a x (2a y ), which means that the most of BEC atoms are concentrated in a unique cell (see figure 9). This type of excitations closely resemble the intrinsic localized modes in BEC in the tight-binding approximation [33,34]. By increasing the potential amplitude one makes the optical lattice more deep, which in turn leads to decreasing both the probability of tunneling of atoms from the most populated cell to neighbor cells and a "number" of atoms in classically forbidden zone. As a consequence, the BEC density in the most populated cell is growing, which is illustrated by figures 7c and 10.
Case 2.
The eigenfunctions Φ m 0,x and Φ m 0,y belong to different zones, say Φ m 0,x belongs to the first lowest zone: m 0,x = (1, ±1) and Φ m 0,y belongs to the second lowest zone: m 0,y = (2, ±1). Then M −1 1,xx < 0 and M −1 1,yy > 0, and the condensate is unstable. In this case the instability condition takes the form 0 < M −1 2,yy K 2 y − |M −1 1,xx |K 2 x < 4χρ 2 , and the most unstable excitations have K 2 x < M −1 2,yy |M −1 2,xx | K 2 y (which is related to the fact that an eigenfunction Φ m 0,x belongs to the "unstable" branch). That is why the main instability results in a pattern having different symmetry: it develops in the xdirection. Along this direction the pattern is rapidly split in a sequence of solitary waves. The instability develops also along y-direction, but at much larger time scales (see figure 11). To estimate the number of humps, we take into account that the periodic boundary conditions impose a characteristic scale K x = 2π/L which leads to the following estimate for the most unstable scale λ 0 , and thus to λ ef f : . For the case, illustrated in figure 11a we obtain λ 0 ≈ 24, which yields for the number of humps in the x direction LK max /2π ∼ L/λ 0 ∼ 2. The instability is developed in y-direction as well, which is characterized by much larger spatial and temporal scale, and for experimental purposes can be neglected. In order to preserve the developed spatial structures with high atomic density, it would be enough to increase the strength of the periodic trap potential, since the excitations fit the sequence of single lattice cells along the y direction (figure 11b).
4.2.4.
Other points of the Brillouin zone. It is of particular interest for applications to explore the development of modulational instability of initial waveforms, corresponding to different points of BZ. We have tried the Bloch states with wavevectors spanning the first BZ. The main observation from the relevant numerical simulations is that, the modulational instability results in formation of different spatial structures with BEC depending on the initial waveform. The time required to formation of these structures is also different.
As an example, in figure 12 we report the result of modulational instability of the Bloch state with m 0,x = m 0,y = (0.9, ±0.9). This initial distribution seems to be of interest because it resembles the situation, when a BEC of comparatively small size is loaded into the optical lattice, so that minor number of lattice sites are filled in with BEC, others being almost empty. In this case the cells in the central part of the optical lattice contain more BEC atoms, than the peripheral ones. The localized excitations developed due to the modulational instability occupy the central nearest-neighbor lattice cells.
3D optical lattice
Qualitatively similar behaviour of the modulational instability with respect to the formation of soliton-like excitations was observed in the 3D case. The developed structures are small hollows filled with BEC atoms of much greater density compared to surrounding array sites. Figure 13 illustrates the emergence of spatial structures with high atomic concentration in a 3D BEC array, shown as a section along the main diagonal of the cubic domain with L = 12π. The time interval is selected to display the emergence of soliton-like excitations at t ∼ 28, and their subsequent decay. The relative atomic density in a localized excitation is estimated, as in the 2D case, by numerical integration of |ψ(x, y, z, t)|² at t = 28 over individual lattice sites. In the 3D case ∼ 67% of all the BEC matter, initially uniformly distributed over 1728 lattice sites, is collected in eight sites due to the modulational instability, therefore n_h = (1728/8) · 0.67 · n_0 ∼ 145 n_0. For the considered initial parameter settings the long-term evolution of the atomic distribution over the optical lattice in both the 2D and 3D cases exhibits the recurrence phenomenon, which reveals itself as the return to the primary low-density state. In order to prevent the decay of soliton-like excitations, similarly to the 2D case, the strength of the periodic potential has to be increased adiabatically when the excitations are formed, e.g. at t ∼ 28 for the above 3D parameter settings (figure 13).
Figure 13. Formation of soliton-like excitations by t ∼ 28 in a 3D BEC array. d is the distance along the main diagonal of the cubic domain with L = 12π. The initial condition is ψ(x, y, z, 0) = 0.5 sin(x) sin(y) sin(z).
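The density-enhancement estimates quoted for the 2D and 3D cases follow from simple bookkeeping (the fraction of the condensate collected divided by the fraction of sites it occupies); the short helper below just reproduces the ≈32 n_0 and ≈145 n_0 figures from the numbers given in the text.

```python
# Density enhancement relative to the initial uniform density n0.
def density_enhancement(total_sites: int, occupied_sites: int, collected_fraction: float) -> float:
    return (total_sites / occupied_sites) * collected_fraction

print("2D:", round(density_enhancement(196, 4, 0.65)), "x n0")    # ~32 n0
print("3D:", round(density_enhancement(1728, 8, 0.67)), "x n0")   # ~145 n0
```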
As far as the initial state is concerned, we remark that it could be realized experimentally starting from a uniform condensate with quasi-momentum k = 0 (i.e. with equally filled potential wells) and accelerating the optical lattice in a particular direction. The acceleration of the lattice, being equivalent to a gravitational field, will induce the initial k = 0 state to move along the band until it reaches the edge of the band, where the instability develops. Depending on the direction selected in real space (corresponding to a fixed direction in reciprocal space), one will obtain different final localized states from the band-edge instability. The issues relevant to the preparation of particular Bloch states of the condensate in an optical lattice are discussed in [9].
It should be pointed out that the similarity of the development of the modulational instability in all three dimensions is a result of the separability of the periodic trap potential. Although this corresponds to the majority of experimental situations, the case of non-separable potentials (non-orthogonal lattices) is of significant interest.
Conclusions
We have studied the modulational instability in arrays of BEC confined to optical lattices. The formation of coherent spatial structures of BEC is shown to be the principal feature of the evolution of the atomic distribution over the optical lattice when it is guided by the modulational instability. In the 1D case the developed structures are matter-wave solitons, which may be regarded as thin disks of highly concentrated BEC atoms. Depending on the strength of the nonlinearity, two distinct types of matter-wave solitons may develop: envelope solitons (weak nonlinearity), which occupy a few lattice sites, and intrinsic localized modes (strong nonlinearity), each of which fits a single lattice site. In the 2D and 3D cases the emerging spatial structures represent soliton-like excitations regularly arranged over the optical lattice which are, however, not stable. We proposed a simple way to stabilize these localized excitations by increasing the strength of the optical lattice once they have formed due to the modulational instability. Different initial waveforms, corresponding to particular points of the BZ, were tested for instability. The character of the developed spatial structures is shown to depend on the selected Bloch state. A theoretical model based on the multiple-scale expansion describes the primary features of the emerging soliton-like structures, including the number of localized excitations, their spatial symmetry and the relative density of BEC atoms they contain. The proposed method for creating and preserving soliton-like spatial structures of highly concentrated BEC atoms may be of interest for the physics and applications of BEC.
Love Island, Social Media, and Sousveillance: New Pathways of Challenging Realism in Reality TV
This paper explores the changing nature of audience participation and active viewership in the context of Reality TV. Thanks to the ongoing rise of social media, fans of popular entertainment programmes continue to be engaged in new and innovative ways across a number of platforms as part of an ever-expanding interactive economy. Love Island 2018 has pushed the boundaries of this participatory culture by exploiting new forms of digital media in order to encourage multi-platform consumption of content by the show's fans. This paper argues that while this strategy has enabled Love Island to successfully exploit monetization opportunities, it has simultaneously created opportunities for the show's audience to group together online and form communities of resistance which have placed themselves in opposition to the show's producers. These fan communities have harnessed the connective powers of social media to pool together their means and knowledge and to eventually exercise modes of sousveillance designed to hold “powerful” actors to account for perceived wrongdoing on the show. Examples of such behavior during Love Island 2018 hint at a paradigm shift in the relationship between television producers and audiences and demonstrate the new pathways available to audiences as they seek to answer the perennial question of this entertainment genre: how real is Reality TV?
INTRODUCTION
In the past two decades, considerable scholarly analysis has reviewed the growth and proliferation of different forms of Reality TV (Nabi, 2007; Andrejevic, 2008). With most commentators agreeing that Reality TV has "moved from the margins of television culture to its core" (Orbe, 2008, p. 345), a central theme of academic discussion has been the extent to which this form of entertainment truly offers realism and authenticity (Biressi and Nunn, 2005; Hall, 2009; Hill, 2014). Reality TV's claim to "radical inclusiveness and transparency" (Kjus, 2009, p. 281) is a key appeal to audiences (Papacharissi and Mendelson, 2007), but the issue of realism continues to dominate discussions and the question-for audiences and scholars-often remains: how real is Reality TV (Escoffery, 2006)? Relatedly, academic discussions of Reality TV have also focused on the ways in which this televisual format has sought to generate audience participation. This, it is argued, has led to the transformation of viewers from passive consumers to active participants as part of an ever-expanding interactive economy (Andrejevic, 2004; Holmes, 2004). This paper argues that the latter of these developments has recently begun to impact the former-audiences engaged within a participatory culture are increasingly querying the authenticity of Reality TV.
As will be illustrated below, this process has been aided by the rapid expansion of social media in the past 15 years and the connective power of platforms such as Twitter. So while social media provides television producers with considerable potential for multi-platform exposure of their content as well as new pathways toward engagement with audiences, these same platforms also empower individuals by bringing them together in more cohesive fan communities who are able to share knowledge and pool resources (Lévy, 1997). As a result, what appears to have emerged are increasingly "savvy" (Andrejevic, 2008, p. 27) fan communities who, encouraged to proactively participate with their favorite television shows and supplied with the tools to do so via social media, pursue behaviors perhaps unforeseen, unexpected and ultimately damaging to these very same shows as they challenge Reality TV's claims to realism and authenticity. Love Island 2018, in which the show's fan community mobilized online to exercise modes of surveillance, or more accurately sousveillance (Mann et al., 2003), as part of their interaction and engagement with the show, serves as a particularly salient example.
The paper begins by briefly tracing the historical development of attempts to elicit viewer engagement across different forms of popular entertainment, culminating at the turn of the twenty-first century with the introduction of Reality TV shows such as Big Brother. Next, the paper briefly discusses notions of authenticity and realism in the context of Reality TV and audience expectations. The paper then considers how a contemporary form of Reality TV, Love Island, has pushed the boundaries of audience participation by exploiting the connective potential of social media, and in doing so, how Love Island's producers have reaped commercial rewards. The discussion then outlines concepts from surveillance scholarly literature such as lateral surveillance and sousveillance to explain the participatory nature of these forms of watching and monitoring which have grown considerably in recent years thanks to social media. The final section of the paper brings these strands together and draws upon examples from Love Island 2018 to demonstrate how audiences have re-imagined their engagement and participation and repurposed this to exercise modes of sousveillance to hold the show's producers to account.
REALITY TV AND THE INTERACTIVE ECONOMY
Contemporary audiences of Reality TV are expected and prompted not to be passive consumers but active participants, shaping the day-to-day narrative and ultimately co-producing outcomes (Holmes, 2004). In some ways, these forms of audience engagement are not new and in fact have a long tradition preceding the advent of Reality TV. Kjus (2009) outlines the way classic American radio-based games and quizzes such as Vox Pop, Idol, and the Major Bowes Amateur Hour provided early examples of entertainment as a "combination of social engagement and responsibility" for audiences (2009, p. 279). Such programming encouraged "ordinary" listeners to take part in the production of content and offered them the opportunity to determine outcomes. These forms of social engagement and participation were later followed by television-based programmes, most notably The $64,000 Question, a quiz format subsequently redeveloped, re-hashed, and re-booted time and time again across many different countries over the next several decades. The next iteration of this growing participatory culture came in the creation of popular day-time talk shows such as Oprah, in which "ordinary" people were once again tasked with generating content, this time in an increasingly politicized context as audience members were recruited to discuss pressing social issues and contribute to debates. Again, the success of this format prompted many similar versions to emerge in the US and beyond, replicating a format in which a participatory platform, fronted by a pseudo-political host, encouraged audience members to contribute to debate, initially in person in the studio but, as time went on, via telephone, text, email, and of course in recent years via social media (Kjus, 2009).
Despite these televisual formats offering pathways in which the public could participate in these programmes, the nature of this participation was heavily regulated, and the agency of audience members and viewers at home remained limited. For one thing, individuals selected to participate were often carefully chosen and their image curated to exploit particular characteristics or to chime or clash with the sensibilities of home audiences (Anderson, 1978). But what also marks out audience participation in these contexts as compared to that which Reality TV would later claim to offer is that participants engaging in quizzes and talk shows were only expected to either answer set questions or offer opinions on pre-determined topics. Moreover, viewers at home were largely still consuming these shows passively since their ability to contribute was also restricted in the same ways. As such, audiences were still predominantly passive in the sense that they had few opportunities to determine outcomes or genuinely contribute to the development of narratives.
The introduction of Reality TV shows at the turn of the twenty-first century such as Big Brother and Pop Idol was the beginning of a rapid paradigm shift which began to restructure the "interface between industry, text and audience" (Holmes, 2004, p. 214). While popular docu-soaps such as The Real World, Cops, and The Osbournes certainly contributed to the advancement of Reality TV during the 1990s (Doyle, 1998; Gillan, 2004), these formats still cast viewers as passive recipients. Big Brother, on the other hand, heralded the move toward expressly and deliberately empowering viewers to shape outcomes by enabling them to choose, week by week, who remained in the show, thus co-opting the audience to co-produce what happened next. Kjus describes this key development as "a shift from the asymmetrical communication of broadcasting to the symmetry of telephony and the Internet" (2009, p. 295). As Holmes (2004) has noted, this apparent empowerment of audiences was not merely a sideshow within these programmes but rather it was placed at the heart of their design and marketing. This was perhaps most obviously demonstrated in the slogans used to promote these shows, which emphasized the central role of audiences and their empowerment: "You decide!" (Big Brother) and "But this time you choose!" (Pop Idol) (Holmes, 2004). This ability to co-produce outcomes and determine directions of the narrative may be thought of as the democratization of production within an increasingly interactive economy (Andrejevic, 2004). This growing interactivity was facilitated by technological advancements in the early twenty-first century, most obviously the move toward Web 2.0, representing a shift from static webpages to dynamic and collaboratively constructed online content, including the rise of social media platforms (O'Reilly, 2005). The true extent of this interactivity has been queried by some, and Jenkins (2006, p. 3) in particular has argued that despite constantly emerging new forms of audience engagement in television and other media, "not all participants are created equal... and some consumers have greater abilities to participate in this emerging culture than others." Whether this participatory culture truly encompasses and embraces all audiences is perhaps unclear, but what is certain is that this move toward an interactive economy in television consumption was ultimately designed for the benefit of television producers and their sponsors first and foremost. This shift toward interactivity was necessary if television as a medium was to keep up with its competition. In the mid-1990s, commentators such as Negroponte (1995, p. 54) predicted a dire future for "passive old media" such as television broadcasting due to the impending explosion of "interactive new media" facilitated by the internet. Indeed, Gilder (1994, p. 189) rejected the idea that television could continue to exist alongside new media, claiming that "the computer industry is converging with the television industry in the same sense that the automobile converged with the horse." Although these warnings ultimately proved to be rather overstated, recent developments in media production arguably revisit these concerns and may even have made them ever-more pressing. Entertainment industry giants such as Netflix and Amazon (and soon Apple) now have not only the platforms to deliver content via their streaming websites but also the resources necessary to commission, produce, and market their content unilaterally.
Competition for audiences has perhaps never been more fierce and this has therefore arguably pushed television producers to seek new and innovative ways to keep their audiences engaged.
Moreover, audience participation can, of course, be monetized. These monetization processes take place overtly-in the first incarnations of Big Brother, for instance, viewers were charged for telephoning (and eventually texting) to vote for their favorite housemate. But monetization can also be rather more subtle thanks to the opportunities to generate advertising revenues within these shows, a process described by Deery (2004, p. 1) as "advertainment" in which "shows themselves act as marketing vehicles in addition to attracting audiences for spot advertisers." In the contemporary digital era, opportunities for advertainment have grown exponentially, thanks in part to the creation of mobile apps and the use of these apps as the exclusive medium through which audiences can participate in the co-production of shows' outcomes. Once they have downloaded a show's official app, users are soon confronted with a panoply of marketing for the show in question as well as its many commercial partners. But as Razaghpanah et al. (2018) have explained, mobile apps are also armed with the capacity to collect users' data, revealing their consumer preferences and habits, enabling more targeted advertising and, ultimately, greater potential for monetization. Though writing in the mid-2000s, Jenkins predicted the potential for greater commercial exploitation mediated through a merging of Reality TV and digital media. He proposed that when television begins converging with other forms of media such as the internet, "every important story gets told, every brand gets sold, and every consumer gets courted across multiple media platforms" (2006, p. 3).
This redefinition of traditional relationships and passive/active dichotomies is arguably best exemplified in a relatively recent example of Reality TV-Love Island. In particular, the extensive and deliberate use of social media made by producers and audiences has pushed audience participation in new and perhaps unexpected directions.
REALITY TV, AUTHENTICITY AND REALISM
Notions of realism and authenticity are at the heart of ongoing debates concerning Reality TV. Academic work has often discussed authenticity in Reality TV by examining the extent to which depictions of certain populations can be said to be authentic and representative of reality (Escoffery, 2006). But Hill (2014) has argued that rather than making claims to absolute authenticity, Reality TV overtly invites audiences to explore the fluid nature of realism, performance and identity. Jones (2003) has further argued that audiences are in fact aware that Reality TV is far from authentic but deliberately suspend disbelief in order to indulge in something of a "guilty pleasure." Similarly, Allen and Mendick (2013, p. 466) propose that rather than seeking a complete sense of realism, audiences in fact derive enjoyment from trying to distinguish the real from the false in Reality TV shows and that this "ambiguity provides space for pleasure." This may explain the considerable popularity of shows such as The Hills, Keeping up with the Kardashians and many others which are billed as Reality TV despite widespread acknowledgment that scenes are scripted and key events carefully choreographed (Woodward, 2018). However, others have argued that the promise of realism in Reality TV continues to represent a key appeal for audiences. Papacharissi and Mendelson's (2007, p. 363) research has shown that for audiences, "the more realistic reality TV programming was perceived to be, the greater the affinity viewers experienced, and vice versa." In a rather more abstract sense, Fetveit (1999, p. 798) has argued that Reality TV offers a symbolic connection to realism for its audiences and that "a powerful urge for a sense of contact with the real is inscribed in much of the reality TV footage." Further, Hill (2002, p. 324) has claimed that a perennial attraction for audiences of Reality TV is the potential to capture a "moment of authenticity" amongst contestants, as exemplified by the recurring use of devices such as "reveals" and "confessionals" in these shows.
These scholarly discussions are particularly pressing when considered against the long history of subterfuge in Reality TV programming. For instance, Anderson (1978, p. 14) has explored quiz show scandals in the US in the 1950s, as part of which producers supplied personable and well-liked contestants with answers in order to keep them on the air for as long as possible. Meanwhile, "hard cases, whiners, or smart alecks" were systematically filtered out of broadcasts (1978, p. 14), and quizzes ostensibly depicted as fairly rewarding the most intelligent contestants were shown to be doing anything but. In the UK, the mid-2000s saw its own Reality TV scandal when widespread fraudulent activities were uncovered concerning audience participation in quizzes and contests. This form of participation was found to have been subject to abuse and manipulation including inventing winners of prizes; faking the results of contests which had charged audiences for taking part; failing to count telephone and text votes due to technical errors; overcharging individual callers on premium telephone lines; and broadly misleading viewers as to the nature of the games and quizzes they were taking part in. The sheer scale of the scandal and the financial cost to viewers was unprecedented and labeled "the biggest fraud in UK TV history" (Deans, 2007). The aftermath of this scandal included record fines against broadcasters, high-profile resignations and calls for legal amendments to protect audiences from similar abuses in the future (Deans, 2007).
It may thus be argued that the legacy of such incidents is a lasting sense of broken trust amongst Reality TV audiences and an entirely justified skepticism as to claims of authenticity. Heritage (2019) has proposed that this shattered trust endures today and is exemplified by constant accusations of manipulation which emerge during broadcasts of major Reality TV programmes in the UK. However, Heritage also argues that rather than justifying claims of "fixes" in Reality TV, the scandal of 2007 has in truth led to much stricter regulation of such programming. Nevertheless, audiences' trust has been irrevocably damaged, and this paper will argue that this continuing distrust can be demonstrated by the activities of contemporary audiences in their ongoing search for authenticity. When audiences sense they have been duped, they react proactively and-thanks to social media-collectively.
LOVE ISLAND, SOCIAL MEDIA, AUDIENCE PARTICIPATION, AND MONETISATION
The scholarly work discussed above enables an appreciation and understanding of the role of Reality TV as a conduit for reinventing and re-envisioning the extent to which audiences can be engaged in co-production of content. But what these analyses perhaps lack is a recognition of the impact social media in particular would come to have upon the nature and extent of this participatory culture. Love Island's embracing of social media hints at the vast potential to further the increasingly symbiotic relationship between television and digital media as simultaneous sites of consumption. Love Island's merging of television broadcasting with social media demands an acknowledgment that audiences must be re-configured as occupying dual roles-that of television viewers and social media users. Moreover, Love Island also presents a case study in the potential for exploitation of new and emerging forms of media consumption and the opportunities for monetization this offers. Equally however (as will be argued later), Love Island may also demonstrate the unexpected consequences that may arise when "viewers-users" increasingly embrace this duality and all the potential it may offer.
Love Island is a British Reality TV dating show during which contestants spend 8 weeks in a villa in Spain. Contestants are tasked with "coupling up," meaning they must find a partner and avoid being "single" and consequently removed from the show. Single contestants are removed on a weekly basis following a so-called "re-coupling" ceremony during which contestants decide who they wish to "couple up" with. During the course of the show, contestants go on dates, take part in challenges and broadly interact in the villa under the constant gaze of a production crew filming their activities. Over the show's 8 week run, the Love Island audience are invited to take part in voting on a number of topics, some critical to the show's narrative and others rather more mundane. Examples include (but are not limited to) voting for: favorite/least favorite couple; which contestants should go on a date; which contestants should leave the show; and which contestants should receive various forms of preferential treatment. At the end of the show, the audience is ultimately tasked with voting for the winning couple from those to have made it to the final episode. Reality TV shows centered on the premise of dating and romance have a long history and indeed represent one of the most prolific genres of Reality TV in the past two decades. Shows such as The Bachelor, Beauty and the Geek, and Millionaire Matchmaker have achieved global success while others such as Dinner Date and First Dates have dedicated followings in the UK and abroad (Campelli, 2015). What all of these shows lack however is any conduit for participation and audiences are instead rigidly cast as viewers passively consuming content. Love Island has therefore taken the central premise of dating shows-that viewers can observe "ordinary" people in their search for love-but has coupled this with one of the most appealing aspects of Reality TV: the ability for the audience to shape content via active participation. Perhaps for this reason, Love Island has proven immensely popular, achieving consistently higher viewing figures and social media mentions than its rival programmes despite a relatively short run of just 8 weeks per year (Hallam, 2018; Waterson, 2018).
Love Island's producers have made little secret that generating audience engagement via social media has been a central aspect of their strategy. This approach seeks to elicit a feedback loop whereby television and social media content feed back onto each other in a cycle, driving audiences to engage with the show across multiple platforms (Lips, 2017). For Love Island's producers, enacting this strategy on a day-to-day basis has included: offering exclusive online content; using social media to post "first look" previews of upcoming television content; creating memes to share online via the show's official Twitter and Instagram accounts; utilizing polls and other games and quizzes online; sending notifications via the show's official app including 5 and 10 min pre-show alerts; making the app the only medium through which the audience may cast votes on the show; and creating a Love Island video game accessible through the app (Jones, 2018).
A key purpose of Love Island's strategy, and specifically the use of the app, is to act as a vehicle for the show's commercial interests. In its most recent series, Love Island's official commercial partners included Samsung, Superdrug, Rimmel London, Jet2Holidays, Missguided, Ministry of Sound, Kellogg's, Echo Falls, Primark, Lucozade Zero, and Thorpe Park (Scribe, 2018). It is perhaps the partnership with Missguided which best demonstrates Love Island's ability to push the boundaries of advertainment by using digital tools to exploit monetization opportunities. As part of its commercial partnership with Love Island in 2018, Missguided provided clothing for contestants to wear in the show. Audiences were then granted the opportunity to "shop the look" whereby they could buy the same outfits they saw contestants wearing. This process was mediated via the show's app which re-directed shoppers to Missguided's official website to complete their purchase (see Figure 1).
FIGURE 1 | Screenshot of the "shop the look" feature on Love Island's app (Cole, 2018). Used with copyright holder's permission.
This innovative strategy has been variously described as the "future of shopping" (Faramarzi, 2018), a "marketing masterpiece" (Tuite, 2018), and a "multi-channel triumph... (and) one of the best TV partnerships ever" (Cole, 2018). This and the rest of Love Island's commercial partnerships are designed to achieve more than simply consumer exploitation; they add another layer of interactivity in enabling audiences to undergo what might be understood as a wrap-around experience during their engagement with the show. Audiences can watch Love Island on television; discuss it online via social media; take part in polls and other games and quizzes with the official app; purchase official merchandise through the app or the show's official website; and even "shop the look" as described above. These interconnected services facilitate a short (insofar as engagement drops off once the show is over) but highly intense form of multiplatform engagement, maximizing audience participation and, by extension, potential for monetization (Gilliland, 2018).
But while these strategies offer new and innovative pathways toward audience engagement, the nature of participation in this context arguably remains limited to that dictated by the show's producers, such as casting votes and purchasing merchandise. In order to move beyond this structured and regulated form of participation, audiences have also engaged heavily across social media platforms such as Twitter, Snapchat, and Instagram. Twitter in particular appears to have emerged as the social media platform of choice for audience discussion and analysis. Hallam (2018) claims that in the week leading up to the finale of the show's fourth season in July 2018, Twitter accounted for over 81% of Love Island mentions across all major social media platforms. Further, at several points during the summer of 2018, Love Island-related content appeared as the UK's top trending topic on Twitter, often outstripping the football World Cup. Indeed, Love Island was the most talked about television show in the UK on Twitter in 2018, with 6.3 million tweets and 2.5 billion Twitter impressions (Kantar Media, 2018), representing more than twice as many tweets as its nearest rival-quite a feat for a show which runs for only 8 weeks of the year.
An inevitable consequence of encouraging audience interactivity is that audiences will not only interact with the show but also with one another. Social media has enabled the Love Island audience to form what Rath (1985) once described as "invisible electronic networks," demonstrated in the way fans of the show have congregated online and embraced social media as a platform for the development of a vibrant fan community. Rather than passively following Love Island's official Twitter account, audiences have shown signs of genuinely moving beyond traditional audience passivity through their engagement online. As well as interacting with one another using their own personal accounts, fans of the show have created bespoke, fan-led Love Island social media accounts which have enjoyed significant popularity. For instance, @LoveIslandUK, @LIReactions, and @LoveIslandReact have close to 70,000 Twitter followers between them while @loveislandreactions has over 414,000 Instagram followers. Throughout Love Island's 8 week run in 2018, each account provided real-time commentary during and after nightly television broadcasts, creating memes, referring to ongoing fan community jokes, offering comedic reflections on the show's content and more generally interacting with other users. For the Love Island audience, consuming the show across numerous platforms has become not only completely normalized but also a central part of their enjoyment. Indeed, Manavis (2018) has proposed that "for an hour a day, Love Island made Twitter a kind place to be," arguing that the show's friendly virtual community overcame the usually confrontational and toxic nature of social media. She claims that Love Island went so far as having "transformed the way we treat each other online" with discussions amongst Love Island fans being open and supportive. These reflections are supported by other users, with comments at the end of the show's most recent run capturing such positive feelings: "The actual best part of Love Island are the twitter conversations" (Amil, 2018); "The best thing about #loveisland was twitter tbh... You made my evenings entertaining for the past 2 months and I thank you for that" (Dun, 2018); "It has been such a pleasure connecting with people on Twitter over #LoveIsland... Thank you for making me smile, chortle, giggle & downright guffaw. For making me question things & for teaching me others" (Wozniak, 2018). The cumulative result of both Love Island's deliberate multiplatform strategy as well as its audience's pro-active engagement online has therefore been the emergence of a tightly bound, highly connected and digitally-confident fan community predominantly interacting via social media (Cavender, 2004). But the creation of an online community such as the Love Island audience may also lead to what Pierre Lévy (1997) once described as the creation of a "collective intelligence." Writing in the late 1990s, Lévy predicted that the rising computerization of society would "promote the construction of intelligent communities in which our social and cognitive potential can be mutually developed and enhanced" (1997, p. 17). He proposed that as more and more people congregate online and interact with one another, information would be pooled and online communities would share and develop new forms of knowledge. Since "no one knows everything [but] everyone knows something" (1997, p. 13), communities would collectively become more intelligent thanks to the connective powers of the internet. As will be proposed later, this process can arguably be witnessed in the activities of the Love Island fan community.
PARTICIPATION, INTERACTIVITY, AND SURVEILLANCE
Interactivity and participation run throughout contemporary surveillant relations (Lyon, 2018). While Orwellian notions of top-down surveillance carried out by powerful all-seeing actors are not completely obsolete, Lyon (2018) argues that these visions are dated. Instead, surveillance subjects are far less powerless and passive than Orwellian visions suggest and in fact, individuals and groups actively participate in so-called surveillance societies. They do so by offering up personal data every single day, habitually interacting with bodies which collect, process and share this personal data (Norris et al., 2017). Whether providing information concerning health records, employment experience, credit history, educational performance, or mundane everyday activities, it is commonplace for individuals to offer personal data to a vast range of public and private bodies. In the digital era, these practices have become so ubiquitous that it is virtually impossible to move through everyday life without engaging in multiple forms of participation with bodies composing the surveillant assemblage (Haggerty and Ericson, 2000). These habitual and pro-active forms of participation in surveillance practices need not necessarily be understood as inherently negative. Ball and Webster (2018) propose that different forms of participation may have vastly different outcomes and may be beneficial for some if, for instance, it makes them feel safer or offers them economic advantages.
The rapid growth of social media has not only facilitated a massive expansion of data collection practices by giants such as Facebook and Google, it has also created new opportunities for individuals to participate in the surveillance economy via modes of surveillance such as lateral surveillance and sousveillance (Mann et al., 2003; Andrejevic, 2004). Andrejevic (2004, p. 488) describes lateral surveillance as the function of "watching one another" and he explains that "lateral surveillance, or peer-to-peer monitoring (is) understood as the use of surveillance tools by individuals, rather than by agents of institutions public or private, to keep track of one another." Social media has undoubtedly accelerated and vastly expanded the ability to carry out such activities (Mann and Ferenbok, 2013). Research has found that users of social media readily acknowledge that a key purpose of their engagement with platforms such as Facebook is to watch others (Joinson, 2008) and enact forms of "social surveillance" (Marwick, 2012). The concept of sousveillance, which was introduced by Mann et al. (2003), is also salient here. Sousveillance builds on Mann's (1998) earlier notion of "reflectionism" as an example of individuals using technology to respond to surveillant power asymmetries. Sousveillance refers to bottom-up monitoring practices facilitated by the growth of affordable and accessible surveillance technologies such as mobile telephones and wearable computing devices. Mann et al. (2003, p. 331) describe sousveillance as a type of "inverse surveillance" essentially deployed "as a counter to organizational surveillance." This ability to hold more powerful actors to account is the central raison d'être of sousveillance and represents an attempt by less powerful individuals to exercise greater agency and redress power imbalances at the heart of the panoptic asymmetries which characterize everyday life.
The increasing availability and affordability of mobile devices alongside the rapid expansion of social media has helped expand sousveillance practices in recent years. While past high-profile examples of sousveillance such as the filming of Rodney King's beating at the hands of the LAPD were once isolated albeit sensational, examples of police brutality and abuses of power by State or other actors are now captured and disseminated by members of the public almost every day. Social media grants individuals the ability to disseminate such content widely and instantaneously, by-passing traditional media, and in doing so, avoiding the filtering or regulatory mechanisms which traditional media remain subject to (Spiller and L'Hoiry, forthcoming). Thanks to social media, sousveillance practices have become so influential that they have seen the rise of seminal social movements such as Black Lives Matter, a movement largely mediated through social media and fuelled by recurrent instances of police brutality digitally captured by the public and disseminated online (Taylor, 2016). As such movements grow, sousveillance offers the potential for the formation of "communities of resistance" (Fernback, 2013) which group together online to monitor the actions of powerful actors.
The following section proposes that the Love Island fan community, driven by Love Island's producers' own strategy of pushing its audience to consume the show online as much as on television, has formed a community of resistance of sorts, in order to challenge the show's perceived lack of realism and authenticity.
LOVE ISLAND 2018, SOUSVEILLANCE, AND CHALLENGING CLAIMS OF REALISM
As outlined above, contemporary entertainment consumption, with Reality TV a prime example, is mediated across several platforms including television, social media, and mobile apps. Love Island presents a contemporary case study of these processes and the multiple indicators of the show's success - from viewing figures to online mentions 2 - suggest that the above-described audience engagement strategy has worked. However, the following section proposes that in pursuing this strategy, producers have given the Love Island audience both the means and the appetite to enact sousveillance practices holding the show's producers to account. The means have been provided via the drive to create an online fan community, pushed by Love Island's strategic and heavy engagement with platforms such as Instagram, Twitter, and the show's mobile app-all forms of engagement which demand from the audience some basic level of digital competency in order to participate in crafting narratives in the show. The appetite comes in the form of the investment demanded of the audience which is implicit in their participation and engagement. As fans are encouraged to vote for their favorite contestants, to take part in quizzes, to discuss every major and minor controversy online, to use the show's official hashtag and to follow the show's official social media accounts, an inevitable sense of investment emerges for audiences. Of course, as discussed above, this investment can be commercially exploited, and Love Island has advanced to new heights the ways in which audience participation may be exponentially monetized. But with investment may come a sense of ownership for the Love Island fan community together with a collective responsibility to ensure that (mis)behaviors on the show are monitored and addressed. This is what appears to have taken place during Love Island 2018 and the following section offers a number of examples to demonstrate this.
2 Love Island attracted more viewers in 2018 than any of its previous series, with ∼4 million viewers watching the live finale (Waterson, 2018). The 2018 series also generated 2.5 billion Twitter impressions, outstripping the 2017 series, which reached 1.5 billion impressions (Lips, 2017).
The discussion below is based on a textual analysis of Twitter posts relating to Love Island in 2018. Both quantitative and qualitative analyses of social media content are well-established methodological approaches to explore public opinion and sentiment about a variety of topics (Thelwall et al., 2011; Marwick, 2013). Such methods take multiple forms (Pearce et al., 2018) and this paper deploys a qualitative approach. An initial manual observation of tweets concerning Love Island was carried out during the entire course of the show's run from 4 June 2018 to 30 July 2018 in order to observe the behaviors of users discussing the show online. This initial observation noted repeated accusations of manipulation against Love Island producers and staff centered on key incidents and/or individuals during the show. As a result, a second, more systematic analysis was undertaken focusing on these key incidents and individuals. At this point, the online software Mozdeh was deployed to capture all tweets using the hashtag #loveisland together with key word combinations relating to the incidents and individuals in question (i.e., names of contestants). These searches were time-limited to the day of the controversial incidents and the immediate 2 days following. The results of this search were then manually filtered to focus on users' discussions of the incidents and individuals in question. These tweets were manually analyzed and coded to explore the nature, tone, and sentiments of users' discussions. The primary data presented in this section is specifically selected to reflect the broader nature of the discussions relating to the incidents in question and to present the dominant feelings and sentiments among online discussions by Love Island's fan community.
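For illustration, the filtering step described above could be reproduced along the following lines once the captured tweets have been exported to a spreadsheet; the file name, the column names and the incident date are hypothetical placeholders, and this is a generic re-implementation of the same logic rather than Mozdeh's own interface.

import pandas as pd

# Hypothetical export of the captured tweets; column names are assumed.
tweets = pd.read_csv("loveisland_2018_tweets.csv", parse_dates=["created_at"])

incident_day = pd.Timestamp("2018-07-01")        # hypothetical date of a controversial incident
in_window = (tweets["created_at"] >= incident_day) & \
            (tweets["created_at"] < incident_day + pd.Timedelta(days=3))

keywords = ["georgia", "jack"]                   # contestant names linked to the incident
text = tweets["text"].str.lower()
relevant = tweets[in_window
                  & text.str.contains("#loveisland", na=False)
                  & text.str.contains("|".join(keywords), na=False)]

# The resulting subset would then be read, filtered and coded manually
# for the nature, tone and sentiment of the discussion.
print(len(relevant))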
Example 1-#Kissgate
One of the show's most explosive moments came when New Jack 3 and Georgia went on a date (Lewis, 2018). New Jack was at this time "coupled" with another contestant-Laura, seemingly one of Georgia's friends-but Georgia had nevertheless selected him for the date. Georgia was by this point well-known for relentlessly proclaiming her resolute loyalty to her friends. The date was therefore seen as a test of New Jack's relationship as well as Georgia's claims of loyalty. At the conclusion of the date, New Jack and Georgia appeared to kiss. But the controversy came in the fact that the camera angle made it difficult to determine whether the kiss had been mutual or whether Georgia had initiated the kiss before New Jack moved away. Upon their return to the villa, considerable and heated discussion took place between contestants about the incident, with both New Jack and Georgia proclaiming their respective innocence. In the aftermath of the show airing the date, footage of the kiss was analyzed relentlessly on social media by the Love Island audience with different supporters coming to different conclusions.
However, following several days of online debate about the veracity of New Jack and Georgia's actions, a user on Twitter released footage with accompanying analysis which appeared to show the incident from different camera angles. Rather than resolve the question of who initiated the kiss, the footage actually revealed that the kiss had been filmed on two separate occasions (see Figure 2).
FIGURE 2 | Screenshot of edited video footage showing the kiss was filmed twice (Shadijanova, 2018). Used with copyright holder's permission.
Since the incident had originally stemmed from what appeared to be a single moment in which two people clumsily kissed, this revelation undermined the authenticity of the whole controversy. Viewers had been led to believe by Love Island producers that the kiss was a momentary incident, caught on camera from an angle which left questions unanswered and the contestants' intentions up for debate. In fact, the revelation that the footage had been filmed twice showed that the footage of the kiss had been carefully edited and curated to maximize the controversy and to fuel debate amongst the audience. This example demonstrates that without the investigatory work of Love Island viewers and their ability to generate previously undiscovered knowledge by drawing upon a "collective intelligence" (Lévy, 1997), this incident would have been presented as something which it fundamentally was not. At a broader level, this example adds to the ongoing debate concerning the constructed nature of Reality TV and whether this form of entertainment can truly claim to be authentic (Escoffery, 2006).
Example 2-Samira and Frankie's Relationship
The way in which race is represented in Reality TV has been a long-running area of concern (Orbe, 2008; Squires, 2008). Love Island has appeared to continue this uneasy trend, with some commentators reflecting that non-white female contestants have rarely fared well on the show (Adegoke, 2018) and that black female contestants in Reality TV more broadly are often narrowly presented as the "angry black woman" (Dash, 2018). In Love Island 2018, the show's only black female contestant Samira appeared to be cast very quickly as being "unlucky in love." This narrative began as soon as she was not picked by male contestants to be in a couple during the show's opening episode and instead had to enter into a platonic and often awkward relationship with another contestant.
After several weeks, Samira's luck seemed to change with the arrival of new contestant Frankie. Samira and Frankie began a romantic relationship, but this lasted little more than 2 weeks before Frankie was voted out of the island. Frankie's removal prompted an emotional reaction from Samira, who lamented losing him when their relationship was blossoming (Gibb, 2018). This reaction led to considerable discussion amongst the Love Island audience online, some of whom seemed confused by the apparent depth of Samira's investment in the relationship when relatively little screen-time had been dedicated to showing what Samira appeared to believe was a promising romance: "Anyone watching love island for the first time would think Frankie had just died not left after a week of knowing Samira" (Bumby, 2018); "Not being funny but Samira crying unctrollably like someone dropped her best friend into a piranha tank." (Ebuwa, 2018). Before long, accusations began to surface that Love Island's producers had deliberately limited coverage of Samira and Frankie's relationship in order to maintain Samira's "unlucky in love" narrative. These allegations were further fuelled when it was revealed that Samira and Frankie had spent a night in the villa's hideaway-an action considered a big step forward in couples' relationships on the show-but this was edited out and never broadcast despite the fact that such encounters are routinely televised (Saunders, 2018). The Love Island fan community reacted furiously online, accusing producers of deliberate attempts to undermine Samira's relationship and even of racial prejudice: "How can you justify editing out 99% of Samira and Frankie, including their hideaway stay. Careful #loveisland your prejudice is showing" (Wilkinson, 2018); "samira and frankie went to the HIDEAWAY!! where was that!??! the producers did that girl DIRTY from the start and if you refuse to see that, check urself" (Lauren, 2018); "Nahh, Samira and Frankie had a night in the hideaway and it wasn't shown?? We didn't see 99% of their relationship and winning Love Island is RELIANT on public popularity. The producers made a conscious decision to sabotage them smh" (Avocaldo, 2018).
Shortly thereafter, Samira left the show of her own volition. In direct response to this perceived imbalance in the presentation of Samira and Frankie's relationship on the show, Love Island fans began to disseminate examples of the couple's relationship back in the UK, re-tweeting and liking photos and videos showing the couple enjoying the early stages of their romance. This collective reaction demonstrates the ability of the Love Island fan community to firstly call attention to the alleged misrepresentation of a contestant's narrative but also to then re-shape this narrative toward one which is more aligned to the audience's preferences. Based on a suspicion that they had been shown only a selective account of the couple's relations in the show, fans deployed counter measures to challenge the narrative ascribed to Samira by Love Island's producers and instead celebrated her relationship with Frankie whilst concurrently calling attention to the producers' apparent decision to reduce exposure of this romance. What is also perhaps demonstrated here is the depth of distrust amongst Reality TV audiences. Rather than this example representing only viewers' grievances concerning a lack of screen time for one of their favorite contestants, the accusations made in the tweets above demonstrate that for some fans, Samira's narrative was one rooted in racial prejudice amongst Love Island's producers. The severity of such accusations hints at a deep-seated distrust arguably grounded in the long-term legacy of producer subterfuge in Reality TV.
Example 3-Jack in Casa Amore and Dani's Reaction
A recurring dramatic device in Love Island is the moment when couples are split and male contestants spend a week in a different villa-Casa Amore-where they meet a host of new female contestants. Meanwhile, the female contestants in the main villa also meet new male contestants. The premise behind this exercise is to test the strength of existing relationships, and the week culminates in a dramatic ceremony during which contestants decide whether to return to their previous couples or form new relationships. In Love Island 2018, Jack and Dani were consistently seen as having the longest-running and strongest relationship in the villa and were firm fan favorites. However, when Jack arrived at Casa Amore, his (non-Love Island) ex-girlfriend Ellie was waiting, prompting an anxious reaction from Jack, who was evidently uneasy about her appearance. Jack made it clear on several occasions that his concern was that Dani would be worried about Ellie's presence. He went so far as to sleep outside throughout the week in order to make clear that he did not want to spend time with or sleep in the same room as Ellie (Taylor, 2018). However, later in the week, Love Island's producers showed Dani footage of Jack's behavior in Casa Amore. Omitted from this footage was his decision to sleep outside and instead, only his initial, anxious reaction was shown with little additional context, leading Dani to become visibly distressed. Dani's upset at what fans judged to be an out-of-context clip resulted in outcry online and accusations that producers had engaged in emotional manipulation to maximize the incident's dramatic effect: "What you're doing to this girl is psychological warfare Love Island... you're misleading her about Jack. That's dirty and cruel. #loveisland #mentaltorture #unfair" (CheyenneMonty, 2018); "Out of order.. switched off for the rest of the series.. you think it makes good tv but that's someone's emotions" (Allison, 2018); "I was really enjoying @LoveIsland like I forgot how toxic manipulative reality tv was, but that Dani stunt was so twisted I don't know if I can continue to watch a show that encourages the deterioration of someone's mental health. Love Island wants people to be mentally unwell." (Donnell, 2018). Importantly, this discontent went beyond fans simply complaining to one another online. Instead, they mobilized and took proactive steps by submitting formal complaints to the UK's regulatory authority for broadcasting, the Office of Communications-better known as Ofcom. Over 2,500 complaints were filed with Ofcom for this incident alone, representing one of the most complained about single incidents on television in 2018 4. Indeed, despite a relatively short run of 8 weeks, Love Island attracted the fourth highest number of complaints in 2018 5 (Ofcom, 2018). The behavior of the Love Island audience in this example, and indeed in the numerous other instances in which complaints were made to Ofcom during Love Island's 2018 run, demonstrates how collective outrage about perceived misbehavior by the show's producers can go beyond passive audience dissatisfaction and, fuelled by the collective power generated by being members of a community of resistance, fans can and will proactively hold producers to account. Comparing the number of Ofcom complaints in 2018 with those of previous Love Island series is also illustrative. In 2016, the show received only 40 complaints while the 2017 edition received 135 (Corrodus, 2018).
The considerable rise in such complaints in 2018 may therefore suggest not only an increase in viewership but also a mounting sense of agency amongst Love Island's audience amidst the growing collectivism of the show's fan community.
Example 4-Questioning the Missing Challenges
A key feature of Love Island is the series of challenges contestants must complete throughout the show's run. These challenges often feature as the highlights of each series and are eagerly anticipated by the Love Island audience. However, with the 2018 series nearing its conclusion, Love Island fans began to vent their frustrations online that some of the best-loved challenges had not taken place: "where's the parents? the lie detector? the babies? the guess who said that about ya game? give us more challenges i'm boredddd of them sat around" (Cousins, 2018) "The whole of the UK waiting for the lie detector test, baby challenge and meet the parents to happen" (Jasmin, 2018) "Honestly it's so close to the end... why bring new ppl... let's do the call home, then lie detector, give them the babies, then meet the parents and other challenges... we don't need no more new ppl at this point" (Mrsd, 2018) Whether by design or otherwise, in Love Island 2018, the final week of the show featured all the challenges fans had been demanding online. In previous series, challenges usually took place around once a week, so such a concentration of the show's most popular challenges in the final week of the 2018 series certainly appeared highly unusual. There is a suggestion here that Love Island's producers responded directly to fans' dissatisfaction at the absence of these challenges. If this is true, it proposes a fascinating example of the way in which audiences can shape the activities contestants take part in seemingly through sheer force of will. Rather than contributing to the show's content within parameters defined by producers (i.e., taking part in scheduled votes with only certain contestants to choose from), fans are breaking beyond these constraints and making their own demands. The Love Island fan community's collective power harnessed online appears to be forcing producers into producing content on demand, hinting at a broader shift in the power relations between producers and consumers.
Whether the examples above can truly be said to demonstrate sousveillant practices may be up for debate. Love Island's fans are certainly not seeking to counter organizational surveillance in order to destroy these systems. Indeed, it seems clear that despite their complaints, Love Island's fans do not want the show to fail, as demonstrated by the strong viewing figures for the show's finale which suggest that audiences keep watching even after uncovering staged incidents earlier in the series. Love Island fans do however demonstrate motivations closely aligned to sousveillant activities insofar as they are attempting to redress asymmetries of power, specifically the power of producers to engineer incidents to maximize dramatic content and the audience's previous inability to directly challenge such content. The example of #kissgate is aligned to traditional sousveillance methods in the sense that new footage was generated to counter a dominant narrative. The other examples use tactics perhaps not usually associated with sousveillant practices, but all are designed to challenge producers when their behaviors stray from the collective values and ideals of the Love Island fan community.
At the heart of these examples is a search for authenticity among Love Island's fans, reflecting a continuing longing for realism amongst Reality TV audiences whose trust in this genre has been broken by a legacy of duplicitous behavior (Heritage, 2019). When events appear staged or producers are suspected of skewing the veracity of the content presented to the audience, fans enact their collective power to challenge these false narratives. The question, however, is whether this truly represents a problem for Love Island's producers. Whether criticizing or praising producers, Love Island audiences discussing the show online will still use the Love Island hashtag and that alone may be enough to count as a success for Love Island's producers and their sponsors. The old adage that there's no such thing as bad publicity may be true, and the record-breaking commercial partnerships already agreed ahead of the 2019 edition of the show (Sweeney, 2019) suggest Love Island's producers have little to be concerned about-in the short term at least.
In the longer term, however, as the examples of perceived subterfuge by producers outlined above add up, they become part of a longer history of deception in Reality TV and continue to chip away at any residual belief that Reality TV offers genuine authenticity and realism. For the audience, ongoing accusations that outcomes are pre-determined or fixed, and that controversial incidents are faked and stage-managed may, over time, undermine the attraction of shows like Love Island. But perhaps more than anything, what the actions of the Love Island fan community reveal is an enduring tension for all Reality TV audiences: between a desire for authenticity and an acceptance of (or acquiescence to) the fact that Reality TV is ultimately staged to some extent. Love Island fans enjoy the show and want to keep watching; hence, despite their frustration at various duplicitous behaviors by producers, viewing figures remain high even after accusations of racism, emotional manipulation, and staged controversies (Waterson, 2018). Critically, however, rather than switching off altogether, Love Island fans instead seek to consume the show on their own terms, binding together to recraft narratives when the content presented to them by producers does not meet with the audience's approval.
CONCLUSION
As television producers continue to push for consumption of their products across multiple platforms, audiences are finding new ways to consume and engage with popular entertainment such as Reality TV. The potential for commercial exploitation and monetization of audience participation in this context is limitless and it is this potential which is driving television producers and their sponsors to encourage audiences to move beyond passive consumption of fixed content. Instead, they seek to co-opt their audiences as co-producers who discuss, analyze, and commercially engage with a show before, during, and after its live broadcast. While the commercial rewards are significant and growing, these developments inevitably push audiences toward greater investment in these shows and a vested interest in monitoring outcomes relative to the audience's own desires.
These developments have correlated with the growth of social media in the past 15 years which has brought with it a greater potential for individual power and agency thanks to the communicative and connective capacity of different social media platforms. The ability of individuals to more easily and instantaneously connect online has engendered a sense of collective power, particularly when these individuals can identify and group together around a shared interest such as fandom of a television show. As Cover explains: A digital environment promoting interactivity has fostered a greater capacity and a greater interest by audiences to change, alter and manipulate a text or a textual narrative, to seek coparticipation in authorship, and to thus redefine the traditional author-text-audience relationship (Cover, 2006, p. 140).
As online communities come together, discuss shared interests and pool their knowledge, a "collective intelligence" (Lévy, 1997) begins to emerge which may be used to monitor the behaviors of others. The continuing rise of lateral surveillance and sousveillance practices demonstrates how this "collective intelligence" can be re-purposed within the work of virtual "communities of resistance" (Fernback, 2013). The behavior of Love Island's fans in 2018 demonstrates these practices in the context of Reality TV. Fans of the show mobilized online and on multiple occasions took pro-active steps, emboldened by the collective spirit of these online communities, to hold Love Island's producers to account for perceived misbehaviors and the undercutting of the show's claims to authenticity. The question remains what the impact of such audience behaviors is: do these behaviors in fact offer Love Island greater exposure, and are they therefore welcomed by its producers regardless of whether fan reactions are positive or critical? Or, by relentlessly driving their audience toward online consumption of the show, have Love Island's producers created a monster whose behaviors they can no longer predict or control? In an age of post-truth politics, "alternative facts" and "fake news," if fans continue to feel duped as they reveal instances of staged controversies or deliberate manipulation of contestants, might they become so disenchanted by a lack of authenticity and realism that, in the end, they switch the channel?
DATA AVAILABILITY
The raw data supporting the conclusions of this manuscript will be made available by the authors, without undue reservation, to any qualified researcher.
Augmenting Azoles with Drug Synergy to Expand the Antifungal Toolbox
Fungal infections impact the lives of at least 12 million people every year, killing over 1.5 million. Wide-spread use of fungicides and prophylactic antifungal therapy have driven resistance in many serious fungal pathogens, and there is an urgent need to expand the current antifungal arsenal. Recent research has focused on improving azoles, our most successful class of antifungals, by looking for synergistic interactions with secondary compounds. Synergists can co-operate with azoles by targeting steps in related pathways, or they may act on mechanisms related to resistance such as active efflux or on totally disparate pathways or processes. A variety of sources of potential synergists have been explored, including pre-existing antimicrobials, pharmaceuticals approved for other uses, bioactive natural compounds and phytochemicals, and novel synthetic compounds. Synergy can successfully widen the antifungal spectrum, decrease inhibitory dosages, reduce toxicity, and prevent the development of resistance. This review highlights the diversity of mechanisms that have been exploited for the purposes of azole synergy and demonstrates that synergy remains a promising approach for meeting the urgent need for novel antifungal strategies.
The Burden of Fungal Disease
Fungal pathogens present an ever-increasing threat to global health. An estimated 1.5 million people are killed by fungal infections every year, and the incidence of several serious mycoses is growing [1,2]. It is likely that the global fungal burden is underestimated, as several invasive fungal infections are under-reported in the developed world due to their association with other predisposing illnesses [3][4][5]. Australian clinics have recently seen a near-doubling of systemic candidaemia caused by drug-resistant Candida glabrata, with increasing rates of invasive candidaemia seen in Europe and the USA [6][7][8]. Candida auris, an emerging yeast pathogen infamous for its high tolerance to most important antifungals, has been the cause of several recent outbreaks, both before and during the COVID-19 pandemic [9][10][11][12]. Drug-resistant biofilms of various species of Candida have become increasingly responsible for fatal nosocomial infections [13]. Lethal infections with Aspergillus fumigatus and Cryptococcus sp. also remain a pressing concern, together causing an estimated 400,000 deaths per year, with chronic pulmonary aspergillosis severely affecting close to 3 million people [2]. Morbidity of non-life-threatening topical fungal infections is also increasing at an alarming rate. Cutaneous dermatophytosis affects 12 to 13 million people a year, and nail infections are extraordinarily difficult to treat, with less than 13% of cases fully resolved [14,15]. Decreasing susceptibility to topical antifungals has been observed in Exophiala dermatitidis and Malassezia sp. [15][16][17][18].
The increased incidence of emerging systemic, superficial, and cutaneous fungal pathogens has increased the demand for novel antifungal medications. However, despite the advances over the past four decades in bringing azoles and echinocandins to market, the currently available antifungals still operate via a limited number of mechanisms (Figure 1).

Figure 1.
Echinocandins operate by inhibiting 1,3-β-D-glucan synthase in the fungal membrane, depriving the cell wall of glucans and therefore its structural integrity [46]. (c) Azoles inhibit Erg11 (lanosterol 14α-demethylase), preventing the biosynthesis of ergosterol and resulting in a build-up of toxic methyl-sterols that incorporate into the plasma membrane. The result is a loss of membrane structure and inhibition of growth [47]. (d) Allylamines inhibit ergosterol biosynthesis by antagonising squalene epoxidase Erg1, which converts squalene to squalene epoxide. As well as preventing the biosynthesis of ergosterol, this results in the build-up of squalene, which is deposited into lipid vesicles that disrupt the plasma membrane [48]. (e) Griseofulvin binds to tubulin in the fungal cell, preventing the formation of microtubules and arresting mitosis [49]. (f) 5-flucytosine is a pyrimidine analogue that is converted to 5-fluorouracil inside the cell. This fluoridated nucleotide is incorporated into mRNA, halting ribosomal processing and inhibiting protein translation. 5-fluorouracil also antagonises Cdc21, or thymidylate synthase, preventing the biosynthesis of thymidine nucleotides and inhibiting DNA synthesis [50].

There are various mechanisms of resistance to azole antifungals. Mutations in ERG11 (also known as CYP51), which encodes the target enzyme lanosterol 14α-demethylase, can prevent enzyme binding, or ERG11 expression can be up-regulated via changes to its promoter or regulating transcription factors or via gene duplication and aneuploidy [51]. Azoles can be actively excluded from the cell via membrane bound ABC and MFS efflux pumps [52][53][54], which can be increased in expression via aneuploidy or by alterations to transcription factors leading to constitutive expression. Mutations in ERG3 have also been found to increase resistance, thought to be via prevention of the build-up of toxic intermediates [55][56][57][58][59][60][61]. Frequently, these resistance mechanisms lead to cross-resistance across azole drug types, and although the research community is working to derive new azole antifungals by modelling lanosterol 14α-demethylase crystal structures [62][63][64], the increased resistance seen in clinical strains can undermine their use as a monotherapy.
Antimicrobial Synergy
Recent expiries of patent protection for early-generation azoles have led to generic alternatives to voriconazole, fluconazole, posaconazole and efinaconazole entering the market and have made space for innovations. In particular, there is increasing interest in enhancing the antifungal activity of current azoles using drug synergy.
Synergy occurs when two compounds produce an increased inhibitory effect beyond what would be expected by adding the effects of the compounds individually [92]. Significant synergy is determined by the Fractional Inhibitory Concentration Index (FICI), which is calculated as the sum of the ratios of the Minimum Inhibitory Concentration (MIC) of each drug when used in combination to its MIC when used alone, according to the following equation:

FICI = (MIC of drug A in combination / MIC of drug A alone) + (MIC of drug B in combination / MIC of drug B alone)

When the FICI of two drugs is ≤0.5, their interaction is considered synergistic, and when it is >4, it is considered antagonistic. An FICI between 0.5 and 4 is considered indifferent [92].
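To make the FICI arithmetic concrete, the short Python sketch below calculates the index and applies the interpretation thresholds quoted above. This is an illustration only: the function names are arbitrary and the MIC values in the worked example are hypothetical, not drawn from any study cited in this review.

def fici(mic_a_alone, mic_b_alone, mic_a_combo, mic_b_combo):
    # Fractional Inhibitory Concentration Index for a two-drug checkerboard assay.
    return (mic_a_combo / mic_a_alone) + (mic_b_combo / mic_b_alone)

def interpret(value):
    # Thresholds follow the convention cited in the text [92].
    if value <= 0.5:
        return "synergistic"
    if value > 4:
        return "antagonistic"
    return "indifferent"

# Hypothetical example: drug A's MIC falls from 64 to 8 µg/mL and drug B's MIC
# from 16 to 2 µg/mL when the two drugs are combined.
value = fici(64, 16, 8, 2)
print(round(value, 3), interpret(value))  # prints: 0.25 synergistic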
Antimicrobial synergy can overcome resistance and lower inhibitory dosages to within clinically achievable levels [93,94]. Synergy can expand the spectra of activity of the individual drugs, making azoles a viable option against pathogens that may be otherwise resistant. Antimicrobial synergy can also make otherwise fungistatic drugs fungicidal, including many azoles [95,96].
Antimicrobial synergy has been exploited in the clinic for years for the treatment of HIV and malaria [97,98], and the most successful induction therapy for cryptococcal meningitis is a synergistic combination of amphotericin B and 5-flucytosine. Synergy between two drugs can be potentiated by their co-operation on multiple enzymes belonging to the same pathway; for example, sulfamethoxazole and trimethoprim are antibacterial agents that target the folic acid biosynthesis pathway at two different sites to synergistically improve both the spectrum and potency of bacterial inhibition [99]. Synergy can also be produced between drugs that inhibit different processes: for example, amphotericin B damages the fungal plasma membrane and 5-flucytosine interrupts translation and replication [100,101].
Aims and Scope of This Review
The mechanism of action of azoles is well understood ( Figure 1) and various synergists have been described that operate on related ( Figure 2) or quite disparate ( Figure 3) pathways or mechanisms. In this review, we describe recent studies that have reported azole synergists. There are several excellent earlier reviews that have considered aspects of azole synergy [52, 102,103]; here we provide an update with a particular focus on developments made over the past six years. Table 2 provides an overview of these findings presented as a heatmap, with the extent of synergy demonstrated between commonly used azoles and secondary agents in blue, and the spectrum of fungi found to be affected in yellow. Below we explore each of the different classes of synergists and their potential as combination therapies.
Synergy between Azoles and Currently Available Antimicrobials
Existing antimicrobial pharmaceuticals that have been proven safe or tolerable for humans and have approval from regulatory bodies such as the FDA make an attractive starting point for antifungal synergy. Theoretically, combining antimicrobials might broaden their activity spectrum to include pathogens that are not susceptible to either drug as a monotherapy [261]. Although combining azoles with other classes of antifungals could be expected to often give synergy, certain antibacterial, antiparasitic and antiviral drugs have also been shown to interact synergistically.
Azole-Antifungal Synergy
Numerous antifungals, including azoles, allylamines and morpholines, target different points of the ergosterol biosynthesis pathway, as summarised in Figure 2. Given that the mechanisms of action of these drugs are often closely related, they are prime targets for investigation as potential azole synergists. Terbinafine is an allylamine that has been investigated across a broad array of azoles and various fungal and oomycete pathogens and was found to synergise with some systemic azoles (Table 2), particularly against azole-resistant C. albicans isolates. A combined treatment strategy using terbinafine and fluconazole was also shown to resolve persistent oropharyngeal thrush in a clinical setting [104][105][106]. For other species of Candida, however, terbinafine-azole synergy is weaker and only seen in azole-sensitive isolates, although the combination shifts inhibition from fungistatic to fungicidal [107]. Terbinafine and azoles have proven effective against clinical isolates of Scedosporium prolificans, a hard-to-treat pathogen of the lungs, sinuses and brain, at clinically achievable concentrations [108,109]. For other pathogens, terbinafine-azole synergy is narrow in spectrum, with only a very few isolates of Aspergillus, azole-resistant non-albicans Candida species, azole-resistant dermatophytes and Pythium insidiosum, (a fungus-like oomycete that is the cause of often-fatal pythiosis) affected [106,107,110,111,262,263].
Antifungals from the echinocandin class, which act by weakening the cell wall, were considered potentially attractive as azole synergists when they first became available; however, for most combinations, the pairs either synergise strongly but in a select subset of isolates, or work across a global spectrum of isolates but with only weakly synergistic interactions (Table 2) [113][114][115][116][117]. One particularly promising pair is micafungin and voriconazole, which strongly inhibits azole-and echinocandin-resistant Candida auris, bringing the required dosages of both to well within clinically achievable levels [115]. In a recent survey, anidulafungin, caspofungin and micafungin all displayed strong synergy with isavuconazole in C. auris [118]. Given the extreme level of resistance to azoles demonstrated by many C. auris isolates and the threat of emerging resistance to echinocandins, which are currently the front-line antifungal for C. auris infection, these combinations may warrant immediate clinical application [264][265][266].
Broadly speaking, most other market antifungals have demonstrated weak or no synergy with azoles. The topical polyene natamycin weakly synergises with voriconazole against a narrow range of azole-susceptible pathogens, while ciclopirox synergises strongly with itraconazole but only in a small number of dermatophytes [119,120]. Flucytosine and even voriconazole have been tested with other azoles but were indifferent or only weakly synergistic for various fungal species, including C. auris and pathogens in the order Mucorales [121,122]. Amorolfine, however, which acts on the ergosterol biosynthesis pathway subsequent to azoles (Figure 2), displayed more promise, synergising with systemic and topical azoles against dermatophytes in vitro, and demonstrating efficacy in an open randomised clinical trial of notoriously refractory onychomycosis [123][124][125]. While promising, this combined treatment strategy for topical fungal infections is yet to make it to market.
A few recently developed novel antifungals that are not yet available commercially appear promising as azole synergists. K20, a novel fungus-specific amphiphilic aminoglycoside that inhibits Fusarium spp. and a variety of yeast pathogens, displayed strong synergy with a wide variety of systemic azoles for almost all Candida isolates tested [127,267].
Oxadiazole-tagged macrocyclic peptides, which are capable of inhibiting azole-resistant C. glabrata and C. tropicalis strains also interacted synergistically with fluconazole, but with less synergy and a narrower spectrum of activity than K20 (Table 2) [128].
Figure 2.
Proposed mechanism of synergy between azoles and inhibitors that operate on the ergosterol biosynthesis and mevalonate pathways. In fungi, the synthesis of ergosterol occurs primarily in the endoplasmic reticulum, with the final product packaged into vesicles to be incorporated into the membrane [268]. Azole drugs inhibit Erg11, preventing lanosterol from being converted into dimethyltrienol and leading to the build-up of toxic methyl sterols. These are incorporated into the membrane instead of mature ergosterol, causing a loss of membrane structure and an inability to divide and resulting in the fungistatic arrest of growth [47]. Synergistic inhibitors co-operate at points up-and down-stream of Erg11, increasing the generation of toxic ergosterol precursors and other terpene-derived metabolites. The mevalonate pathway, upstream from the ergosterol biosynthesis pathway, is responsible for the biosynthesis of squalene, a precursor to all fungal membrane sterols. Statins like atorvastatin inhibit the HMG-CoA reductases Hmg1 and Hmg2, which are responsible for the production of mevalonate from HMG-CoA [269]. Further downstream, bisphosphonates like zoledronate inhibit farnesyl pyrophosphate synthase, or Erg20, which catalyses the production of farnesyl pyrophosphate from dimethylallyl pyrophosphate [270]. In the ergosterol biosynthesis pathway, squalene is converted into squalene epoxide by Erg1, a squalene epoxidase, which can be inhibited by allylamines like terbinafine [271]. Downstream from Erg11, dimethyltrienol is converted again into dimethylzymosterol by Erg24, a sterol reductase that is inhibited by morpholine antifungals such as amorolfine [47,272]. The resulting destabilisation of the cell membrane means synergy can often produce a fungicidal effect in the pathogen.
Azole-Antibacterial Synergy
Among the antibacterials, the sulfa-based drugs have shown the greatest potential to date for azole synergy (Table 2). These inhibit folic acid biosynthesis and several, including the widely available antibacterial sulfamethoxazole, have displayed strong synergy with azoles in most azole-resistant Candida isolates, including isolates of C. auris [129,130]. Tetracycline-based antibacterial agents have demonstrated synergy with azoles specifically against azole-resistant pathogens (Table 2). Doxycycline, tigecycline and minocycline could all potentially be used to improve therapy with azoles for persistent cases of candidiasis, aspergillosis and fusariosis [131][132][133][134][135][136][137]. Minocycline in particular has recently demonstrated potential as an itraconazole synergist in C. neoformans and Scedosporium sp. [138,139]. This synergy has been consistently reproduced in in vivo infection models and against sessile forms of Candida, with some demonstrating efficacy against biofilms. Colistin is a last-resort antibiotic that has recently displayed promising synergy in vitro with isavuconazole in Aspergillus spp. and Candida auris, and in vivo in Candida albicans [93,96,143,144,252,253].
Other antibacterial compounds have failed to display any real potential as combined antifungal therapies. Gentamicin was synergistic with fluconazole in biofilms produced by some resistant Candida species. Linezolid failed to produce true synergy with any azoles tested but it did reduce the required dosage of both drugs to a clinically achievable level for a limited spectrum of fungi (Table 2) [140,141]. While reducing the dose is one of the main goals when developing novel combination treatments, in the wake of more viable therapeutic leads it seems unlikely that drugs like linezolid will see further development. Furthermore, the in vivo use of voriconazole and clarithromycin, another antibacterial found to be synergistic in vitro, was found to cause acute kidney damage, illustrating the potential for undesirable consequences when exploiting antimicrobial synergy [273].
Azole-Antiparasitic Synergy
Only a limited number of antiparasitic compounds have been explored for azole synergy in recent years, but some of them show promise. Chloroquine and artemisinin interacted synergistically with azoles against azole-resistant Candida strains, while pyrvinium pamoate-azole synergy was observed in a broad suite of dermatophytes [145][146][147][148]. Mefloquine and related compounds displayed limited synergy in C. neoformans, but did potentiate a strong fungicidal effect when combined with fungistatic fluconazole [149].
Azole-Antiviral Synergy
Antivirals have recently been investigated as potential antifungal synergists, with the anti-retrovirals showing the most promise. Saquinavir and ritonavir were effective at synergistically cooperating with azoles against Histoplasma, a systemic fungal pathogen [150]. Other antivirals like ribavirin and 2-adamantanine have shown promise at treating biofilms of azole-resistant C. albicans and potentiating the antifungal activity of azoles from fungistatic to fungicidal, respectively [151,152]. Lopinavir is an antiviral that shows extremely strong synergy with voriconazole in a majority of azole-resistant C. auris strains, and certainly warrants further investigation [153].
Active Efflux Modulators
The active, ATP-dependent transport of antimicrobial compounds out of the fungal cell is one of the most concerning mechanisms of antifungal resistance emerging today. It is the principal mechanism of azole tolerance in C. glabrata and in a significant portion of C. auris and azole-resistant C. albicans isolates [44, 274,275]. There is also evidence that active efflux is responsible for azole resistance in some Aspergillus species and dermatophytes [276,277]. Due to the importance of pumps for resistance in bacterial pathogens and malarial parasites, inhibition of active efflux is a popular target in antimicrobial discovery and the development of combined antifungal treatments [278,279].
Most efflux pump inhibitors that have been investigated as azole synergists interact directly with the membrane-bound pump. Several drugs that affect the intracellular homeostasis of calcium by blocking calcium channels have been found to have some affinity for membrane-bound transporters, particularly tetrandrine and verapamil. Traditionally used as immunomodulators and vasodilators, these have produced antifungal synergy with a variety of azoles. Tetrandrine combined with posaconazole has been proposed as an effective solution for persistent and temporary candidiasis [154][155][156][157][158][159][280]. Eucalyptal D and dodecenoic acid are essential oil extracts that have been shown to directly antagonise ABC transporters [160,161]. These exhibited strong synergy with fluconazole and itraconazole, respectively, in azole-resistant Candida. Azoffluxin, an oxindole Cdr1 inhibitor, has been shown to synergise strongly with fluconazole in all non-clade III C. auris and azole-resistant C. albicans strains, both in vitro and in vivo [162]. Other receptor antagonists may be cross-reactive to membrane-bound transporters; for example, ospemifene is a promising therapeutic lead that has a broad spectrum of activity and synergises very potently with fluconazole (Figure 3g) [159,163].
Several efflux inhibitors that show promising synergy with azoles do not directly bind pump proteins but interfere with efflux through other mechanisms. Palmarumycin P3 and phialocephalarin B are naturally occurring quinone derivatives that synergise with fluconazole in pump-dependent, azole-resistant C. albicans. It appears that the mechanism of synergy is due to the ability of these derivatives to directly inhibit nuclear transcription factors, modulating the expression of the principal pump-coding gene MDR1 [164]. Geraniol is a unique synergist that causes the localisation of pumps to become dysregulated, preventing them from being incorporated into the membrane and resulting in weak synergy [165]. The controversial anti-cancer drug ponatinib is able to inhibit active efflux in multiple yeast pathogens by interfering with the proton motive force, producing strong synergy with fluconazole in all strains tested [196]. Other potential co-drugs like cationic triphenylphosphonium have been shown to improve the activity of efflux pump inhibitors, suggesting the potential development of a triple-drug antifungal treatment strategy [281].
Many other compounds currently being investigated for their ability to synergise with azoles promote the downstream inhibition of efflux, despite no direct action on the pumps themselves or their expression. Haloperidol and promethazine are repurposed drugs that have displayed the ability to modulate active efflux. Both show great promise as azole adjuvants for the treatment of cutaneous mycoses and dandruff [18,173,174]. Thymol and carvacrol are terpenoids extracted from thyme leaves that interact synergistically with azoles and inhibit efflux in azole-resistant Candida, including C. auris [211,212,282]. Numerous other natural products with synergistic potential have displayed similar indirect anti-efflux properties [203,211,223,227,233].
It should be noted that fungal drug efflux pumps have an extremely broad spectrum of substrates that they transport [283]. Given this, many of the compounds discussed in this review may be substrates of efflux pumps that compete with azoles, preventing the transport of the azole and prolonging its effect. This competition may contribute to the synergy observed between some drug pairs where the underlying mechanism is currently unknown.
Repurposing Other Pharmaceuticals
Repurposing existing drugs can short-cut the process of drug development and regulatory approval. In recent years, a popular approach in drug discovery has been to screen libraries of existing compounds for novel repurposed uses, including antifungal activity. The resulting "hits" may have completely different mechanisms of action to any other currently available antifungal, opening new avenues for drug design. Many of these repurposed antimicrobials are also tested for their capacity as antifungal adjuvants [284,285].
Statins
Statins are common anti-cholesterol medications and until recently were considered one of the most promising routes for antifungal discovery. Statins operate on HMG-CoA reductase in the mevalonate pathway, upstream from the azole-targeted demethylases. As shown in Table 2, however, synergy between azoles and statins is often minor or bordering on indifferent. In addition, while it might appear that some statin-azole pairings have a decent spectrum of activity, most studies have tested only 1-3 strains per species. Their applicability to a wide variety of mycoses is therefore difficult to gauge [166][167][168][169][170]. Where synergy has been observed, the mechanism was shown to be primarily driven by the co-operation of the drug pairs on the same pathway (Figure 2) [286]. Newer statins such as pitavastatin have been found to exhibit extremely strong antifungal synergy with fluconazole in azole-resistant Candida, which may reignite future interest [171]. Statins can unfortunately have adverse off-target effects and drug interactions that range from unpleasant to lethal, especially for already vulnerable mycosis patients. These include diabetes, liver cirrhosis, irreparable damage to skeletal muscle and sexual dysfunction [287,288].
Bisphosphonates
Bisphosphonates are anti-osteoporosis drugs and show promising synergy with fluconazole. Like statins, bisphosphonates operate on the mevalonate pathway, where they target farnesyl pyrophosphate synthase (or Erg20; Figure 2) [289]. Of the bisphosphonates tested, zoledronate resulted in strong synergy across numerous strains of Cryptococcus tested in vitro and in an in vivo model and significantly limited the development of antifungal resistance [172]. Due to their propensity to bind to bone mineral and their implication in osteonecrosis, market bisphosphonates have limited applicability for the treatment of invasive mycoses [289,290]; however, their antifungal synergy and potent immunostimulatory properties make them attractive lead compounds for further development [291].
Repurposing Miscellaneous Pharmaceuticals
Although most fail to produce strong, broad-spectrum synergy and may have undesirable anti-inflammatory or immunosuppressive effects, certain immunomodulators could be useful as lead compounds for development as synergists to treat Candida biofilms [175][176][177][178]292]. The antihistamine promethazine appears potentially attractive as a novel topical anti-dandruff and anti-tinea treatment, as it synergised strongly with azoles in all strains of dermatophytes tested [18,173,174].
Some psychoactive drugs have displayed limited synergy with azoles along a narrow spectrum of activity. Bromperidol and fluoxetine show limited potential as azole adjuvants for the treatment of candidiasis, while the commonly prescribed antidepressant sertraline synergised weakly with azoles in the opportunistic yeast Trichosporon [179][180][181].
Haloperidol is an antipsychotic that may be more promising as a topical combined treatment due to its strong synergy with fluconazole and itraconazole in many strains of dermatophytes [18,173].
Two of the most attractive groups of compounds for developing new synergists are calcineurin inhibitors and calcium channel blockers. These drugs modulate calcium ion homeostasis, which is vital for cellular signalling, and have been considered promising antifungal leads for a variety of diverse pathogens for more than a decade [184][185][186]293]. Calcium channel blockers have also been shown to further enhance the synergy between fluconazole and doxycycline in a series of three-way checkerboards [131]. The calcium channel blockers tetrandrine and verapamil are discussed above in the context of their role in efflux, but their ability to disturb calcium homeostasis is likely also part of their antifungal effect. Other inhibitors like cyclosporine and tacrolimus target calcineurin and calmodulin, a calcium-activated complex responsible for the up-regulation of genes related to growth and the fungal stress response (Figure 3c) [18,[182][183][184][185]187,294]. Tacrolimus specifically produces significant synergy for the majority of dermatophyte species, while cyclosporine reliably and potently interacts with fluconazole in C. albicans [183,188,189]. It should also be noted that calcineurin inhibitors like tacrolimus directly affect the ATPase activity of efflux pumps in addition to acting on calcineurin, thereby both directly and indirectly inhibiting active efflux [295].
The growing drug repurposing initiative has yielded synergists with novel mechanisms of action with significant therapeutic potential, as illustrated in Figure 3. Fungal membrane proton pumps are vital enzymes, generating the membrane potential required for membrane-bound transporter function and the uptake of nutrients required for ATP synthesis, thereby enabling ATP-dependent drug efflux [52]. A wide variety of proton pump inhibitors have been shown to produce strong synergy with fluconazole in azole-resistant Candida isolates [194]. HSP90 (HSP82 in yeast) inhibitors like geldanamycin and ganetespib repress the fungal response to stress, reducing the survival of yeasts during azole-induced inhibition (Figure 3a) [195,197,209]. Histone deacetylase inhibitors have been popular in the development of antifungal therapies but surprisingly few have been tested for synergy with azoles. One exception is givinostat, a potential anti-aspergillosis treatment when paired with posaconazole [198]. Lonafarnib inhibits farnesyltransferase, preventing vital post-translational modifications of fungal proteins and producing moderate azole synergy in Aspergillus (Figure 3b) [199]. Other recently discovered azole synergists with more limited utility inhibit superoxide dismutase, chemosensitising C. albicans to oxidative stressors induced by azole treatment (Figure 3f) [200]. Others such as DIBI, lactoferrin, D-penicillamine and EDTA chelate ions vital for enzymatic function, resulting in dysregulated apoptosis in the cell, but these result in only a weak or narrow-spectrum antifungal synergy (Figure 3d) [201,202,245,253].
Some repurposed pharmaceuticals interact synergistically with azoles through a mechanism that has yet to be fully elucidated. Licofelone failed to make it through clinical trials as an anti-arthritic but was shown to abrogate azole resistance in biofilms of C. albicans [203]. Phenylbutyrate is an aromatic fatty acid used to treat hyperammonaemia and has been shown to weakly synergise with various systemic azoles against resistant Candida species [204]. Other repurposed inhibitors including sedatives, antiseptics, diuretics and analgesics have displayed some promising synergy with azoles against a narrow spectrum of strains that may warrant further exploration [205][206][207][208][210].

Figure 3. Proposed novel mechanisms of synergy between azoles and inhibitors that operate on entirely separate pathways. (a) HSP82 inhibitors like geldanamycin prevent the association of Hsp82 with proteins, inhibiting proper folding of nascent proteins and degradation of senescent proteins. Accumulation of toxic oxygen radicals results in oxidation of proteins, which would ordinarily be degraded. HSP82 inhibitor-azole synergy therefore appears to rise from the accumulation of oxidatively damaged, toxic proteins [195]. (b) Inhibition of protein farnesylation by farnesyltransferase (Ram1:Ram2) inhibitors such as lonafarnib results in reduced translocation of membrane-bound proteins. This decline in the population of membrane proteins combines synergistically with the azole-induced build-up of toxic sterols, resulting in increased membrane instability [199]. (c) Calcium channel blockers and calcineurin inhibitors prevent the activation of the calcineurin complex by calmodulin (Cmd1). This results in an inability of calcineurin to dephosphorylate Crz1, which would ordinarily mobilise it to the nucleus. Crz1 is a transcription factor responsible for the regulation of several stress-related genes. Calcium channel blockers and calcineurin inhibitors therefore impair the cellular stress response, sensitising the cell to the antifungal effect of azoles [182,280,296]. (d) Ion chelators like DIBI and D-penicillamine bind to ions and disrupt cellular ion homeostasis. Evidence suggests that it is the disturbance of calcium homeostasis that results in the promotion of metacaspase (Mca1)-dependent apoptosis when paired with azoles, synergistically enhancing the fungicidal effect [202,253]. (e) AT406 is an antagonist of the inhibition of apoptosis proteins (IAPs) such as Bir1, which is present in both the mitochondrion and the nucleus. There is evidence that membrane weakness due to toxic sterol build-up improves the pro-apoptotic effects of AT406 [256]. (f) Some novel synergists, such as isoquercitrin, have demonstrated the ability to inhibit mitochondrial superoxide dismutase, Sod1.
Sod1 becomes unable to neutralise harmful reactive oxygen species that accumulate during azole treatment, resulting in rapid accumulation of radicals and potentiating a toxic oxidative effect [200]. (g) Direct and indirect inhibitors of both ABC transporters such as Cdr1 and MFS transporters such as Mdr1 prevent the active efflux of toxic compounds such as fluconazole out of the cell, resulting in an accumulation of the drug and extending its antifungal effect. In turn, the destabilised membrane may reduce or prevent incorporation of transmembrane proteins including pumps, further reducing the efflux capabilities of the cell [173,183,240].
Azole Synergy with Natural Products
Nature has long been the source of novel compounds, including important anti-cancer and antimicrobial chemotherapies [297]. Table 3 lists naturally produced synergists that have been included in this review alongside their original biological source.
Some essential oils and essential oil extracts have shown strong potential as lead compounds, but most are only effective at concentrations that are not clinically achievable or are active only in particular isolates. Thymol and carvacrol are well-characterised terpenoids from thyme and oregano essential oils (Table 3) and have been found to synergise with fluconazole in wild-type C. albicans and have displayed some activity in C. auris biofilms [211,212,282]. Acetophenone is a small ketone present in many foods that showed promise as a topical antifungal when paired with ketoconazole [213]. Osthole from tonka bean oil and houttuyfonate from fish mint oil both displayed excellent synergy with azoles in azole-resistant species of Candida [214,215]. Oridonin is a staple of traditional Chinese medicine that has displayed strong synergy with common azoles in resistant isolates of Candida [222]. Menthol extracted from mint synergised well with itraconazole, but only for a fraction of the Candida strains tested [216]. Glabridin from liquorice root synergised with voriconazole in all A. fumigatus strains tested [220]. Several oil extracts displayed excellent anti-biofilm activity when combined with fluconazole, including chito-oligosaccharides, tyrosol from olive oil, allyl isothiocyanate from mustard oil and butylphthalide from celery oil [217][218][219]221]. Some crude essential oils have also been investigated for potential synergy, including oils from Indian frankincense, tea tree, sea buckthorn and guava leaf. While some promising anti-Candida and anti-dermatophyte synergy was observed, these crude oils are too complex to be called therapeutic leads, and more refined bioactive fractions need to be identified [223][224][225][226].
Harmine is an alkaloid that interacts extremely synergistically with multiple systemic azoles, but only in a fraction of the strains tested, while guttiferone, a terpenoid, synergises less acutely but with more total strain coverage, particularly in non-albicans species of Candida [235,236]. Berberine and guttiferone are two of the very few phytochemicals actually taken past the point of a therapeutic lead, undergoing chemical modifications and development into more synergistic novel derivatives in further studies [236,250,298]. Farnesol is an isoprenoid that displays limited or no synergy in planktonic forms of C. auris but is highly synergistic against its biofilms. This contrasts with asiatic acid, a terpenoid that synergises with fluconazole only in planktonic Candida or in vivo, but not in biofilms [238]. Farnesol may, thus, be a promising tool for fighting highly problematic biofilms of C. auris [237].
Phenol derivatives are an important class of organic phytochemicals, often vital for mediating the plant response to stress [299]. Many phenolics have been shown to have antimicrobial activity, and several may be potential synergistic azole co-drugs [300]. Both the lignan magnolol and the diphenol diorcinol have proven effective against every Candida strain tested, with the former synergising well with fluconazole and the latter not technically synergising but sharply decreasing the required inhibitory dosage, bringing it to well within a clinically achievable concentration [239,240]. In contrast, proanthocyanidin, a plant polyphenol, synergised with fluconazole very weakly and in only a small number of the azole-resistant non-albicans Candida isolates tested [241]. Other phenolics show greater promise, particularly for the treatment of cutaneous or oropharyngeal candidiasis. Epigallocatechin gallate and asarones both synergise with topical azoles, with the former particularly effective against Candida biofilms [242,243]. Pyrogallol is a phenol that synergises with various market antifungals by inhibiting active efflux in both azole-susceptible and azole-resistant Candida [244]. Catechol, while unable to inhibit Candida itself, potentiates the antifungal activity of azoles and polyenes, and prevents up-regulation of virulence-associated genes. Curiously, catechol did not reduce the viability of Candida biofilms, but did reduce their hydrophobicity [301].
Antimicrobial peptides are found in cells from all taxonomic Kingdoms and are vital for the defence against infection, potentially synergising with other antimicrobials [302]. Synergy has been seen between the milk protein lactoferrin and azoles in resistant isolates of Candida, but not Cryptococcus [201,245]. Beauvericin is an antibiotic and insecticidal peptide derivative called a depsipeptide that synergises well with fluconazole. With only a limited group of azole-susceptible strains of C. albicans tested, however, it remains unclear whether it can be called a truly promising therapeutic lead [246].
Azole Synergy with Novel Compounds
Numerous new compounds that have been shown to have good antifungal activity as a monotherapy have also been found to synergize with azoles. Synergy is often inconsistent for azole-susceptible and azole-resistant strains, however. Consistent with the cross-resistance observed between azoles, novel azole derivatives are generally effective only in sensitive yeasts and not in resistant ones [247,248]. Novel azoles conjugated with triphenylphosphonium cations have displayed improved mitochondrial targeting and have shown an improved fungicidal effect when combined with Hsp90 inhibitors [303].
Several of the more promising novel compounds currently under investigation are chemically modified derivatives of promising therapeutic leads. Derivatives of the aforementioned isoquinolone and phthalazine, natural metabolites berberine, piperidol, caffeic acid and guttiferone, the anti-inflammatory celecoxib, and phenylpentanol all demonstrated strong synergy with fluconazole in a significant portion of Candida strains tested, and in particular in azole-resistant strains [171,236,[249][250][251]254,257]. Other promising novel compounds like beta-glucan synthase inhibitor SCY-078, ion chelator DIBI, TOR inhibitor AZD8055, efflux modulators, and a group of novel antifungal chalcones have all proven highly synergistic against azole-resistant C. albicans, C. glabrata and other Candida species with reduced sensitivity to azoles [173,252,253,255,260]. A group of novel ultra-short cationic lipopeptides were not able to fully synergise with fluconazole, but did interact additively [259].
Two promising classes of compounds that have been developed in the past year are "dual inhibitors": single compounds designed to attack more than one druggable target at once. One group of these that was strongly antifungal contains a piperazine moiety, allowing it to inhibit 14α-demethylase, and a zinc binding group to inhibit HDAC [304]. Another used fluconazole conjugated with COX inhibitors and was able to consistently inhibit pathogenic Candida [305]. As these dual inhibitors are single compounds, they cannot be considered truly synergistic; however, a class of novel Hsp90/HDAC dual inhibitors has displayed strong synergy with fluconazole in azole-resistant Candida [258].
An interesting novel antifungal mechanism of action is the promotion of dysregulated apoptosis in the fungal cell. Control of apoptosis in yeasts is partially governed by the regulation of Inhibitors of Apoptosis Proteins (IAPs), which prevent progression to cell death [306]. A new IAP antagonist, AT406, promotes apoptosis in C. albicans and the outbreak pathogen Exophiala dermatitidis, sensitising the cell to the oxidative stress produced by the azoles (Figure 3e) [256]. This is a truly novel mode of fungicidal action that may be a source of an entirely new class of azole synergists.
Conclusions
With the rise of opportunistic and emerging fungal pathogens and increasing rates of antifungal resistance, there is an urgent need for new therapies in our antifungal toolbox. As this review has shown, a reliable path to success is to improve commonly used azole antifungals with synergists, and there is substantial diversity in the compounds and approaches that have yielded synergy. Although high-throughput screening has become a popular method for discovering new therapeutic agents, a rational, target-based approach to drug discovery may yield more reliable and effective therapeutic leads. Compounds co-operating with azoles on the ergosterol and mevalonate biosynthesis pathways have displayed consistent synergy in various fungal pathogens. Rational drug design, building on a known mechanism of action or starting with already approved drugs with known pharmacokinetic data, may take newly developed drugs to market more rapidly. On the other hand, hypothesis-free drug screening initiatives may yield novel synergies that would otherwise go undiscovered, opening new avenues for drug design.
There are several gaps in current studies that await further research. From Table 2, there is a clear focus on developing combined treatment strategies to combat Candida, particularly azole-resistant clinical isolates, and with a few notable exceptions there is a paucity of data exploring azole synergy in Cryptococcus and filamentous fungi. There is also a focus on improving the systemic azoles, particularly fluconazole, while for topical pathogens where oral bioavailability is not required, more could be gained by exploring other azoles. Finally, any translation of synergy into clinical use must deal with the issue of co-administration of two (or more) compounds. New systems that package drugs into nanoparticle delivery systems or co-crystallise compounds into a single formulation, may enable the development of single-dose synergistic treatments [307,308]. There are currently no azole-based antifungal combinations used to treat mycoses, but the need for new treatments and the threats to azoles from intrinsic and acquired resistance make drug synergy an increasingly attractive avenue for antifungal development.
Highlighting the immunohistochemical differences of malignant mesothelioma subtypes via case presentations
Abstract Malignant mesothelioma (MM) is a rare tumor of mesothelial cells, with an increasing incidence both in developed and developing countries. MM has three major histological subtypes, in order of frequency, according to the World Health Organization (WHO) Classification of 2021: epithelioid, biphasic, and sarcomatoid MM. Distinction may be a challenging task for the pathologist, due to the unspecific morphology. Here, we present two cases of diffuse MM subtypes to emphasize the immunohistochemical (IHC) differences, and to facilitate diagnostic difficulties. In our first case of epithelioid mesothelioma, the neoplastic cells showed cytokeratin 5/6 (CK5/6), calretinin, and Wilms‐tumor‐1 (WT1) expression, while remaining negative with thyroid transcription factor‐1 (TTF‐1). BRCA1 associated protein‐1 (BAP1) negativity was seen in the neoplastic cells' nucleus, reflecting loss of the tumor suppressor gene. In the second case of biphasic mesothelioma, expression of epithelial membrane antigen (EMA), CKAE1/AE3, and mesothelin was observed, while WT1, BerEP4, CD141, TTF1, p63, CD31, calretinin, and BAP1 expressions were not detected. Due to the absence of specific histological features, the differentiation between MM subtypes could be a challenging task. In routine diagnostic work, IHC may be the proper method in distinction. According to our results and literature data, CK5/6, mesothelin, calretinin, and Ki‐67 should be applied in subclassification.
INTRODUCTION
The first case of malignant mesothelioma (MM) was described in 1767 by Joseph Lieutand. He characterized it as "pleural tumor", while in 1931, Rabin and Klemperer recommended the use of the term mesothelioma. 1 MM is a rare tumor of mesothelial cells, with an increasing incidence in both developed and developing countries. Males are 3-4 times more likely to be affected, and the average age of patients is 70 years. A few cases have been described in children, although in those cases no etiological connection with asbestos exposure has been found. 2,3 This type of malignancy has high mortality due to its aggressive growth, unspecific symptoms, and difficulties in surgical removal. The pleura is by far the most commonly affected area, followed by the peritoneum and the pericardium. MM has been linked to industrial pollutants; above all, asbestos has been defined as a causative agent, and the tumor has also been associated with prior ionizing radiation. 4,5 Symptoms of MM are nonspecific, including dyspnea, chest pain, and general tumor manifestations, such as cachexia, fever, and fatigue. Therefore, the diagnosis is often encumbered and delayed. 6 Primary peritoneal mesothelioma often presents as abdominal pain and is initially misdiagnosed as cholecystitis. 7 The first-line diagnostic tools for MM are imaging techniques, including computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography/computed tomography (PET/CT), and ultrasonography. 8 Despite the development of radiology, a definitive diagnosis can only be established with histological evaluation. 9 According to the World Health Organization (WHO) Classification of 2021, MM has three major histological subtypes, namely epithelioid, biphasic, and sarcomatoid. [10][11][12] MM has to be distinguished from primary or secondary lung tumors, and the MM subtypes then have to be differentiated from each other. Proper distinction may be a challenging task for the pathologist, due to the unspecific morphology.
The preferred treatment option for all MM subtypes is surgical resection, and a favourable outcome has been reported when it is combined with chemo- or radiotherapy, although relapse is still fairly common. 9,[13][14][15] Regardless of the therapeutic options, the prognosis of diffuse MM remains dismal. Amin and coauthors analyzed 888 cases of pleural and peritoneal MM, and in their study the median overall survival of these patients was 15 months, with better outcomes in patients with peritoneal involvement. Favorable prognostic factors have been identified, namely female gender, younger age (less than 45 years), epithelioid histological subtype, stage I category, peritoneal presence, and combined surgical and chemotherapeutic treatment. 16 Sarcomatoid MM represents an even poorer prognostic group. This subtype also mainly arises from the pleura, and an association with asbestos was not found in the majority of cases examined. 17 Two cases of MM are presented here to emphasize the clinicopathological features of this tumor, focusing on immunohistochemical (IHC) characteristics, and to facilitate the establishment of the correct diagnosis.
CASE PRESENTATION 1-EPITHELIOID MM
A 78-year-old male patient with a history of ischemic heart disease, type 2 diabetes, and atrial fibrillation had been treated for months with therapy-resistant hydrothorax. Even though several thoracenteses were carried out, the evaluation of the drained fluid did not confirm malignancy. In April 2021, he was admitted to the University of Szeged to surgically manage the recurring hydrothorax. During the surgery, the preliminary diagnosis was pyothorax or disseminated tumor. Chest X-ray examination of the patient showed signs of congestion of the pulmonary circulation, cardiomegaly, and fluid accumulation in the left sinus ( Figure 1a).
Macroscopic examination of the surgical specimen revealed firm, gray thickening of the pleura. Histological evaluation showed that the entire extent of the pleura had been infiltrated by relatively monomorphic, epithelioid neoplastic cells, forming solid nests or trabeculae between collagen bundles (Figure 1b). IHC was applied to identify the nature of the malignant neoplasm. The neoplastic cells showed cytokeratin 5/6 (CK5/6; Figure 1c), calretinin (Figure 1d) and Wilms-tumor-1 (WT1) expression, and were negative for thyroid transcription factor-1 (TTF-1) and BerEP4. BRCA1-associated protein 1 (BAP1) negativity was seen in the nuclei of the neoplastic cells, reflecting the loss of the BAP1 tumor suppressor gene (Figure 1e). The Ki-67 proliferation fraction was approximately 10% (Figure 1f). The case was concluded to be epithelioid MM.
Since the surgery, the patient has not cooperated with the medical team and has not attended the follow-up examinations; nevertheless, he is still alive (overall survival, OS = 20 months).
CASE PRESENTATION 2-BIPHASIC MM
A 69-year-old male patient with a history of smoking, chronic obstructive pulmonary disease, prostatic hyperplasia, and cataract was admitted to the hospital, due to an accident at home. The patient complained of severe thoracic pain, localized specifically to the ribs, and his left shoulder. During the exploration of the patient's medical history, it was discovered that he had been working as a mechanic, and although he was not known to have been exposed to asbestos, he had been heating his home with coal for a decade.
The first CT scan revealed tumorous thickening of the left sixth and seventh ribs, nearly 8 cm in largest diameter. The sixth, seventh, and eighth ribs and the intercostal muscles were surgically resected, and a GORE-TEX patch was applied for the reconstruction of the chest wall defect.
Histological examination of the specimen showed tumorous infiltration by a biphasic neoplasm, consisting of both epithelioid and spindle cell components. The former component formed solid nests, while the latter created irregular fascicles. The atypical spindle cells demonstrated expression of epithelial membrane antigen (EMA), cytokeratin AE1/AE3 (CK AE1/AE3), and mesothelin, while WT1, epithelial cell adhesion molecule (EpCAM), thrombomodulin, TTF1, p63, CD31, and calretinin remained negative. According to the histomorphology and immunophenotype, a rib-destructing biphasic MM was diagnosed. Complete resection of the tumor could not be confirmed from the surgical specimen.
The patient received four cycles of cisplatin, and pemetrexed combined chemotherapy. At the end of 2020, the PET/CT examination reported recurring tumorous involvement of the pleura, ribs, and also the lungs (Figure 2a). The metastatic foci located in the left upper, and lower lobes of the lung, and the infiltrated chest wall including the residual sixth rib, were removed in a second surgical procedure. The chest wall defect was covered with a GORE-TEX patch.
Microscopic examination described tumor cells with predominantly spindle cell morphology, surrounded by abundant hyalinised stroma. Focally, extreme pleomorphism, multinucleated tumor cells, and a large number of mitotic figures were also seen (Figure 2b). Signs of vascular, lymphovascular or perineural invasion were not present. The IHC examination revealed WT1 and mesothelin positivity in the tumor cells, while CK5/6 and calretinin remained negative (Figure 2c,d). Loss of BAP1 expression was also described, corresponding with the presence of a mutant BAP1 gene (Figure 2e). The mitotic rate was high (21 mitoses/10 high power fields). The Ki-67 proliferation marker was expressed in 60% of tumor cells (Figure 2f). The results of the IHC examination and the microscopic morphology confirmed the diagnosis of sarcomatoid MM. Alongside the GORE-TEX patch, a severe foreign body reaction developed, with chronic inflammation and numerous giant cells.
Novel tumorous infiltration of the base of the left lung has been reported in the most recent follow-up PET/CT examination. A third surgical procedure will be performed with video-assisted thoracic surgery and is due in the near future. The OS of the patient is currently 24 months.
Because complete resection of the tumor in the first surgery was not proven, it can be stated that the tumor developed as a biphasic MM from the beginning, and the more aggressive component with sarcomatoid morphology later recurred. The OS also supports this hypothesis. Even though the patient was not known to have been exposed to asbestos, the literature already describes an association of MM with coal. 18
DISCUSSION
Albeit MM can show diverse histological morphology, prognosis correlates well with the epithelioid, biphasic, and sarcomatoid classification. 12 According to the study by Amin et al., the median survival of the epithelioid subtype was 18 months, while it proved to be 10 months for the biphasic subtype and only 7 months for the sarcomatoid subtype. 16 In a large-scale series by Brustugun and coworkers that examined 1509 MM cases over a 20-year period, an even worse prognosis was observed, with a median survival of 5.1 months for nonepithelioid subtypes. 19 The chemotherapeutic response of MM subtypes has been investigated in some studies. In the meta-analysis by Mansfield et al., the results of 41 trials were analyzed, revealing that the rate of response to chemotherapy was only 21.9% and 13.9% in patients with epithelioid and sarcomatoid MM, respectively. 11

A primary diagnosis of MM is still demanding. The differential diagnosis includes primary lung adenocarcinoma, squamous cell carcinoma, sarcomatoid carcinoma, vascular tumors, and melanoma, and a metastatic origin (breast, gastrointestinal, prostate, kidney, ovary, thyroid cancer, etc.) also has to be excluded. Less frequent, but possibly challenging, diagnoses constitute lymphomas, SMARCA4-deficient thoracic tumors, desmoplastic small round cell tumor, monophasic synovial sarcoma, and CIC-translocated sarcomas. Regarding nontumorous conditions, inflammation, chronic pyothorax, reactive mesothelial hyperplasia, pleuritis, and callus must be considered. 20

Regarding the differential diagnosis of diffuse MM, Ali and coauthors introduced a pattern-based approach in 2018. For distinguishing reactive pleural changes from diffuse MM, the application of the following IHC markers is favored. 21 Desmin and glucose transporter 1 (GLUT1) remain generally positive in mesothelial hyperplasia. 22 In the case of p53, aberrant or non-wild-type expression could serve as a clue in discriminating malignant from benign lesions. 23 EMA positivity has been linked both to reactive and neoplastic lesions, although its combined use with desmin could serve as a solution: EMA positivity alongside desmin negativity favors diffuse MM, while its opposite, EMA negativity with desmin positivity, indicates reactive processes. 24,25 Positivity of insulin-like growth factor II messenger ribonucleic acid binding protein-3 (IMP-3) and thrombomodulin IHC markers tends to be observed more in diffuse MM cases than in reactive ones. 26,27

Differentiation from chronic, active, fibrosing pleuritis may be difficult, as a result of the misinterpretation of fat-like spaces present in organizing pneumonia and pleuritis as real fat tissue infiltration (stromal infiltration) of desmoplastic MM. In such scenarios, S100 can help in discerning actual fat tissue from fat-like structures. In most cases, the discrimination is mainly based on examination of the hematoxylin and eosin (HE) staining, because a laminar appearance has to be present in fibrosing pleuritis: from inside to outside, several layers have to be defined, including fibrin, neutrophil granulocytes, mononuclear inflammatory cells, granulation tissue, and connective tissue composed of hyalinised collagen bundles. 21

Somatic mutation of the tumor suppressor gene BAP1 has been described as fairly common in diffuse MM. The loss of BAP1 can be observed in the majority of epithelioid and mixed cases (60-70%), while it is present in 15% of sarcomatoid MM cases.
Since the mutation results in protein loss, BAP1 negativity can be seen on IHC examination. The lack of BAP1 expression has a low sensitivity (20%-53%) but approximately 100% specificity as a marker of diffuse MM; therefore, BAP1 can serve as a useful tool for distinguishing MM from reactive lesions. 21,28 According to the results of Ali et al., fluorescent in situ hybridization (FISH) could be useful in selected cases in order to differentiate benign and malignant lesions. Since the CDKN2A gene codes two proteins via alternative splicing (p16/INK4A and p14/ARF), its loss is detectable. Although this examination has 100% specificity for the diagnosis of MM, it is not sufficient for differentiating the epithelioid and sarcomatoid subtypes. 21 Further molecular diagnostic procedures have not yet been described. Methylthioadenosine phosphorylase (MTAP) is a newly described IHC surrogate of FISH. 29

According to the recommendations of the current WHO classification, in the distinction of carcinoma versus epithelioid and mixed MM subtypes, at least two carcinoma and two mesothelial IHC markers are required, due to their low sensitivity. 12,30,31 Spindle cell malignancies can be differentiated from sarcomatoid mesothelioma with calretinin and D2-40. 21 Even after finally agreeing upon a diagnosis of MM, the histological evaluation of MM subtypes can also be a challenging task for pathologists because of their nonspecific morphology; therefore, IHC can help in confirming the final diagnosis. 32 In line with the above, the following diagnostic algorithm can be applied in cases of epithelioid MM: after exclusion of reactive processes, carcinomas, and mesenchymal neoplasms, additional EMA, desmin, IMP-3, and thrombomodulin positivity can be observed in the majority of cases, alongside BAP1 loss.
On the other hand, sarcomatoid MM tends to be negative with WT1, B cell lymphoma-2 (Bcl-2), CD34, and desmin. In light of the results of Chirieac et al., the majority of sarcomatoid MM cases showed either negativity or only focal positivity of keratin markers, including CKAE1/AE3, CAM 5.2, and MNF 116. Only one fourth of cases were positive for calretinin. 33 The review by Rossi et al. highlights the possible aberrant expression of several markers, including p40 (5.5%) and p63 positivity in epithelioid MMs, as well as positivity of the TTF1 SP141 clone in 42% of sarcomatoid MM cases. 34 Husain and coauthors emphasize that there is currently no useful IHC recommendation on this matter; furthermore, in some cases, no positivity could be observed, due to overfixation of the surgical specimen. 35 We would like to further illustrate the diagnostic challenges of MM by mentioning the reproducibility examinations previously reported.
The first dates back to 1997, when five pathologists evaluated 77 cases on HE staining, and later evaluated the cases with IHC markers, including cytokeratins, vimentin, HMFG-2, CD15, BerEP4, B72.3, and carcinoembryonic antigen (CEA). The results reflect that IHC did not change the diagnosis of MM in most cases. 36 Brčić et al. focused on the differentiation of MM subtypes. Three pathologists assessed 200 MM cases, one representative HE slide from each; moderate agreement (κ = 0.36) was achieved in the first round, while substantial agreement (κ = 0.63) was observed in the second round, after a consensus meeting. The authors emphasize the use of a strict, consensus-based diagnosis. 37 A diagnosis of biphasic mesothelioma possibly remains the hardest task after all. Based on the reproducibility examination by the International Mesothelioma Panel from the MESOPATH Reference Center, moderate interobserver correlation was achieved (weighted κ = 0.45), with 14 examiners evaluating 544 cases using only BAP1 and p16 IHC stainings. 38

Our two cases and Table 1 summarize the most commonly used and widely available IHC markers for the differentiation of MM subtypes. Mutual positivity was observed with WT1, and mutual negativity was seen with TTF-1 or napsin-A, excluding the possibility of primary lung cancer. In both subtypes, BAP1 was negative, reflecting the loss of gene expression. The most helpful markers in our cases proved to be CK5/6, mesothelin, calretinin, and Ki-67. The epithelioid subtype showed positivity with all of them, and the Ki-67 proliferation marker was 10%. On the other hand, the sarcomatoid subtype remained negative with CK5/6 and calretinin, had focal cytoplasmic positivity with mesothelin, and the Ki-67 proliferation marker was 50%-60%. We recommend the use of these widely available markers.
CONCLUSIONS
The differentiation between MM subtypes can be a challenging task, due to the lack of specific histological features. IHC may be the optimal method for this distinction. WT1, TTF-1, and BAP1 markers help in establishing the diagnosis of MM, while CK5/6, mesothelin, calretinin, and Ki-67 are helpful in the establishment of the subclassification.
AUTHOR CONTRIBUTIONS
Concept and design - Anita Sejben, Tamás Zombori, Tamás Pancsa. Search and evaluation of references - all authors. Drafting the manuscript - Anita Sejben, Tamás Zombori. Approval of final manuscript - all authors.
Study of the Ground-State Geometry of Silicon Clusters Using Artificial Neural Networks
Theoretical determination of the ground-state geometry of Si clusters is a difficult task. As the number of local minima grows exponentially with the number of atoms, to find the global minimum is a real challenge. One may start the search procedure from a random distribution of atoms but it is probably wiser to make use of any available information to restrict the search space. Here, we introduce a new approach, the Assisted Genetic Optimization (AGO) that couples an Artificial Neural Network (ANN) to a Genetic Algorithm (GA). Using available information on small Silicon clusters, we trained an ANN to predict good starting points (initial population) for the GA. AGO is applied to Si 10 and Si 20 and compared to pure GA. Our results indicate: i) AGO is, at least, 5 times faster than pure GA in our test case; ii) ANN training can be made very fast and successfully plays the role of an experienced investigator; iii) AGO can easily be adapted to other optimization problems.
Introduction
Artificial Neural Networks (ANN) and other artificial intelligence algorithms have proved to be very useful tools in theoretical and experimental Chemistry. Recently, Gasteiger and Zupan 1 have compiled some of the most important applications of ANN in Chemistry. Some interesting examples include automatic identification of groups from a molecular spectrum and the determination of the sequence 1 of amino-acids in a protein. Other important applications are: i) comparison of ANN with quantum mechanical techniques for the prediction of molecular properties of inorganic systems 2 ; ii) predictions, made by Sigman and Rives 3 , of atomic ionization potentials using shell model parameters as input data for the ANN. These applications encourage us to explore the potential of ANN in yet another field: the prediction of the ground-state geometry of clusters.
Due to the problems in experimental 4,5 production and selection of silicon clusters, traditional methods fail to establish their ground-state geometry. Therefore, one must infer the structure of these clusters, either from indirect evidence or from theoretical calculations. On the other hand, theoretical calculation of the geometry of the ground state of a large collection of atoms is an extremely complicated task for the following reasons: i) most of these problems require quantum mechanical methods to produce a realistic total energy, and these calculations are very demanding of computer resources 6 ; and ii) the energy hyper-surface depends on a large number of variables and has countless local minima. For instance, a cluster composed of ~150 noble-gas atoms 7 has an estimated number of 10^60 minima! An even larger number of local minima is expected for covalent materials. Obviously, to select the global minimum among so many local minima is a very difficult task.
ANN's attractive features make them useful for modelling, simulation, control and prediction [8][9][10] in many fields of science. In most of these applications, ANN are trained with data collected during operations or experiments. After training, ANN are able to deliver the desired predictions thanks to their natural generalization capability.
Traditionally, practical problems in Chemistry and Physics are transformed into optimization problems. Many algorithms exist to solve these problems and they may be split into two groups: i) gradient-based methods 11,12 ; for instance, the conjugate gradient 12 method is a procedure based on the use of derivatives. These methods are not designed to avoid being trapped by local minima. Thus, they must be repeated several times starting from different initial points, and the best result of a series of iterative procedures is assumed to be the sought solution; and ii) methods that do not use derivatives 13 . Genetic algorithms [14][15][16] and simulated annealing 13,17 are optimization methods that do not depend on the calculation of gradients. They imitate natural processes and they are able to overcome barriers to avoid local minima. No matter which optimization method is used, the choice of an efficient starting point (or points) is of vital importance for a successful search.
Recently, Cundari and Moody 18 used ANN to predict molecular properties of a series of diatomic molecules. They showed that, after proper training, ANN can predict chemical quantities such as vibration frequency, binding energy and equilibrium distance as accurately as "ab-initio" calculations.
Here, we want to associate ANN with a quantum chemistry method to search for the geometry of the ground state of silicon clusters. We used ANN to select good starting points for an iterative optimization method. Specifically, an ANN will provide candidate structures for the genetic algorithm. Differently from ref. 18, which used ANN to compare with first-principles calculations, we used the ANN to accelerate the quantum chemistry calculation instead of replacing it. We chose to test the method under stringent conditions. The miniaturization of devices stimulates the interest in the properties of silicon clusters 19 because silicon remains the most important element for the development of electronic devices. Therefore, the search for structural models of silicon clusters is technologically motivated because structure determines, in good part, the electrical and mechanical properties of the material 20 .
Previous works tried to predict the three-dimensional geometry of silicon clusters. First-principles calculations [21][22][23] are limited to few atoms. Only small clusters (up to ~10 atoms) can be completely investigated 21 . For larger clusters, first-principles calculations are not feasible. For clusters with more than 10 atoms, searches are artificially restricted to models 24,25 . Previous attempts were limited to high symmetries or were based on the geometry of the crystal 26 , or yet, on the reconstructed surfaces of silicon 27 .
In this work, we used ANN to distinguish the affinity among different atomic layers. Starting with information obtained from small clusters whose energies were previously calculated, we wanted ANN to identify which layers tend to attract each other more strongly. Avoiding sequences of layers that ANN predicts as unfavorable, we can keep the search algorithm from wasting valuable time. In this case, ANN learning power plays the role of an experienced investigator.
A small set of training data is used to train the ANN. Obviously, one should not expect that an ANN trained with such information could accurately predict energy values for new clusters. However, it is able to select structures efficiently for the subsequent global optimization algorithm. In our test case, we used the genetic algorithm. Our results show that the ANN significantly increases the efficiency of the optimization algorithm.
Next, we will present how we transformed the chemical problem into a classification problem. Then we discuss the architecture of the ANN and the results obtained by the combination of the classifier ANN with the genetic algorithm.
Artificial Neural Network Coupled to Genetic Algorithm
In order to insert geometric information into the ANN, we have described the structure of a cluster as a piling up of plane layers of atoms. Such a treatment resembles the one presented by Grossman and Mitas 24 . They suggested a geometric description of the silicon clusters as a stacking of triangular elements, with some atoms at the ends, according to Fig. 1.
Figure 1 shows three-dimensional structures described as a series of layers, each one containing three atoms. Here, we described each cluster as a piling up of planar polyatomic layers with up to five atoms. This choice restricts the number of different descriptions of a cluster with N atoms.
We selected a group of possible structures for the description of the layers (Fig. 2).
As an example, Fig. 3 shows the Si6 cluster represented by 5 different descriptions, based on the elements of Fig. 1. It is important to point out that the geometric elements used in each description and its associated energy will be used as input data for the ANN.
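To make the layer-stacking description concrete, the sketch below encodes a cluster as a list of layer types and turns it into a small feature vector. This is only an illustration under stated assumptions: the layer names and the feature choices are hypothetical, and the actual 11-element descriptor fed to the ANN is defined by the authors, not reproduced here.

# Minimal sketch of a layer-stacking encoding; the real ANN descriptor differs.
from collections import Counter

# assumed names for the planar layer elements of Fig. 2 (1 to 5 atoms)
LAYER_TYPES = ["single", "pair", "triangle", "square", "pentagon"]

def encode_stack(stack):
    """stack: list of layer-type indices, ordered from bottom to top."""
    counts = Counter(stack)
    features = [counts.get(t, 0) for t in range(len(LAYER_TYPES))]
    # crude affinity feature: how many adjacent layers share the same type
    features.append(sum(1 for a, b in zip(stack, stack[1:]) if a == b))
    return features

# Si6 described as two stacked 3-atom triangles:
print(encode_stack([2, 2]))  # -> [0, 0, 2, 0, 0, 1]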
The neural classifier was built to filter the configurations that would be supplied to the genetic algorithm as possible candidates. The ANN distinguishes which pilings up of atomic layers would probably have high binding energy, i.e., the more stable structures. It also should be possible to

The first step consists of obtaining the information and the elements necessary for the description and characterization of the system. We supplied, as input data for the ANN, the binding energy of 110 clusters. The training group comprises structures of clusters with 9 or fewer atoms. It is important to point out that the application of the neural classifier is combined with a method of total energy calculation. Any method that we chose would be equally convenient. In this work, we used a Tight Binding (TB) semi-empirical method whose detailed description can be found in the references [28][29][30] .
Next we trained the ANN, adopting as input data the 110 structures and as output data their respective energies. This is an extremely important step because it determines the quality of the predictions to be accomplished by the ANN. We used the training method known as backpropagation 31 . We trained the ANN to discern the structures appropriate for global minimization from the inadequate ones. Thus, based on the previous knowledge of smaller clusters, the ANN distinguishes high binding energy structures and sends them to the Genetic Algorithm (GA). Table 1 shows that the binding energy per atom for Si6 is larger than 3 eV. Since the binding energy per atom is expected to increase with the number of atoms, we chose 2.8 eV as a reference value. This choice takes into consideration that the ANN was trained with very few input data and, therefore, it is not expected that the ANN predictions should be of quantitative quality. It is fast and simple to expand the training set in order to use the same approach for other clusters. Training does not need to be very long to yield reliable results; even fast training improves the performance of the Genetic Algorithm. We make this clear in the Results section.
Finally, the ANN makes its predictions. We select the size of the cluster SiN (N > 9) whose ground-state geometry we want to predict. From every possible combination of layers, the predictor selects those classified as appropriate and eliminates the others. The "good" structures are sent to the GA in two ways: (i) a certain number of them is used as the GA's initial population; (ii) the remaining ones are introduced into the population of the algorithm through mutations. In other words, every n generations a new structure is introduced into the population. The cluster's binding energy is calculated by the TB approach, and this is the quantity that is maximized by the genetic algorithm. In order to test the method just presented, we chose to determine the ground-state geometry of the Si10 cluster. This is an interesting test because this system possesses many local minima and its energy can be calculated rather quickly in the TB approach.
Application and Results
We defined the architecture of our three ANN in the following way: an input layer with 11 elements, an intermediate layer, and an output layer with 2 elements. In the intermediate layer, we used 12 (ANN12), 6 (ANN6) and 3 (ANN3) neurons respectively, whose results will be presented in this section. We have tested sigmoid, hyperbolic tangent and gaussian activation functions; results were not very sensitive to the activation function chosen, and Figs. 4-7 correspond to the gaussian activation function. A fast training is capable of identifying a high percentage of inadequate structures. We decided to stop the training procedure when 60% of the structures of all possible geometries were recognized as inadequate.
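A minimal sketch of such a classifier is given below, assuming an 11-element descriptor per structure and labels obtained by thresholding the tight-binding binding energy per atom at the 2.8 eV reference value quoted above. The descriptor and energy values here are random placeholders, and scikit-learn's MLP only offers logistic/tanh/relu activations, so the gaussian variant is not reproduced.

# Sketch of the neural classifier (11 inputs, one hidden layer, binary output),
# trained on placeholder data; real descriptors and TB energies would come
# from the 110 small-cluster structures.
import numpy as np
from sklearn.neural_network import MLPClassifier

THRESHOLD = 2.8  # eV/atom reference value from the text

rng = np.random.default_rng(0)
X = rng.random((110, 11))                 # placeholder descriptors
e_bind = rng.uniform(2.0, 3.5, size=110)  # placeholder TB energies per atom
y = (e_bind > THRESHOLD).astype(int)      # 1 = "appropriate" structure

clf = MLPClassifier(hidden_layer_sizes=(6,), activation="logistic",
                    max_iter=2000, random_state=0)
clf.fit(X, y)

def is_appropriate(descriptor):
    """Return True if the ANN classifies the layer stacking as appropriate."""
    return bool(clf.predict(np.asarray(descriptor).reshape(1, -1))[0])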
The following procedure was performed for each one of the nets: i) the ANN indicates a group of np = 10 structures chosen randomly to generate the initial population for a genetic algorithm calculation; ii) cross-over is performed as described in reference 16; iii) every nm = 10 generations, two new structures, chosen among those considered appropriate by the ANN, replace the "less-fit" elements of the population. This is a special kind of mutation.
The Genetic Algorithm, pre-conditioned by each one of the ANN, was executed for 3000 generations and the results compared to those of pure GA. We represented an N-atom silicon cluster by a list of 3N atomic cartesian coordinates, that is, a chromosome constituted by N genes, each one composed of three coordinates representing the position of an atom. We use this codification because a bit string, as commonly used, is not very efficient for optimizing the geometry of atomic clusters 32 . Crossover probability was defined according to rank selection. Greedy overselection is a procedure designed to improve the population used in GA. Unfortunately, it can only be used if the number of individuals in the population is rather large (> 1000). Since in our case no more than 10 elements form the initial population, greedy overselection cannot help us.
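The sketch below outlines the AGO loop just described: an ANN-filtered initial population of 10 chromosomes (flat arrays of 3N cartesian coordinates), single-point crossover acting on whole atoms, and the injection of two ANN-approved structures every 10 generations in place of less-fit individuals. The names tb_binding_energy and ann_candidates are placeholders for the tight-binding energy routine and the ANN-selected structures, and the selection scheme is simplified relative to the rank selection used in the paper.

# Simplified sketch of the Assisted Genetic Optimization (AGO) loop.
import numpy as np

rng = np.random.default_rng(0)
N_ATOMS, POP_SIZE, N_GEN, MUT_EVERY = 10, 10, 3000, 10

def crossover(a, b):
    """Single-point crossover acting on whole atoms (genes of 3 coordinates)."""
    cut = 3 * rng.integers(1, N_ATOMS)
    return np.concatenate([a[:cut], b[cut:]])

def run_ago(ann_candidates, tb_binding_energy):
    """ann_candidates: list of (3*N_ATOMS,) coordinate arrays pre-selected by the ANN."""
    pop = list(ann_candidates[:POP_SIZE])
    reserve = list(ann_candidates[POP_SIZE:])   # injected later as mutations
    for gen in range(1, N_GEN + 1):
        fitness = [tb_binding_energy(ind) for ind in pop]
        order = np.argsort(fitness)             # ascending: least fit first
        # replace the least-fit individual with a child of the two fittest
        pop[order[0]] = crossover(pop[order[-1]], pop[order[-2]])
        # every MUT_EVERY generations, inject two ANN-approved structures
        if gen % MUT_EVERY == 0 and len(reserve) >= 2:
            pop[order[1]], pop[order[2]] = reserve.pop(), reserve.pop()
    return max(pop, key=tb_binding_energy)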
Since the genetic algorithm uses random numbers, 10 different calculations for each ANN were considered. We believe that the average of the 10 calculations reliably demonstrates the characteristics of this new procedure. Figures 4, 5 and 6 show a comparison between the best calculation and their average, with a pure genetic algorithm, i.e., a genetic algorithm without ANN. Notice that these graphs present the evolution of the opposite of the binding energy per atom as a function of the number of generations. Thus, the most stable structures correspond to the smallest values of energy.
Figure 4 shows the performance of GA coupled to ANN12. One notices that while the pure genetic calculation takes about 4500 generations to find structures with binding energy per atom larger than 3 eV, ANN12's best calculation reached the same mark in only 500 generations! The average of 10 runs reached 3 eV after just 1600 generations.
Figure 5 shows the performance of GA coupled to ANN6. One notices that while the pure genetic calculation takes about 4500 generations to find structures with binding energy per atom larger than 3 eV, ANN6's best calculation reached the same mark in only 300 generations! The average of 10 runs reached 3 eV after just 300 generations.

Figure 6 shows the performance of GA coupled to ANN3. One notices that while the pure genetic calculation takes about 4500 generations to find structures with binding energy per atom larger than 3 eV, ANN3's best calculation reached the same mark in only 800 generations! The average of 10 runs reached 3 eV after about 2000 generations.

Figures 4-6 show that the ANN dramatically reduce the total number of generations that a genetic algorithm needs to reach our goal, i.e., to find the global minimum. Another interesting observation is that ANN3 faces more difficulties in getting satisfactory results than ANN6 and ANN12. It means that too small an ANN may not be efficient at generalization. On the other hand, as ANN6 and ANN12 present comparable performances, one does not need a large ANN to make our approach work properly.
As a further test we decided to analyze the performance of a GA that used the combined result of the three ANN presented. Only those structures that were simultaneously considered appropriate by all ANN were used to generate the population for the GA. Another calculation was performed using only those structures considered inappropriate by all ANN. These results are shown in Fig. 7.
Figure 8a shows the best geometry obtained after applying AGO for 3000 generations. Figure 8b shows the best geometry obtained by pure GA after the same number of generations. One can easily notice that the structure predicted by pure GA still has a long way to go before reaching the ground-state geometry.
Next, we used ANN6 to select reasonable candidate geometries for Si20. Figure 9 compares AGO's and pure GA's performances. One notices that it takes more than 100 generations of pure GA to reach the starting value of AGO!
Conclusions
We used total energy information for small silicon clusters (Si n, n ≤ 9) to train the ANN. The training followed the standard back-propagation procedure and our only concern was to keep it fast. Next, we took advantage of the ANN's natural ability to recognize affinity between layers of silicon atoms. Thus, it yields candidate solutions to the Genetic Algorithm. This kept the search algorithm from wasting time.
Our results showed that artificial neural networks can be trained to incorporate information from quantum mechanics and to accelerate total energy calculations of polyatomic systems. All three different ANN (ANN3, ANN6, ANN12) could improve GA's performance when compared to pure GA. After a fast training procedure, ANNs select efficient starting points for methods of global optimization. If one generation is taken as the time unit (tu), training takes about 70 tu. Thus AGO saves at least 2000 tu to reach the ground-state geometry for Si10! We consider this method very promising for adaptation to larger clusters (Si n, n > 10) because each generation would take more time but the training time would remain the same.
Finally, our algorithm can be easily adapted for other materials, for other methods of total energy calculation, and also for other optimization problems.
Figure 1. Si14 cluster. a) Global view of the cluster; b) the sequence of atomic layers.
Figure 2. Layers that are piled up to form the clusters. Notice that each corner corresponds to the position of an atom.
Figure 3. Configurations of Si6, with their respective descriptions.
Table 1. Structure and corresponding TB energy for three of the 110 cases used. The first example describes the structure shown in Fig. 3; the second example describes the structure shown in Fig. 2.
Figure 6. Comparative evolution for a hidden layer with 3 neurons.
Figure 5. Comparative evolution for a hidden layer with 6 neurons.
Figure 4. Comparative evolution for a hidden layer with 12 neurons.
Figure 7. Structures considered appropriate vs. structures considered inappropriate.
Figure 8. The Si10 structure with AGO (a) and without AGO (b).
Joint Learning of Brain Lesion and Anatomy Segmentation from Heterogeneous Datasets
Brain lesion and anatomy segmentation in magnetic resonance images are fundamental tasks in neuroimaging research and clinical practice. Given enough training data, convolutional neural networks (CNN) have proved to outperform all existing techniques in both tasks independently. However, to date, little work has been done regarding simultaneous learning of brain lesion and anatomy segmentation from disjoint datasets. In this work we focus on training a single CNN model to predict brain tissue and lesion segmentations using heterogeneous datasets labeled independently, according to only one of these tasks (a common scenario when using publicly available datasets). We show that label contradiction issues can arise in this case, and propose a novel adaptive cross entropy (ACE) loss function that makes such training possible. We provide quantitative evaluation in two different scenarios, benchmarking the proposed method in comparison with a multi-network approach. Our experiments suggest that ACE loss enables training of single models when standard cross entropy and Dice loss functions tend to fail. Moreover, we show that it is possible to achieve competitive results when comparing with multiple networks trained for independent tasks.
Introduction
Segmentation of anatomical and pathological structures in volumetric images is a fundamental task for biomedical image analysis. It constitutes the first step in several medical procedures such as shape analysis for population studies, computed assisted diagnosis/surgery and automatic radiotherapy planning, among many others. Segmentation accuracy is therefore of paramount importance in these cases, since it will necessarily influence the overall quality of such procedures.
During the last years, convolutional neural networks (CNNs) proved to be highly accurate at medical image segmentation (Ronneberger et al., 2015; Kamnitsas et al., 2016, 2017a; Shakeri et al., 2016). In this scenario, a training dataset consists of medical images with expert annotations associated with a particular task of interest. Following a supervised approach, CNNs are trained to perform such a task by learning the network parameters that minimize a given loss function over the training data.

Figure 1: Example of brain MRI with overlapped annotations corresponding to anatomy, lesion and joint segmentations. Note that whatever is considered as background in both the WMH and tumor segmentation datasets should be classified as tissue according to the anatomy dataset. This fact misleads the training process of a single CNN when using standard categorical cross entropy or Dice losses to perform joint learning of lesion and anatomy segmentation.

In the context of brain image segmentation (of main interest in this work), publicly available datasets with manual annotations usually correspond to single tasks. These tasks might be associated with anatomy segmentation (e.g. brain tissues (Mendrik et al., 2015; Cocosco et al., 1997), sub-cortical structures (Rohlfing, 2012)) or pathological segmentation (e.g. brain tumours (BRATS, 2012), white matter hyper-intensities (WMH, 2017)). Even if most publicly available datasets provide image annotations for single tasks, in practice it is usually desirable to train single models which can learn to perform multiple segmentation tasks simultaneously. We focus on the particular case of brain magnetic resonance images (MRI), where segmenting both brain lesions and anatomical structures is especially relevant. For example, in the context of neurovascular and neurodegenerative diseases (Moeskops et al., 2018), white matter hyper-intensity (WMH) segmentation in brain MRI is usually combined with brain tissue segmentation when studying cognitive dysfunction in elderly patients (De Bresser et al., 2010). Another example is related to brain tumour segmentation (Menze et al., 2015). Combining brain tumor segmentation with brain tissue classification (Moon et al., 2002) would have enormous potential value for improved medical research and biomarker discovery. We will explore both application scenarios and provide experimental evidence about the effectiveness of the proposed method to perform joint learning of brain lesion and anatomy segmentation in these cases.
Learning to segment multiple structures from heterogeneous datasets is a challenging task, since labels coming from different datasets may contradict each other and mislead the training process. In the particular case of brain lesion and anatomy segmentation from MRI, Figure 1 illustrates this issue. Given two datasets with disjoint labels (for example, brain tissues and WMH lesions), whatever is considered as background in the lesion dataset, should be classified as tissue according to the anatomy dataset. This raises a label contradiction problem that will be studied in this work.
We interpret brain lesion and anatomy segmentation as two different tasks which are learned from heterogeneous datasets, meaning that each dataset is annotated for a single task. In what follows, we briefly describe related works about learning to segment from disjoint annotations, discuss the issues that arise when training a single CNN model to perform both tasks with standard loss functions, and propose a simple, yet effective, adaptive loss function that makes it possible to train such model using heterogeneous datasets.
Related Work
Similar multi-task problems in the context of image segmentation were explored in recent works. Regarding segmentation of medical images, (Moeskops et al., 2016) studied how a single deep CNN can be used to predict multiple anatomical structures in three different tasks, including brain MRI, breast MRI and cardiac computed tomography angiography (CTA) segmentation. They showed that a standard combined training procedure with balanced mini-batch sampling results in segmentation performance equivalent to that of a deep CNN trained specifically for that task. This problem differs from our setting since every dataset is associated with a different organ. Therefore, labels from different datasets cannot co-exist in a single image, avoiding the label contradiction problem illustrated in Figure 1.
Closest to our work are those by (Fourure et al., 2017;Rajchl et al., 2018), where a single segmentation model is learned from multiple training datasets defined on images representing similar domains. In (Fourure et al., 2017), the authors train a model to perform semantic full scene labeling in outdoor images coming from different datasets with heterogeneous labels. They propose a selective cross entropy loss that, instead of considering a single final softmax activation function defined over the entire set of possible labels, is computed using a dataset-wise softmax activation function. This dataset-wise softmax only takes into account those labels available in the dataset corresponding to the current training sample. A similar strategy is followed by (Rajchl et al., 2018) in the context of brain image segmentation. The authors propose the NeuroNet, a multi-output CNN that mimics several popular and state-of-the-art brain segmentation tools producing segmentations for brain tissues, cortical and sub-cortical structures. Differently from (Fourure et al., 2017), NeuroNet combines a multi-decoder architecture (one decoder for every dataset/task) with an analogous multi-task loss based on cross entropy, defined as the average of independent loss functions computed for every single task. Note that our problem differs from those tackled in both papers: our aim is to produce a segmentation model that assigns a single label to every voxel (considering the union of anatomical and pathological labels). On the contrary, they aim at predicting one and exactly one label from each labelset for every voxel, i.e. multiple labels will be assigned to every voxel.
Learning Brain Lesion and Anatomy Segmentation from Heterogeneous Datasets
Problem Statement: Given a set of K heterogeneous datasets {D_k}, 1 ≤ k ≤ K, let us formalize the joint learning segmentation problem. Each dataset D_k = {(x, y)_n} is composed of pairs (x, y)_n, where x is an image and y a segmentation mask assigning a label l ∈ L_k to every i-th voxel x_i. L_k is the labelset associated with dataset D_k. We assume disjoint labelsets, except for the background label included in all datasets. We aim at learning the parameters Θ of a single segmentation model f(x; Θ) that, given a new image x̂, produces a segmentation mask ŷ where every voxel ŷ_i ∈ L̂ = ∪_{k=1}^{K} L_k. The label space L̂ is built as the union of all labelsets, and we assign a single label to every voxel ŷ_i.

Figure 2: (a) Example of image patches with overlapped segmentation masks sampled from: the lesion datasets (tumor and WMH), the anatomical (brain tissue) dataset and the desired combined segmentation for which we do not have training data. Problematic areas are those for which the original lesion datasets indicate the background label, while they should be annotated with actual tissue labels. (b) The proposed adaptive cross entropy behaves differently depending on the structures of interest under consideration. We reinterpret the meaning assigned to the lesion background label (in blue) as 'any label that is not lesion' and modify the loss function accordingly.
Note that, since the new labelset L̂ includes all labels from all datasets, some structures that were labeled as background in one dataset may be labeled as foreground in other datasets, raising the label contradiction problem shown in Figures 1 and 2.a. In these cases, the foreground labels (e.g. brain tissue labels) should prevail over the background labels in the final mask generated by the segmentation model.
In the case of MRI brain lesion and anatomy segmentation, we have K = 2 brain MRI datasets. The first one, denoted D_A, is annotated with anatomical (brain tissue) labels while the second one, referred to as D_L, considers brain lesions (tumor or WMH are the application scenarios studied in this work). The corresponding label spaces for every dataset are L_A and L_L. In what follows, we describe multiple alternatives to train such a model based on a standard U-Net architecture (Ronneberger et al., 2015).
Naive Models
We first consider a naive model where a single U-Net is trained by minimizing standard loss functions (typical categorical cross entropy and Dice losses), to perform joint learning from heterogeneous datasets. We employ a standard U-Net architecture (see Appendix A for a complete description of the architecture) with a final softmax layer producing |L̂| probability maps, i.e. one for each class in the joint labelset L̂. Patch-based training is performed by constructing balanced mini-batches of image patches. We balance the mini-batches by sampling with equal probability from all datasets and all classes.
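As a rough illustration of this balanced sampling strategy, the sketch below draws each patch by first picking a dataset and then a class with uniform probability. The helper extract_patch is hypothetical (it is assumed to crop a patch centred on a voxel of the requested class) and is not part of the paper.

# Sketch of balanced mini-batch sampling over heterogeneous datasets.
import random

def sample_balanced_batch(datasets, classes_per_dataset, batch_size, extract_patch):
    """datasets: {name: [(image, label_map), ...]};
    classes_per_dataset: {name: [class ids annotated in that dataset]}."""
    batch = []
    names = list(datasets.keys())
    for _ in range(batch_size):
        name = random.choice(names)                     # uniform over datasets
        cls = random.choice(classes_per_dataset[name])  # uniform over its classes
        image, labels = random.choice(datasets[name])
        batch.append(extract_patch(image, labels, cls))
    return batch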
As stated in section 1.1 and illustrated in Figure 2.a, labels coming from different datasets may contradict each other and mislead the training process. Brain tissue segmentations or cortical/sub-cortical structures generally cover the complete brain mass. However, lesion annotations like WMH and tumour cover only a small portion of it. The main issue with the proposed naive model arises from this fact: when sampling image patches containing small lesions, whatever is considered background in the patch should actually be classified as some type of brain tissue. However, since the lesion dataset does not contain brain tissue annotations, it will be considered as background. In other words, the model will be encouraged to classify brain tissue as background. In the results presented in Section 3, we provide empirical evidence of this issue and its impact on model performance.
Multi-network Baseline
A trivial solution to the aforementioned problem is to use multiple independent models, trained for every specific task. In this case, segmentation results are then combined following some kind of fusion scheme. In the case of brain lesion and tissue segmentation, since lesion labels prevail over tissue labels, we can simply overwrite them. However, note that such a model requires extra effort at training time: we need to train a single model for every dataset, increasing not only the training time but also the overall model complexity, i.e. the number of learned parameters. Moreover, at test time, every model is evaluated on the test image and a label fusion strategy must be applied to combine the multiple predictions.
We consider a multi U-Net model as a baseline to benchmark the proposed solution, training a single U-Net with categorical cross entropy on every dataset. Label fusion is implemented by overwriting the brain tissue segmentation with the (non-background) lesion masks.
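A minimal sketch of this fusion step is shown below, assuming both predictions are integer label maps in the joint label space and that the value 0 denotes background in the lesion model's output.

# Label fusion for the Multi-UNet baseline: lesion labels overwrite tissue labels.
import numpy as np

def fuse_predictions(tissue_pred: np.ndarray, lesion_pred: np.ndarray) -> np.ndarray:
    fused = tissue_pred.copy()
    lesion_mask = lesion_pred > 0          # 0 = background in the lesion model
    fused[lesion_mask] = lesion_pred[lesion_mask]
    return fused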
Adaptive Cross Entropy
In this work, we propose to overcome the issues that arise when training a single CNN from heterogeneous (and potentially contradictory) datasets with a new loss function termed adaptive cross entropy (ACE). Let us first recall the classical formulation of cross entropy. Given an estimate distribution q for a true probability distribution p defined over the same discrete set (in our setting, the set L̂ of possible labels, with C = |L̂|), the cross entropy between them is computed as:

$$H(p, q) = -\sum_{j=1}^{C} p_j \log q_j \quad (1)$$

For a given voxel x_i with ground-truth label y_i ∈ L̂ (with 1 ≤ y_i ≤ C = |L̂|), we compute the categorical cross entropy loss between the voxel-wise model prediction f(x_i; Θ) and the corresponding one-hot encoded version of y_i, denoted by e(y_i), as:

$$H(e(y_i), f(x_i; \Theta)) = -\sum_{j=1}^{C} e(y_i)_j \log f(x_i; \Theta)_j = -\log f(x_i; \Theta)_{y_i} \quad (2)$$

The standard voxel-wise cross entropy loss L_H is aggregated as the average loss considering all voxels {x_i}, 1 ≤ i ≤ m, in the image patch:

$$\mathcal{L}_H = \frac{1}{m} \sum_{i=1}^{m} H(e(y_i), f(x_i; \Theta)) \quad (3)$$

The cross entropy loss L_H is minimized when the prediction equals the ground truth. In the multi-task context discussed in this work, this raises the label contradiction problem between lesion background and brain tissue segmentation illustrated in Figure 2.a. This fact motivates the design of the adaptive cross entropy (ACE) loss, which behaves differently depending on the structures of interest under consideration. We reinterpret the meaning assigned to the background label of the lesion dataset as 'any label that is not lesion' and modify the loss function accordingly. The proposed adaptive cross entropy is therefore defined as:

$$H_A(y_i, f(x_i; \Theta)) = \begin{cases} -\log f(x_i; \Theta)_{y_i} & \text{if } y_i \text{ is not lesion background} \\ -\log \sum_{j \in \hat{L} \setminus L(y)} f(x_i; \Theta)_j & \text{if } y_i \text{ is lesion background} \end{cases} \quad (4)$$

where the set {L̂ \ L(y)} contains all labels except those in the current image patch ground truth (referred to as L(y)). Equation 4 shows that ACE employs the standard cross entropy formulation when voxel i is labeled as anything but lesion background. However, when voxel i corresponds to lesion background, we compute −log(s), where s = Σ_{j ∈ L̂\L(y)} f(x_i; Θ)_j is the sum of scores f(x_i; Θ)_j for all classes j that are not present in the patch y (including background). In this way, when the label is not in conflict, minimizing H_A is equivalent to maximizing the score for the correct class. However, when dealing with a voxel whose ground truth is lesion background (i.e. we are not sure about the brain tissue that corresponds to it), the model tends to maximize the probability for all non-lesion classes. Figure 2.b illustrates this idea. In practice, we compute the aggregated ACE loss L^A_H for all voxels {x_i}, 1 ≤ i ≤ m, in the image patch as:

$$\mathcal{L}^A_H = \frac{1}{m} \sum_{i=1}^{m} H_A(y_i, f(x_i; \Theta)) \quad (5)$$

Note that in the ACE formulation, we sum over the scores before taking the logarithm. The reasoning behind having the sum inside the log function is to effectively unify those labels that are not lesion (i.e. background and brain tissue segmentations, which raise the label contradiction problem illustrated in Figure 2.a) into a unique class. We do that by assigning to this virtual class the sum of the scores the model assigned to each of those labels. Note that in the application scenarios studied in this work, lesion labels collide with brain tissues, motivating the ACE formulation given in Equation 4. Nonetheless, given an arbitrary number K of datasets, it is in general straightforward to apply the proposed ACE loss to different labels raising similar issues, by just changing the condition that adapts the loss behaviour.
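A possible PyTorch implementation of the ACE loss is sketched below. It assumes per-voxel logits of shape (m, C), an integer target per voxel, that lesion_bg is the class index playing the role of lesion background, and that patch_labels contains the lesion labels present in the patch ground truth, so the adaptive term sums the scores of background and all tissue classes. This is one reading of Equation 4, not the authors' reference code.

# Sketch of the adaptive cross entropy (ACE) loss for a single image patch.
import torch
import torch.nn.functional as F

def ace_loss(logits, target, lesion_bg, patch_labels, eps=1e-8):
    """logits: (m, C) raw scores; target: (m,) long labels in the joint space."""
    probs = F.softmax(logits, dim=1)                        # f(x_i; Theta)
    m, C = probs.shape
    # standard cross entropy term: -log of the ground-truth class score
    standard = -torch.log(probs.gather(1, target.view(-1, 1)).squeeze(1) + eps)
    # adaptive term: -log of the summed score over labels outside L(y)
    keep = torch.ones(C, dtype=torch.bool, device=logits.device)
    if patch_labels:
        keep[list(patch_labels)] = False                    # exclude lesion labels
    adaptive = -torch.log(probs[:, keep].sum(dim=1) + eps)
    # use the adaptive term only for voxels labeled as lesion background
    return torch.where(target == lesion_bg, adaptive, standard).mean()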
Experiments & Results
Six different datasets were used in the experimental comparative analysis. We consider joint learning of brain tissue segmentation and two separate types of lesions: brain tumor and WMH. We trained models specialized for brain tissue + WMH, and other models for brain tissue + tumor, showing that the proposed ACE loss function can generalize to different scenarios.
Brain tissues + WMH scenario
We employed the training data provided by the MRBrainS13 Challenge (Mendrik et al., 2015) (brain tissue annotations), the WMH Segmentation Challenge (WMH, 2017) (WMH lesions) and MRBrainS18 (MRBrainS, 2018) (brain tissues + WMH). We trained/validated our models using the training partition of MRBrainS13 as the anatomical dataset (D_A) and the WMH Segmentation Challenge as the lesion dataset (D_L). For testing, we used the joint segmentations provided for training in the MRBrainS18 Challenge, to evaluate the simultaneous predictions. The data from the MRBrainS13 Challenge consists of 5 images with brain tissue annotations, of which 4 were used for training, and the remaining one for validation. The WMH Segmentation Challenge provides 60 images with the corresponding WMH reference segmentation, of which 48 were used for training, and the rest for validation. The MRBrainS18 Challenge provides 7 images, which were all used for evaluation.

Figure 4: Qualitative results for both scenarios (brain tissues + WMH in the top row, and brain tissues + tumor segmentation in the bottom row). Note that using naive cross entropy and Dice losses results in very poor performance. The proposed ACE makes it possible to train a single model for both tasks with equivalent performance to multiple networks by solving the label contradiction issues.
Brain tissues + Tumor scenario
Given the lack of datasets with simultaneous annotations for brain tumors and tissues, we resorted to using synthetic and simulated images. We trained/validated our models using 15 images from the Brainweb (Cocosco et al., 1997) synthetic brain phantoms with brain tissue annotations for the anatomical dataset (D A ). For the lesion dataset (D L ) we employed 50 simulated tumor images available from the BRATS2012 challenge (BRATS, 2012). For testing, we simulated 20 brain tumors using Tumorsim (Prastawa et al., 2009), using 5 healthy Brainweb phantom probability maps. In that way, combined segmentations of brain tissue and tumors were available for testing. Note that, for the sake of fairness, healthy images used to simulate brain tumors for testing were not included in the training dataset (D A ).
Results & Discussion
Figure 3 summarizes the quantitative results for both application scenarios, when comparing the Multi-UNet model with single models trained with naive cross entropy and Dice functions as well as the proposed ACE 1 (see Figure 4 for qualitative results). As expected, the Multi-UNet model trained with standard cross entropy outperforms the single models trained with naive losses. More importantly, our proposed ACE makes it possible to train a single model for joint learning of brain lesion and anatomy from heterogeneous datasets, achieving equivalent performance to that of Multi-UNet. This is due to the fact that both, Multi-UNet and the single ACE models, are not affected by the label contradiction problem illustrated in Figure 2.a. Note that in case of brain tissue segmentation, the single model trained with ACE tends to outperform even the Multi-UNet model. As discussed in (Rajchl et al., 2018), learning jointly from hierarchical sets of class labels has the potential to increase the overall accuracy based on theory derived from multi-task learning. We hypothesize that this increase in performance is related to this fact: since the model trained with ACE learns to predict lesion and tissues simultaneously, it can also learn label interactions that the Multi-UNet can not capture.
A deeper analysis of the quantitative results reveals that the single UNet model trained with the proposed ACE achieved equivalent performance to the MultiUNet in WMH segmentation (no significant differences according to Wilcoxon test), better or equivalent performance in terms of brain tissue segmentation (depending on the brain structure) and only worse performance for edema and tumor. This worse performance for edema and tumor is explained by the fact that the MultiUNet was trained using all available modalities per dataset, while the single UNet was trained using only those modalities available in both, anatomical and lesion datasets. This is a limitation of our approach when compared with multiple UNets trained for specific tasks: since we perform joint training of a single model with fixed number of input channels, we can only use those sequences available in both anatomy and lesion datasets. In case of edema and brain tumor segmentation, the Multi-UNet was trained with multiple MR modalities for the tumor segmentation task (it uses T1, T1g, T2 and FLAIR) while the single UNet was trained using only T1 images (all details about available MR modalities for every dataset are provided in Appendix B). This requirement may represent a limitation if the datasets depend on different types of image modalities. There are alternatives that could be considered to deal with this issue like imputing the missing modalities by means of image synthesis or using ad-hoc techniques like the HeMIS (Hetero-Modal Image Segmentation) model by (Havaei et al., 2016).
Even if all images used in the experiments are MRI, there is a shift in the distribution of image intensities when we go from datasets used at training and test time. This is known as the multi-domain problem, and is usually addressed using domain adaptation techniques (Kamnitsas et al., 2017b). In this work, we did not take into account the multi-domain problem. In the future, we plan to extend the proposed method and incorporate domain adaptation, further improving the accuracy of the results.
Conclusions
In this work we proposed the adaptive cross entropy loss, a novel function to perform joint learning of brain lesion and anatomy segmentation from heterogeneous datasets using CNNs. The proposed loss takes into account potential label contradiction conflicts that can arise when training segmentation algorithms for multiple tasks using datasets with disjoint annotations. We trained single CNN models using the proposed ACE, naive cross entropy and Dice losses, and compared their performance with a Multi-UNet model where independent CNNs were trained for every task. Experimental evaluations in two scenarios provided empirical evidence about the effectiveness of the proposed approach.
In the future, we plan to extend the evaluation of the proposed loss function to other CNN architectures (Deepmedic (Kamnitsas et al., 2016) for example) and to alternative brain MRI segmentation scenarios (e.g. considering subcortical structures as anatomical segmentation or traumatic brain injuries as lesions). Moreover, we plan to investigate the effects of the multi-domain problem in this context, and incorporate domain adaptation strategies to address this issue when learning from heterogeneous datasets.
Regarding the ACE formulation, we plan to explore alternative weighting mechanisms within the loss function that could help to alleviate the class-imbalance problems that could emerge when dealing with tiny structures of interest.
Appendix B

• Brain Tissue + WMH scenario: The Multi-UNet model was trained and tested using T1+IR+FLAIR for the brain tissue segmentation task, and T1+FLAIR for the WMH segmentation task. The single UNet models were trained using only T1+FLAIR for all tasks.
• Brain Tissue + Tumor scenario: The Multi-UNet model was trained and tested using T1 for the brain tissue segmentation task, and T1+T1g+T2+FLAIR for the tumor segmentation task. The single UNet models were trained using only T1 for all tasks.
Note that this setting gives some advantage to the Multi-UNet model over the single model trained with ACE, since it uses more MR sequences for the lesion segmentation task. This is reflected in the results shown in Figure 3, especially for the brain lesion segmentation task, where the better performance of the Multi-UNet model with respect to the single model trained with ACE can be explained by this difference in the number of sequences used to train them.
Graph-based Kinship Reasoning Network
In this paper, we propose a graph-based kinship reasoning (GKR) network for kinship verification, which aims to effectively perform relational reasoning on the extracted features of an image pair. Unlike most existing methods which mainly focus on how to learn discriminative features, our method considers how to compare and fuse the extracted feature pair to reason about the kin relations. The proposed GKR constructs a star graph called kinship relational graph where each peripheral node represents the information comparison in one feature dimension and the central node is used as a bridge for information communication among peripheral nodes. Then the GKR performs relational reasoning on this graph with recursive message passing. Extensive experimental results on the KinFaceW-I and KinFaceW-II datasets show that the proposed GKR outperforms the state-of-the-art methods.
INTRODUCTION
Some research [1] in biology finds that human facial appearance contains important kin-related information. Inspired by this finding, many methods [2,3] have been proposed for kinship recognition from facial images. The goal of kinship verification is to determine whether or not a kin relation exists for a given pair of facial images. Kinship verification has attracted increasing attention in the computer vision community due to its broad applications such as automatic album organization [4], missing children searching [2], social media-based analysis [5], and children adoptions [6].

Fig. 1. Key differences between our method and other methods. Most existing methods usually apply a similarity metric such as cosine similarity, or a multilayer perceptron, to the extracted image features, which cannot fully exploit the hidden relations. In contrast, our method builds a kinship relational graph and performs relational reasoning on this graph.
Although a variety of efforts [2,3,6] have been devoted to kinship verification, it is still far from ready to be deployed for real-world use. Several challenges hinder the development of kinship recognition. First, like other face-related tasks [7,8], facial kinship verification is confronted with large variations in pose, scale, and illumination, which makes learning discriminative features quite challenging. Second, unlike face verification, which investigates the relations between different images of the same identity, kinship verification has to discover the hidden similarity inherited through genetic relations between different identities, which naturally leads to a much larger appearance gap between intra-class samples, especially when there are significant gender differences and age gaps.
Many methods have been proposed to address these challenges over the past few years. Most of them focus on learning discriminative features for each facial image of a paired sample. For example, Lu et al. [2] proposed the NRML metric to pull intra-class samples as close as possible and push inter-class samples in a neighborhood as far as possible in the learned feature space. Nevertheless, these approaches usually apply a similarity metric [9] or a multilayer perceptron (MLP) [6] to the extracted features to obtain the probability of kinship between two facial images, which cannot fully exploit the genetic relations between the two features.
In this paper, we focus on how to compare and fuse the two extracted features of a paired sample to reason about genetic relations. We hypothesize that when people reason about kinship relations, they usually first compare the genetically related attributes of two individuals, such as cheekbone shape, eye color, and nose size, and then make a comprehensive judgment based on these comparison results. For two given features of a paired sample, we consider that each dimension of the feature encodes one kind of kin-related information. Therefore, we explicitly model the reasoning process of humans by comparing the features in each dimension and then fusing the comparisons. More specifically, we build a star graph named the kinship relational graph for the two features to perform relational reasoning, where each peripheral node models one dimension of the features and the central node serves as a bridge for communication. We further propose a graph-based kinship reasoning (GKR) network on this graph to effectively exploit the hidden kin relations of the extracted features. The key differences between our method and most existing methods are visualized in Figure 1. We validate the proposed GKR for kinship verification on two benchmarks, the KinFaceW-I [2] and KinFaceW-II [2] datasets, and the results show that our method outperforms state-of-the-art approaches.
RELATED WORK
Kinship Verification: In the past few years, many papers [3,6,10,11] have been published on kinship verification, and most of them focus on extracting discriminative features for each image. These methods can be divided into three categories: hand-crafted methods, distance metric-based methods, and deep learning-based methods.
Hand-crafted methods require researchers to design the feature extractors by hand. For example, as one of the earliest works, Fang et al. [10] proposed extracting color, facial parts, facial distances, and gradient histograms as the features for classification. Zhou et al. [4] further presented a Gabor-based gradient orientation pyramid (GGOP) feature representation method to make better use of multiple sources of feature information. Distance metric-based methods [12,13] are the most popular methods for kinship verification; they aim to learn a distance metric such that the distance between positive face pairs is reduced and that between negative pairs is enlarged. Yan et al. [3] first extracted multiple features with different descriptors and then learned multiple distance metrics to exploit complementary and discriminative information. A discriminative deep metric learning method was introduced in [9], which learned a set of hierarchical nonlinear transformations with deep neural networks. Zhou et al. [11] explicitly considered the cross-generation discrepancy and proposed a kinship metric learning method with a coupled deep neural network to improve performance. Recent years have witnessed the great success of deep learning; however, few deep learning-based works have addressed kinship verification. Zhang et al. [6] made the first attempt at kinship verification with deep CNNs and demonstrated the effectiveness of their method. Hamdi [14] further studied video-based kinship verification with deep learning. All these methods focus only on learning good feature representations and ignore how to reason about the kin relations with the extracted embeddings. Graph Neural Networks: A variety of practical applications deal with the complex non-Euclidean structure of graph data. Graph neural networks (GNNs) were proposed to handle these kinds of data by learning features on graphs. Li et al. [15] proposed gated graph neural networks (GG-NNs) with gated recurrent units, which could be trained with modern optimization techniques. Motivated by the success of convolutions on image data, Kipf and Welling introduced graph convolutional networks (GCNs) [16] by applying the convolutional architecture to graph-structured data. A layer-wise propagation rule was utilized in GCNs, and both local graph structure and node features were encoded for the task of semi-supervised learning. Veličković et al. [17] further proposed graph attention networks (GATs) to assign different weights to different nodes in a self-attentional way. The generated weights did not require any prior knowledge of the graph structure, and GATs were computationally efficient with a larger model capacity due to the attention mechanism. GNNs have proven to be a good tool for relational reasoning. For example, Sun et al. [18] constructed a recurrent graph to jointly model the temporal and spatial interactions among different individuals with GNNs for action forecasting.
PROPOSED APPROACH
In this section, we first present the problem formulation. Then we illustrate the details of the kinship relational graph building process. Lastly, we introduce the proposed graph-based kinship reasoning (GKR) network.
Problem Formulation
We use P = {(x_i, y_i) | i = 1, 2, ..., N} to denote the training set of paired images with kin relations, where x_i and y_i are the parent image and child image, respectively, and N is the size of the positive training set. The negative training set is built as N = {(x_i, y_j) | i, j = 1, 2, ..., N, i ≠ j}, where each parent image and each unrelated child image form a negative sample. However, the size of the negative training set is much larger than that of the positive training set, given that |P| = N and |N| = N(N − 1). We therefore randomly select negative samples from the set N to build a balanced training set. The goal of kinship verification can be formulated as learning a mapping function whose input is a paired sample (x_i, y_j) and whose output is the probability that x_i and y_j have a kin relation (i.e., that i = j). Most existing methods aim to learn a good feature extractor g(·). Hand-crafted methods design shallow features by hand to implement g(·), whereas deep learning-based methods learn a deep neural network as the extractor g(·). Metric learning-based methods usually first use hand-crafted or deep features as the initial sample features (g'(x_i), g'(y_j)) and then learn a distance metric over them, obtaining in the end the projected features (g(x_i), g(y_j)). Having obtained the features (g(x_i), g(y_j)) ∈ (R^D, R^D), we still need a mapping function f(·) that maps them to the probability of a kin relation between x_i and y_j. Most current methods focus mainly on the feature extractor g(·) and neglect the design of f(·). One choice is to simply concatenate the two features and send them to a multilayer perceptron (MLP):

f(g(x_i), g(y_j)) = MLP(g(x_i) || g(y_j)),  (2)

where || represents the concatenation operation. Another commonly used choice is the cosine similarity of the two features:

f(g(x_i), g(y_j)) = ⟨g(x_i), g(y_j)⟩ / (‖g(x_i)‖ ‖g(y_j)‖).  (3)

Neither option can fully exploit the relations between the two features.
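For concreteness, the two baseline choices of f(·) above can be sketched in a few lines of NumPy; the hidden size and weight shapes are illustrative, not values from the paper.

```python
import numpy as np

def f_mlp(gx, gy, W1, b1, w2, b2):
    """Eq. (2): a one-hidden-layer MLP on the concatenated pair [gx || gy].
    gx, gy: (D,) features; W1: (2D, H); b1: (H,); w2: (H,); b2: scalar."""
    h = np.maximum(np.concatenate([gx, gy]) @ W1 + b1, 0.0)  # ReLU hidden layer
    return float(h @ w2 + b2)                                # scalar kinship logit

def f_cosine(gx, gy):
    """Eq. (3): cosine similarity of the two extracted features."""
    return float(gx @ gy / (np.linalg.norm(gx) * np.linalg.norm(gy) + 1e-12))
```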
In this paper, we aim to design a new f (·) to effectively perform relational reasoning on the two extracted features.
Building a Kinship Relational Graph
In recent years, deep CNNs have achieved great success in many computer vision tasks, such as image classification, object detection, and scene understanding, which demonstrates their superior ability for feature representation. Therefore, we utilize a deep CNN as the feature extractor g(·) in this paper.
Having obtained the deeply learned sample features (g(x i ), g(y j )), we consider how to perform relational reasoning on them. To achieve this, we first observe how humans reason about kin relations. As the genetic traits are usually exhibited by facial characteristics, humans reason about the kin relations by comparing the genetically related attributes to discover the hidden similarity. For example, if we find that the persons on two facial images have the same eye color and similar cheekbones, the probability that they are related will be higher. After comparing a variety of informative facial attributes of two persons, humans make the final decision by combining and analyzing all the information.
We explicitly model the above reasoning process by constructing a kinship relational graph and performing relational reasoning on this graph. We consider that each dimension of the extracted features encodes one kind of genetic information and we can reason about the kin relations by comparing and fusing all the genetic information. Since we use the same CNN to extract features for two images, the values of two features in the same dimension represent the comparison of one kind of kinship related information encoded in that dimension. We use one node in the kinship relational graph to denote the comparison of one feature dimension, then we have D nodes which describe the comparisons in all dimensions. To fuse these comparisons, we need to define the interactions of these D nodes. One intuitive way is to connect all the nodes given that any two nodes may have a relation. However, such a graph greatly increases the computational complexity of subsequent operations. Therefore, we create a super node that is connected to all other nodes while all other nodes are only connected to the super node. The super node is also the central node of the star-structured kinship relational graph, which plays an important role in the interaction and information communication of D surrounding nodes. In this way, we build the kinship relational graph and will elaborate on the reasoning process with the proposed graph-based kinship reasoning network in the following subsection.
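As a quick illustration of why the star structure keeps subsequent operations cheap, the graph can be represented by a simple edge list with only D edges, versus D(D − 1)/2 edges for a fully connected graph over the D comparison nodes. The sketch below uses D = 512, matching the ResNet-18 feature dimension reported later; the variable names are illustrative.

```python
D = 512                                    # one peripheral node per feature dimension
CENTRAL = D                                # index reserved for the central (super) node
edges = [(CENTRAL, d) for d in range(D)]   # star graph: D edges in total
# A fully connected graph over the D comparison nodes would instead need
# D * (D - 1) // 2 = 130816 edges.
```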
Reasoning on the Kinship Relational Graph
Having built the kinship relational graph, we consider how to perform relational reasoning on this graph. Recently, graph neural networks (GNNs) have attracted increasing attention for representation learning of graphs. Generally speaking, GNNs employ a recursive message-passing scheme, where each node aggregates the messages sent by its neighbors to update its feature. We follow this scheme and propose the graph-based kinship reasoning (GKR) network to perform relational reasoning on the kinship relational graph.
Formally, let G = (V, E) denote the kinship relational graph with node set V and edge set E. Each node in the graph has a feature vector, and we have V = {h_c} ∪ {h_d | d = 1, 2, ..., D}, where h_c represents the feature vector of the central node and h_d that of the d-th surrounding node. The edge set of this graph is E = {e_cd | d = 1, 2, ..., D}, where e_cd denotes the edge between nodes h_c and h_d. The proposed GKR propagates messages according to the graph structure defined by E, and the aggregated messages are utilized to update the node features. As mentioned above, we set the initial node features to the values of the two extracted image features in each dimension:

h_d^0 = [g_d(x_i), g_d(y_j)],

where h_d^0 ∈ R^2 denotes the initial feature of the d-th node, and g_d(x_i) and g_d(y_j) represent the values in the d-th dimension of the features g(x_i) and g(y_j), respectively. In this way, each node encodes one kind of kinship-related information.
The proposed GKR consists of K layers, where each layer represents one time step of the message passing phase. The k-th layer (1 ≤ k ≤ K) transforms the node features h_c^{k−1}, h_1^{k−1}, ..., h_D^{k−1} ∈ R^{F_{k−1}} into h_c^k, h_1^k, ..., h_D^k ∈ R^{F_k} with message passing to perform relational reasoning, where F_{k−1} and F_k are the corresponding feature dimensions. Having obtained the node features of the (k−1)-th layer, we first generate the message of each node that will be sent out in the following message passing step. The message of a surrounding node is generated as

m_d^k = h_d^{k−1} W_mess,

where W_mess ∈ R^{F_{k−1}×F_k} is employed to transform the node features into messages. We apply the same operation, with the same parameter W_mess, to the central node:

m_c^k = h_c^{k−1} W_mess.

With these messages, we propagate and aggregate them according to the graph structure, and then update the node features with the aggregated messages. For the peripheral nodes, since the central node is the only neighbor, the aggregation is implemented by concatenating the message of the central node with the node's own message; the node feature is then updated as

h_d^k = [m_c^k || m_d^k] W_peri,

where W_peri ∈ R^{2F_k×F_k} is used to fuse all information and generate the new feature vector. For the central node, we first aggregate all incoming messages,

m̄_c^k = AGGREGATE({m_d^k | d = 1, 2, ..., D}),

where the function AGGREGATE(·) is implemented by a pooling operation. The feature of the central node is then updated as

h_c^k = [m_c^k || m̄_c^k] W_cen,

where W_cen ∈ R^{2F_k×F_k} is utilized to update the feature of the central node. In this way, we obtain the updated features h_c^k, h_1^k, ..., h_D^k by message passing. We repeat the above process K times and obtain the final node feature vectors h_c^K, h_1^K, ..., h_D^K ∈ R^{F_K}. To make the final decision, we combine (concatenate) all these features and send them to an MLP, which outputs a scalar value. The mapping function f(·) of our proposed method can therefore be formulated as

f(g(x_i), g(y_j)) = MLP(h_c^K || h_1^K || ... || h_D^K).  (10)

Lastly, we obtain the probability of a kin relation between x_i and y_j by applying a sigmoid function to the scalar value f(g(x_i), g(y_j)).
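The message passing step above can be summarized in a short NumPy sketch, following the reconstruction given here (shared message weights, concatenation-based updates, max-pooling as AGGREGATE, features treated as row vectors). The ReLU nonlinearity and the function names are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def gkr_layer(h_c, h_peri, W_mess, W_peri, W_cen):
    """One GKR message-passing step on the star graph.
    h_c:    (F_prev,)      central node feature
    h_peri: (D, F_prev)    peripheral node features
    W_mess: (F_prev, F_k)  shared message transform
    W_peri: (2*F_k, F_k)   peripheral update weights
    W_cen:  (2*F_k, F_k)   central update weights
    """
    m_peri = h_peri @ W_mess                 # messages from peripheral nodes, (D, F_k)
    m_c = h_c @ W_mess                       # message from the central node, (F_k,)

    # Peripheral update: each peripheral node only sees the central node's message.
    h_peri_new = np.concatenate(
        [np.broadcast_to(m_c, m_peri.shape), m_peri], axis=1) @ W_peri

    # Central update: aggregate all incoming messages (max-pooling AGGREGATE).
    m_agg = m_peri.max(axis=0)               # (F_k,)
    h_c_new = np.concatenate([m_c, m_agg]) @ W_cen

    return np.maximum(h_c_new, 0.0), np.maximum(h_peri_new, 0.0)  # assumed ReLU

# Initial peripheral features pair up the two D-dimensional embeddings gx and gy:
#   h_peri0 = np.stack([gx, gy], axis=1)   # shape (D, 2), i.e. h_d^0 = [g_d(x), g_d(y)]
```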
Note that the proposed GKR and the feature extractor network g(·) are trained end to end. We employ the binary cross-entropy loss as the objective function:

L = − t log σ(f(g(x), g(y))) − (1 − t) log(1 − σ(f(g(x), g(y)))),

where t ∈ {0, 1} indicates whether the pair (x, y) has a kin relation and σ(·) denotes the sigmoid function.
In this way, our method is optimized in a class-balanced setting. Lastly, we depict the above pipeline in Figure 2.
EXPERIMENTS
In this section, we conducted extensive experiments on two widely-used kinship verification datasets to illustrate the effectiveness of the proposed GKR.
Datasets and Implementation Details
We employ two widely used databases, KinFaceW-I [2] and KinFaceW-II [2], for evaluation, both of which were collected from the internet. Four different types of kinship relations are considered in these two datasets: father-son, father-daughter, mother-son, and mother-daughter. The main difference between the two databases is that each image pair with a kin relation in KinFaceW-I comes from different photos, whereas in KinFaceW-II the pair is collected from the same photo. We employed ResNet-18 as the feature extractor network g(·), initialized with ImageNet pretrained weights. Accordingly, the dimension D of the extracted image features was 512. Since both databases are relatively small, data augmentation is a crucial step to improve performance. We performed data augmentation by first resizing the facial images to 73 × 73 pixels and then randomly cropping a 64 × 64 patch. Following the design choice of most GNN methods [16], we used a two-layer (K = 2) GKR and set F_1 = 512, F_2 = 4. The Adam optimizer was used with a learning rate of 0.0005. The batch size was set to 16 and 32 for KinFaceW-I and KinFaceW-II, respectively, given that the KinFaceW-I database is only about half the size of the KinFaceW-II database. For a fair comparison, we performed five-fold cross-validation following the standard protocol provided in [2].
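As a reference for reproducing the preprocessing described above, the resize-then-random-crop augmentation could be expressed as follows, assuming a standard PyTorch/torchvision pipeline (the authors' actual implementation may differ):

```python
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.Resize((73, 73)),   # resize facial images to 73 x 73 pixels
    transforms.RandomCrop(64),     # randomly crop a 64 x 64 patch for augmentation
    transforms.ToTensor(),         # convert to a tensor for the ResNet-18 extractor
])
```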
Comparison with the State-of-the-Art Methods
We first compare our GKR with several state-of-the-art methods, including metric learning-based methods and deep learning-based methods. Table 1 shows the comparison results on the KinFaceW-I and KinFaceW-II datasets. We observe that our method achieves an average verification accuracy of 79.2% on KinFaceW-I and 90.6% on KinFaceW-II, outperforming the state-of-the-art methods. Some early metric learning-based methods, such as MNRML [2] and DMML [3], learn their metrics with hand-crafted features, which leads to unsatisfactory results. The WGEML method [20] achieves state-of-the-art results with deep features, which demonstrates the superiority of deep learning. Compared with WGEML, our method improves the mean accuracy by 0.5% and 7.8% on KinFaceW-I and KinFaceW-II, respectively, which shows the superior relational reasoning ability of the proposed GKR. Zhang et al. [6] proposed CNN-Basic and CNN-Point, which directly learn deep neural networks for kinship verification to exploit the power of deep learning. Our method, which is also deep learning-based, outperforms CNN-Point by 1.7% and 2.2% on KinFaceW-I and KinFaceW-II, respectively. Note that CNN-Point contains 10 CNN backbones whereas our approach employs only one, which further illustrates the effectiveness of the proposed GKR.
Ablation Study
To investigate the influence of individual design choices and validate the effectiveness of the proposed GKR, we further conducted ablation experiments in this subsection. Initialization of the Central Node: The initialization of the central node is an important design choice given that the central node is the bridge of the kinship relational graph. One strategy is to aggregate the initial values of all other nodes by mean or max pooling. Another way is to initialize the central node with constant values, such as 0, 0.5, and 1. The results are listed in Table 2 and we see that the initialization with constant value 0.5 gives the best performance, which is employed in the following experiments.
Pooling Operations of AGGREGATE(·): Two different pooling operations, max-pooling and mean-pooling, were considered to implement the function AGGREGATE(·). Table 3 tabulates the verification accuracy of these two pooling operations. We observe that max-pooling achieves better results, perhaps because it can better select the more important information, whereas mean-pooling treats all messages equally.
Mapping Function f (·): To validate the effectiveness of our proposed GKR, we compare it with other widely used design choices for f (·): MLP as formulated in (2) and cosine similarity as formulated in (3). For a fair comparison, all of them employ the ResNet-18 to extract image features. Table 4 shows the results on two datasets. We see that our method outperforms the MLP and cosine similarity by a large margin on both databases, which demonstrates that our method can better exploit the relations of two extracted features and perform relational reasoning with the kinship relational graph.
CONCLUSION
In this paper, we have proposed a graph-based kinship reasoning network to effectively exploit the genetic relations between the two features of a sample. Different from other methods, the proposed GKR focuses on how to compare and fuse the two extracted features to perform relational reasoning. Our method first builds a kinship relational graph for the two extracted features and then performs relational reasoning on this graph with message passing. Extensive experimental results on the KinFaceW-I and KinFaceW-II databases demonstrate the effectiveness of our approach.
Adherence to the Mediterranean Diet and Overall Cancer Incidence: The Netherlands Cohort Study
Background Mediterranean diet adherence has been associated with reduced risks of various cancer types. However, prospective findings for overall cancer risk are inconclusive. Objective The aim of this study was to examine sex-specific relations of Mediterranean diet adherence with overall cancer risk. Design This analysis was conducted as part of the prospective Netherlands Cohort Study. Baseline data on diet and other cancer risk factors were collected using a self-administered questionnaire. Participants were followed up for cancer incidence for 20.3 years through record linkages with the Netherlands Cancer Registry and the Dutch Pathology Registry. The alternate Mediterranean diet score without alcohol was the principal measure of Mediterranean diet adherence. Participants/setting The study population consisted of 120,852 inhabitants of the Netherlands, who were aged 55 to 69 years in September 1986. Main outcome measure The primary outcome was overall cancer incidence. Statistical analyses performed Cox regression analyses (case-cohort design) were used to estimate hazard ratios (HRs) and 95% confidence intervals (CIs) for associations of Mediterranean diet adherence with incidence of cancer (subgroups). In total, 12,184 male and 7,071 female subjects with cancer had complete data on potential confounders and were eligible for inclusion in the Cox models. Results Middle compared with low Mediterranean diet adherence (alternate Mediterranean diet score without alcohol) was significantly associated with a reduced overall cancer risk in women (HR [95% CI]: 0.85 [0.75-0.97]). Decreased HR estimates for the highest Mediterranean diet adherence category and per 2-point increase in score were also observed, but did not reach statistical significance in multivariable-adjusted analyses. In men, there was no evidence of an association for overall cancer risk (HR per 2-point increment [95% CI]: 1.02 [0.95-1.10]). Results for cancer subgroups, defined by relations with tobacco smoking, obesity, and alcohol consumption, were largely similar to the overall findings. Model fits diminished when alcohol was included in the Mediterranean diet score. Conclusions Mediterranean diet adherence was not associated with overall cancer risk in male participants of the prospective Netherlands Cohort Study. HR estimates in women pointed in the inverse direction, but lost statistical significance after full adjustment for confounding in most cases.
It has been estimated that just under 10% of the cancer diagnoses in the Netherlands in 2010 could be attributed to a less than optimal diet. 2 The traditional Mediterranean diet of the early 1960s is characterized by a high consumption of plant foods (ie, vegetables, legumes, fruits, nuts, and whole grains). Meat and dairy products are consumed in low to moderate amounts, which in combination with the abundant use of olive oil leads to the high ratio of monounsaturated to saturated fatty acids that is characteristic of the Mediterranean diet. Alcohol, particularly wine, is consumed in moderate amounts in the traditional Mediterranean diet. 3,4 Mediterranean diet adherence has been associated with reduced risks of multiple, but not all, types of cancer. [5][6][7][8] Therefore, it would be useful to have an impression of the relation between Mediterranean diet adherence and overall cancer risk as well. Currently, the available prospective evidence for the potential relation between a priori defined Mediterranean diet adherence and overall cancer risk is inconclusive. [9][10][11][12][13] Higher Mediterranean diet adherence was associated with a reduced overall cancer risk in some studies, 9,10,13 but not in others. 11,12 Furthermore, the potential inverse relation might be stronger in women compared with men. 9 However, only 3 prospective studies have reported sex-specific associations thus far. 9,10,13 The aim of the present study was to evaluate the association between a priori defined Mediterranean diet adherence and overall cancer risk in men and women participating in the prospective Netherlands Cohort Study (NLCS). It was hypothesized that Mediterranean diet adherence is inversely related to overall cancer risk in both sexes. In addition to overall cancer risk, associations of Mediterranean diet adherence with risk of cancer subgroups defined as cancers known to be related to tobacco smoking, obesity, and alcohol consumption were investigated. As a final aim, performances of models including Mediterranean diet scores with and without alcohol were compared.
MATERIALS AND METHODS

Study Population and Cancer Follow-Up
The prospective NLCS was initiated in September 1986. [14][15][16][17] In total, 58,279 men and 62,573 women, aged 55 to 69 years, completed the baseline questionnaire on cancer risk factors, including diet. A case-cohort approach was used to process and analyze the data efficiently. 14,17,18 In the case-cohort design, accumulated person-time at risk in the whole cohort is estimated based on a randomly sampled subcohort. Cases are identified in the entire cohort. The NLCS subcohort (n = 5,000) was randomly selected immediately after baseline and biennially followed up for vital status. Subcohort members contributed to the number of person-years at risk from baseline until December 31, 2006, or censoring (cancer diagnosis, death, emigration, or loss to follow-up). The NLCS was approved by institutional review boards from Maastricht University and the Netherlands Organization for Applied Scientific Research. Study participants consented to participation by filling out the baseline questionnaire.
Incident cancer cases in the total NLCS cohort were identified through annual record linkage with the Netherlands Cancer Registry and the nationwide Dutch Pathology Registry. 15 During 20.3 years of follow-up, 25,848 participants were diagnosed with a microscopically confirmed, first primary cancer (excluding basal cell carcinoma of the skin). In the total NLCS cohort, 4.5% of the participants reported prevalent cancer at baseline (other than skin cancer) and were excluded. Additional exclusion of participants with incomplete or inconsistent data regarding diet, alcohol, or Mediterranean diet adherence left 22,228 cancer cases (men: 13,657, women: 8571) and 4084 subcohort members (men: 2057, women: 2027), who were eligible for inclusion in the present analysis ( Figure). Besides overall cancer incidence, incidence of cancers known to be related to tobacco smoking, obesity, and alcohol consumption were considered as secondary end points. The subgroup of smoking-related cancers comprised cancers of the oral cavity (including lip) and pharynx, esophagus, stomach, colorectum, liver, pancreas, nasal cavity and paranasal sinuses, larynx, trachea, lung, uterine cervix, ovary, kidney, ureter, and urinary bladder as well as myeloid leukemia. 10,13,19,20 Obesity-related cancers were defined as cancers of the esophagus (adenocarcinoma), stomach (cardia), colorectum, liver, gallbladder, pancreas, breast, corpus uteri, ovary, kidney, and thyroid, and multiple myeloma. 13,20,21 Finally, alcohol-related cancers included cancers of the oral cavity (including lip) and pharynx, esophagus (squamous cell carcinoma), colorectum, liver, larynx, and breast. 10,19,20 Results for the cancer subgroups were compared with results obtained combining all other cancers (ie, cancers not classified as being related to tobacco smoking, obesity, or alcohol consumption, respectively).
Exposure Assessment
The NLCS baseline questionnaire included a 150-item, semiquantitative food frequency questionnaire (FFQ) focusing on the study participant's dietary habits over the past 12 months. This FFQ performed adequately as judged by comparison with 9-day diet records. 16 Spearman correlation coefficients for intakes of food groups ranged from 0.38 for vegetables to 0.89 for alcoholic beverages, with a median of 0.60. Moreover, the average test-retest correlation of the FFQ was 0.66 for all nutrients. 22 Intakes of most nutrients were found to be relatively stable over at least 5 years. After 5 years, correlations between baseline and repeated measurements had declined by on average only 0.07. 22 Mean daily nutrient intakes were calculated from the FFQ data utilizing the Dutch food composition table of the year 1986. 23 In addition to dietary intake, the self-administered baseline questionnaire measured detailed smoking habits, anthropometry, physical activity, educational level, reproductive factors, and other risk factors related to cancer. 14 Body mass index (BMI) was calculated as weight in kilograms divided by the square of height in meters, using the self-reported height and weight data. To calculate the level of nonoccupational physical activity, the minutes spent per day on cycling or walking, shopping, walking the dog, gardening, and sports or exercise were summed, as described previously. 24
Mediterranean Diet Adherence
The relative level of Mediterranean diet adherence was determined using the alternate Mediterranean diet score (aMED), 25,26 which is a variation of the original traditional Mediterranean diet score. 27,28 aMED is composed of 9 dietary components (scored 0 or 1 point each) that are typical of the Mediterranean diet. 25,26 Participants obtain 1 point for mean daily intakes at or above the sex-specific median of vegetables (excluding potatoes), legumes, fruits, nuts, whole grains, and fish. Reverse scoring is applied to the intake of red and processed meats. Finally, 1 point each is assigned for a moderate alcohol consumption of 5 to 25 g per day and for a high (at or above the sex-specific median) ratio of monounsaturated to saturated fatty acids. Thus, a maximum score of 9 points can be obtained, reflecting the highest level of Mediterranean diet adherence. 25,26 Food intakes were adjusted to daily energy intakes of 2,500 (men) and 2,000 (women) kcal to control for differences in energy intake. 26,27 Furthermore, a reduced variant of the original aMED was created that did not contain the alcohol component (aMEDr), 6,29 because alcohol consumption has been associated with an increased risk of multiple cancer types even at moderate levels. 19,30,31 aMEDr was considered to be the primary measure of Mediterranean diet adherence.
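To make the scoring rule concrete, a minimal sketch of the aMEDr computation is given below; the component names are illustrative placeholders, the cutoffs must be the cohort's own sex-specific medians of energy-adjusted intakes, and the handling of ties in the reverse-scored meat component is an assumption.

```python
def amedr_score(intake, median):
    """intake, median: dicts of energy-adjusted daily intakes per component.
    Returns aMEDr (0-8); adding the alcohol point (5-25 g/day) gives aMED (0-9)."""
    positive = ["vegetables", "legumes", "fruits", "nuts",
                "whole_grains", "fish", "mufa_sfa_ratio"]
    score = sum(1 for c in positive if intake[c] >= median[c])        # 1 point each
    if intake["red_processed_meat"] < median["red_processed_meat"]:   # reverse-scored
        score += 1
    return score
```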
Statistical Analyses
Sex-specific hazard ratios (HRs) and 95% confidence intervals (95% CIs) for the relation between Mediterranean diet adherence and overall cancer incidence were estimated by Cox proportional hazards modelling using duration of follow-up as the time scale. Standard errors of the HRs were estimated using the robust Huber-White sandwich estimator, which accounts for the additional variance associated with sampling from the total cohort. 32 The validity of the proportional hazards assumption was evaluated by scaled Schoenfeld residuals tests. 33 Because of the large number of cases, these tests may easily yield significant results. Therefore, -ln(-ln) survival plots were visually inspected, and it was concluded that the proportional hazards assumption was met for the exposure variables.
Age- and multivariable-adjusted effect estimates were obtained for the Mediterranean diet scores, which were modeled as categorical (low [≤3], middle [4-5], or high [≥6]) 26 and continuous (per 2-point increment) terms. The multivariable-adjusted HRs were corrected for potential confounding by age at baseline, cigarette smoking (status, frequency, and duration), BMI, height, alcohol consumption (except for models containing the original aMED including alcohol), total daily energy intake, highest level of education, nonoccupational physical activity, and family history of cancer. Effect estimates obtained among women were additionally adjusted for reproductive factors (age at menarche, parity, age at first birth, age at menopause, oral contraceptive use, and use of postmenopausal hormone replacement therapy). All potential confounders were predefined and selected from the literature. For each adherence category, sex-specific median Mediterranean diet score values were determined in the subcohort. Next, these values were fitted as continuous terms in the Cox regression models to test for linear trends. Akaike's Information Criterion (AIC) was used to evaluate whether inclusion of alcohol in the Mediterranean diet score affected the model performance. 34 Besides overall cancer incidence, sex-specific associations of aMEDr were also estimated with incidence of smoking-, obesity-, and alcohol-related cancers as well as cancers not classified as being related to these factors. Statistical significance of differences in HRs obtained for cancers related vs not related to tobacco smoking, obesity, or alcohol consumption was assessed using a competing risks procedure as previously described. 35 Standard errors for the observed differences in HRs were estimated using a bootstrapping method developed for the case-cohort design. 36 Furthermore, sex-specific associations between aMEDr and overall cancer risk were estimated within strata of cigarette smoking status, alcohol consumption, BMI, educational level, and family history of cancer. To assess the statistical significance of potential differences across strata, Wald tests were performed on interaction terms between aMEDr and the stratifying covariates. Finally, the main analyses were repeated excluding the first 2 years of follow-up to check for potential reverse causation, since the presence of preclinical cancer at baseline could have influenced dietary habits. Analyses were performed using Stata software (version 15; 2017, StataCorp, College Station, TX). Statistical significance was indicated by a 2-sided P value < .05.

Figure. Flow diagram of the number of participants of the Netherlands Cohort Study who are eligible for inclusion in the analyses concerning overall cancer (case-cohort design).

RESULTS

Table 1 summarizes baseline characteristics of male and female subcohort members and subjects with cancer. The mean (standard deviation) values of aMEDr were 3.9 (1.6) and 4.0 (1.6) in male and female subcohort members, respectively. Largely comparable aMEDr values were observed in subjects with cancer. Furthermore, daily intakes of the aMEDr components did not notably differ between subcohort members and subjects with cancer, regardless of sex. Subjects of both sexes with cancer were more likely to smoke and more often reported a family history of cancer compared with subcohort members. Moreover, male subjects with cancer had a higher level of alcohol consumption than subcohort members.
Concerning reproductive factors, female subjects with cancer were older at the birth of their first child and were less often users of oral contraceptives than subcohort members.
Age- and multivariable-adjusted HRs and 95% CIs for associations of Mediterranean diet adherence with overall cancer risk are shown in Table 2, for men and women separately. Of the eligible study population, 3,499 subcohort members (men: 1,834, women: 1,665) and 19,255 subjects with cancer (men: 12,184, women: 7,071) had complete data on all potential confounders and could be included in the Cox regression analyses.
Mediterranean diet adherence was not associated with overall cancer risk in men in age- and multivariable-adjusted analyses (Table 2). Multivariable-adjusted HRs (95% CIs) for aMEDr were 0.99 (0.84-1.17) comparing the highest with the lowest adherence category and 1.02 (0.95-1.10) per 2-point increase in score, respectively. aMEDr was not significantly associated with any of the cancer subgroups in men. For both sexes, largely comparable HRs and 95% CIs for overall cancer risk were obtained when alcohol was included in the Mediterranean diet score. However, AIC values were higher for models in which Mediterranean diet adherence was assessed using the Mediterranean diet score variant including alcohol (aMED), indicating a worse fit. The respective AIC values for the categorical Mediterranean diet score variants (without vs with alcohol) were 172,977 vs 173,025 in men and 101,366 vs 101,402 in women (data not shown). Associations between aMEDr and overall cancer risk within strata of potential effect-modifying factors are presented in Table 4. The relation of aMEDr with overall cancer risk in men became more positive with increasing level of education (P interaction = .049), reaching statistical significance in the highest category. Although aMEDr did not significantly interact with educational level in women, a similar pattern was observed. Associations did not significantly differ across strata of cigarette smoking status, alcohol consumption, BMI, and family history of cancer in either men or women. Excluding the first 2 years of follow-up did not essentially change the associations (data not shown).
DISCUSSION
In this NLCS analysis, sex-specific associations of a priori defined Mediterranean diet adherence with risks of overall cancer and cancer subgroups defined by relations with 3 major cancer risk factors (tobacco smoking, obesity, and alcohol consumption) were investigated. In women, middle compared with low aMEDr values were significantly associated with a reduced risk of overall cancer and the majority of the cancer subgroups investigated. Other associations in women were not statistically significant after full adjustment for confounding, but all estimates were below 1. No association was observed between aMEDr and risk of overall cancer or any of the cancer subgroups in men. Inclusion of alcohol in the Mediterranean diet score diminished the model performance.
Even though the association of Mediterranean diet adherence with overall cancer risk comprises a combination of potentially diverging associations with individual cancer (sub)types, overall cancer risk is an interesting end point for epidemiological studies. It provides insight into the overall possible benefits of Mediterranean diet adherence and the potential of the Mediterranean diet as a dietary strategy for cancer prevention. Findings of previously conducted prospective studies evaluating the relation between a priori defined Mediterranean diet adherence and overall cancer risk have been inconclusive and were rarely specified by sex.
A priori defined Mediterranean diet adherence has previously been significantly associated with a reduced overall cancer risk in the total European Prospective Investigation into Cancer and Nutrition (EPIC) cohort as well as in the Greek EPIC cohort. 9,10 Comparing the highest with the lowest Mediterranean diet adherence category in the total EPIC cohort, HRs (95% CIs) of 0.93 (0.88-0.99) and 0.93 (0.89-0.96) were observed for men and women, respectively. 13 In the present analysis of the NLCS cohort, a priori defined Mediterranean diet adherence was not associated with overall cancer risk in men. In regard to women, although the multivariable-adjusted associations in female NLCS participants were not statistically significant in most cases, effect estimates were more strongly inverse than those observed for women in the total EPIC cohort, which did reach statistical significance, possibly due to the larger number of cases. 10 Additional cohort studies in Germany and France have investigated the association between Mediterranean diet adherence and overall cancer risk in men and women together and did not observe an association. 11,12 Besides the prospective cohort evidence, a reduced overall cancer risk (borderline significant, P = .05) was indicated in patients with coronary heart disease who followed an α-linolenic acid-rich Mediterranean-type diet as opposed to a control diet close to the step 1 prudent diet of the American Heart Association in the randomized Lyon Diet Heart Study. 37 However, results should be interpreted with caution because they were based on only 24 incident cancer cases. Differential adjustment for potential confounding factors and residual confounding, particularly by tobacco smoking and female reproductive factors, may have contributed to the varying associations between a priori defined Mediterranean diet adherence and overall cancer risk that have been reported thus far. Other potentially contributing factors include differences in the method of Mediterranean diet assessment, the composition of the study population, and the time period and/or geographical region in which the study was conducted. The distribution of the specific cancer types in the overall cancer outcome is likely to vary over time and between countries because of, for example, different distributions of risk factors and the introduction of cancer screening programs. Some specific cancer types are inversely associated with Mediterranean diet adherence, whereas null associations have been observed for others.
For example, Mediterranean diet adherence has been inversely associated with risks of postmenopausal breast cancer (particularly of the estrogen receptor negative subtype) and subtypes of esophageal and gastric cancer in previous NLCS analyses. 6,38 However, no association was found with colorectal cancer risk, and a positive association was observed with nonadvanced prostate cancer risk. 39,40 Therefore, differences in the relative incidence of specific cancer types could also (partly) be responsible for the inconsistent findings concerning overall cancer risk.
Results of the present study indicated that the inverse association between Mediterranean diet adherence and overall cancer risk, if present, might be restricted to women. In line with these findings, slightly stronger inverse associations were observed in female participants of EPIC-Greece, though the interaction by sex did not reach statistical significance. 9 Cancers arising in men and women may etiologically differ. The sex-specific levels of sex hormones may influence tumor development and could therefore potentially modulate the association of dietary factors with cancer risk. [41][42][43][44][45] Apart from other factors, sex-related differences may also exist in exposure levels to risk factors and carcinogen metabolism. 41,[43][44][45] Furthermore, the disparate associations of Mediterranean diet adherence with commonly diagnosed sex-specific cancers (ie, postmenopausal breast and prostate cancer) are likely to have contributed to the heterogeneous relations of Mediterranean diet adherence with overall cancer risk for men and women. It should be noted that other studies did not observe clear differences in associations between the sexes, 10,13 stressing the importance of additional research on this topic.
Associations with Mediterranean diet adherence among women in the present study appeared comparable for overall cancer risk and risks of cancer subgroups defined by the presence of a relation with tobacco smoking, obesity, or alcohol consumption. In contrast to the findings for women, significant heterogeneity was observed in all subgroup comparisons in men. However, associations with Mediterranean diet adherence did not reach statistical significance for any of the subgroups in men, and the differences did not seem to be relevant. The statistical power in the present study was high, especially for men, which increased the likelihood of small and irrelevant differences becoming statistically significant. Additionally, one should realize that the distribution of the individual cancer types differs between the subgroups in men and women, and that in certain subgroups a substantial proportion can consist of sex-specific cancers.
Regarding cancers related vs not related to obesity and alcohol consumption, similar results were obtained in previous studies. 10,13 The inverse association with Mediterranean diet adherence was stronger for smoking-related cancers compared with cancers not related to tobacco smoking in the total EPIC cohort, 10 whereas the opposite was observed in the Greek EPIC cohort. 9 Furthermore, associations did not seem to differ in a Swedish cohort. 13 These contrasting findings may have resulted from differences in the classification of cancer types as being related to tobacco smoking or not. For example, although cancers of the colorectum/large bowel were classified as smoking-related in the studies by Couto et al 10 and Bodén et al, 13 they were considered unrelated to smoking in the study by Benetou et al. 9 Moreover, the subgroup of cancers not related to tobacco smoking constituted all cancers not classified as being related to smoking in one study, 13 whereas the 2 other studies selected specific cancer types. 9,10 The cancer-preventive effect of the Mediterranean diet seems biologically plausible. The high intake of dietary antioxidants in the Mediterranean diet (eg, polyphenols and vitamins from plant foods and olive oil) and the resulting higher total antioxidant capacity that has been associated with adherence to this dietary pattern may defend the body against the DNA-damaging effects of free radicals and other oxidants. [46][47][48] Moreover, the anti-inflammatory effects of polyphenols and the favorable fatty acid profile of the Mediterranean diet (high in anti-inflammatory omega-3 polyunsaturated fatty acids) may reduce inflammation. 47,49 Several additional mechanisms have been proposed for the cancer-preventive effect of the Mediterranean diet, which were, among others, related to body weight regulation 50 and the low consumption of red and processed meats. 31,48 Important strengths of the NLCS include the large sample size, prospective design, and nearly complete follow-up of 20.3 years, which make information and selection biases unlikely. The statistical power was adequate to perform sex-specific analyses for overall cancer risk as well as risks of cancer subgroups defined by relations with three major cancer risk factors. The possibility of residual confounding was minimized through comprehensive adjustment for cigarette smoking and other potential confounders, including reproductive factors in women.
Limitations of this study include the lack of updated dietary information during follow-up and possible measurement errors in the exposure assessment, which may have attenuated some associations. The use of cohort-specific cutoffs in the assessment of Mediterranean diet adherence may pose a final weakness. Participants with high aMEDr values in the non-Mediterranean study population of the NLCS could potentially be classified in intermediate or low adherence categories in populations with higher intakes of typically Mediterranean foods. As expected, intakes of typically Mediterranean food groups (eg, vegetables, fruits [including nuts], and legumes) were lower in NLCS subcohort members compared with participants of the Greek EPIC cohort, whereas the opposite was observed for the intake of meat. 28 Among men, median daily intakes were 207 and 550 g/d for vegetables, 166 and 363 g/d for fruits (including nuts), 6 and 9 g/d for legumes, and 141 and 121 g/d for meat (all types) in the NLCS subcohort and EPIC-Greece, respectively. The respective intakes among women were 219 and 500 g/d for vegetables, 215 and 356 g/d for fruits, 5 and 7 g/d for legumes, and 124 and 90 g/d for meat.
CONCLUSIONS
Mediterranean diet adherence was not associated with risk of overall cancer or any of the cancer subgroups in male participants of the prospective NLCS. Multivariable-adjusted HR estimates in women pointed in the inverse direction, but were only statistically significant when comparing the middle with the lowest aMEDr category. Associations of Mediterranean diet adherence with subgroups of cancer defined by relations with tobacco smoking, obesity, and alcohol consumption closely resembled the results obtained for overall cancer risk in women.
Hyperspectral measurements of immature Lucilia sericata (Meigen) (Diptera: Calliphoridae) raised on different food substrates
Immature Lucilia sericata (Meigen) raised on beef liver, beef heart, pork liver and pork heart at a mean temperature of 20.6°C took a minimum of 20 days to complete development. Minimum development time differences within stages were observed between the meat types (pork/beef), but not the organ types (liver/heart). Daily hyperspectral measurements were conducted and a functional regression was completed to examine the main effects of meat and organ type on daily spectral measurements. The model examined post feeding larval spectral measurements of insects raised on beef liver alone, the effect of those raised on pork compared with those raised on beef, the effect of those raised on heart compared with those raised on liver and the interactional effect of those raised on pork heart compared with those raised on beef liver. The analyses indicated that the spectral measurements of post feeding L. sericata raised on pork and beef organs (liver and heart) are affected by the meat and organ type.
Introduction
Blow flies (Diptera: Calliphoridae) are holometabolous insects and, in some species, most of the immature stages are associated with carrion or animal remains. These remains are an ephemeral resource on which the larval stages depend for their nutritional requirements. The immature blow fly develops at a predictable, temperature-dependent rate on a known food source, and it is precisely this on which medico-legal entomology is based [1].
One area of medico-legal entomology, a subdiscipline of forensic entomology, involves examining immature blow fly development in order to estimate the tenure of blow flies developing on decomposing corpses in death investigations. The estimated tenure, or time since colonization, can then be used to infer the minimum elapsed time since death occurred [2][3][4].
Unfortunately, due to many variables, the minimum time since death, although accurate, is a modest estimate [5]. Since methods of extrapolating time since death are estimates, precision is lessened in the latter stages of the life cycle, as those stages are much lengthier than earlier stages [5]. Finding techniques with more precision in estimating the minimum time since death is of great importance. Forensic entomologists have sought ways to improve current estimation methods by using gas chromatography/mass spectrometry to identify changes in cuticular hydrocarbons commensurate with development [6][7][8][9][10], micro-computed tomography scanning to assist in identifying morphological changes in intra-puparial development throughout metamorphosis [11], integrating gene expression variations with conventional methods [12][13][14] and, most recently, examining microorganisms associated with carrion [10,[15][16][17][18], all of which improve current methods. Hyperspectral remote sensing has joined the ranks of these methods by improving current means and also contributing to much-needed methods offering confidence intervals [5,19,20], as gene expression has [12,21,22], thereby addressing the United States National Research Council's criticisms of many forensic sciences [2,4,23,24]. Hyperspectral remote sensing is a non-intrusive means of sensing and recording reflected energy from a target surface [25]. The use of hyperspectral remote sensing in medico-legal entomology provides a non-destructive technique to record reflectance from the ageing insect and can be used to identify differences in the insect surface over time, which can in turn be used to estimate time within a stage of development [19,20]. Depending on the wavelengths examined, it combines exhaustive detail from the visible spectrum and the short-wave, near- and far-infrared. In this case, with each measurement, it is a means to identify changes in the target surface of the immature blow fly as it develops. A spectral signature for each day for each target surface is identified, and these signatures change daily [19]. These target surface changes can be used to further identify demarcations within immature stages and allow for more precise estimates of time within larval stadia [19,20] and the intra-puparial period [19,26].
An obstacle that potentially arises when using laboratory-collected data in case analyses is that most laboratory experiments are not completed on entire remains but instead on beef liver or other animal tissues. Differences in development times have been found among different food substrates for several different blow fly species [27-35]. Protophormia terraenovae (Robineau-Desvoidy) development on beef liver was shown to be representative of a whole animal (wounded rat carcass), which supports the use of beef liver in laboratory experiments [34]. Comparisons of other blow fly species' development on whole animals versus animal tissues have not been made, and so cannot be commented on; only comparisons between tissues are available [27-33,35-37]. The effects of the food substrate on hyperspectral measurements have not been examined. The objective of this research was to examine the effects of different food substrates on developing immature Lucilia sericata (Meigen) and, consequently, the effects on the hyperspectral measurements of the lengthy post feeding stage.
Insect rearing
Black film canisters positioned on their sides, each containing approximately 50 g of beef liver, were used to collect eggs from two separate colonies of L. sericata [19]. The two colonies originated from recently wild-trapped flies collected from Burnaby, Langley and Vancouver, British Columbia; they were provided by Simon Fraser University's Biological Sciences Department and were used within a year of trapping. The colonies were maintained on a diet of water, sugar and milk powder ad libitum. Beef liver was also added to the cages regularly as an oviposition medium.
Once eggs were oviposited (~two hours), they were divided among 16 treatments: four each of beef liver, beef heart, pork liver and pork heart. Each treatment consisted of a one gallon/4 L wide-mouth glass jar with approximately a five centimetre depth of moistened sawdust topped by a folded industrial paper towel and the appropriate meat source (approximately 250 g). An estimated 200-240 eggs, combined from the two colonies, were placed onto the meat in each treatment; the number of eggs was estimated based on egg mass size. Each jar was secured with two pieces of industrial paper towel and two elastic bands to prevent escape during the post feeding stage.
All treatments were placed into a Conviron E/7 environmental chamber set for 75% relative humidity and a 14:10 (L:D) photoperiod. A mean constant temperature of 20.6 °C was maintained in the chamber, recorded by ACR Systems Inc. SmartButton data loggers and confirmed daily with Fisherbrand™ thermometers. The treatments were rotated daily to account for temperature differences within the chamber. The development stage reached was recorded daily and was presented as thermal units, accumulated degree days (ADD). A base temperature of 0 °C was applied since the base temperature for this species is unknown for this geographic location [38]. ADD was calculated as [39]:

ADD = time (days) × (temperature (°C) − lower developmental threshold (°C))
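As a minimal illustration of this thermal-unit calculation, the following Python sketch sums daily contributions above the base temperature; the 20-day, 20.6 °C example reproduces the constant-temperature case used in this study, while the function itself is illustrative rather than the authors' own code.

    # Accumulated degree days (ADD): sum of (mean daily temperature - base temperature),
    # with any negative daily contribution floored at zero.
    def accumulated_degree_days(daily_mean_temps_c, base_temp_c=0.0):
        return sum(max(t - base_temp_c, 0.0) for t in daily_mean_temps_c)

    # Example: 20 days at a constant 20.6 degC with a 0 degC base gives 412 ADD.
    print(accumulated_degree_days([20.6] * 20))  # 412.0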
Spectral measuring
Spectral measurements of ten post feeding L. sericata from each of the 16 treatments were taken using an ASD (Analytical Spectral Devices™, Boulder, CO) LabSpec 4 benchtop analyzer spectrometer. All measured larvae were washed with deionized water, patted with filter paper and finally patted with dry filter paper to dry them before measurement. Following the measurements, the larvae were placed back into the treatment container. Measurements were completed in a blackened laboratory to ensure that measurements were of the larvae and not of interfering reflective surfaces. All surfaces and instruments were painted with matte black paint and the only light in the room was that of the light source. The minimal light from the turned-away computer screen and from under the door was consistent and negligible with respect to the measurements.
Each treatment was removed from the environmental chamber once daily beginning at noon and 10 insects were measured from each treatment. Point measurements were taken from the anterior, middle and posterior regions of the washed insect. Calibration using a Spectralon™ panel was completed before starting and after every five to seven measurements. A Spectralon™ panel is a pure diffuse reflectance standard and is the baseline against which all measurements were compared. A black reference was completed each time with the optimization of the spectrometer. Data files were collected with RS3™ software, the program specific to ASD spectrometers. ViewSpec Pro™ was then used to convert the files to text files. Mathworks™ Matlab scripts were then used to transfer the files and organize them by day, meat type and region of measurement for statistical analyses, together with the fdaM (functional data analysis Matlab) tools (http://www.psych.mcgill.ca/misc/fda/downloads/FDAfuns/).
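The original file handling was performed in Matlab; purely as an illustration of how exported text spectra might be grouped by day, meat type and body region, a Python sketch is given below. The file-naming convention (e.g., 'day03_porkheart_posterior_07.txt', two columns of wavelength and reflectance) is an assumption for the example, not the authors' actual layout.

    import os
    from collections import defaultdict
    import numpy as np

    def load_spectra(directory):
        """Group exported reflectance spectra by (day, meat, region).

        Assumes hypothetical file names such as 'day03_porkheart_posterior_07.txt'
        containing two whitespace-separated columns: wavelength (nm), reflectance.
        """
        groups = defaultdict(list)
        for name in sorted(os.listdir(directory)):
            if not name.endswith(".txt"):
                continue
            day, meat, region, _ = name[:-4].split("_")
            data = np.loadtxt(os.path.join(directory, name))
            groups[(int(day[3:]), meat, region)].append(data[:, 1])
        # Stack replicate curves into one array per (day, meat, region) combination.
        return {key: np.vstack(curves) for key, curves in groups.items()}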
Functional model
The raw spectral reflectance observations X_i(w) across wavelength w, for insect i measured on day Y_i, were smoothed using a 6th-order B-spline basis while controlling roughness through a 3rd-derivative penalty to reduce the noise associated with the raw spectra [40]. Smoothing was performed using generalized cross-validation to ensure that the resulting smooth functions X_{i,smooth}(w) tracked the signal without succumbing to the minute level of noise in the reflectance data.
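As a rough sketch of this smoothing step, the Python function below fits a penalized B-spline (order 6, i.e. degree 5) to a single reflectance curve, approximating the third-derivative penalty by third-order differences of the basis coefficients. The original work used Matlab's fdaM tools with generalized cross-validation to select the penalty; here the basis size and penalty weight are fixed by hand and are illustrative only.

    import numpy as np
    from scipy.interpolate import BSpline

    def penalized_bspline_smooth(w, y, n_basis=30, order=6, lam=1.0):
        """Smooth one reflectance curve y(w) with an order-6 B-spline basis and a
        third-difference roughness penalty (a discrete analogue of penalising the
        third derivative)."""
        k = order - 1                                   # spline degree
        lo, hi = w.min() - 1e-6, w.max() + 1e-6         # pad so all points are interior
        inner = np.linspace(lo, hi, n_basis - k + 1)
        t = np.concatenate([np.full(k, lo), inner, np.full(k, hi)])
        B = BSpline.design_matrix(w, t, k).toarray()    # (n_points, n_basis) basis matrix
        D3 = np.diff(np.eye(n_basis), n=3, axis=0)      # third-order difference operator
        coef = np.linalg.solve(B.T @ B + lam * D3.T @ D3, B.T @ y)
        return B @ coef                                 # smoothed reflectance values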
The functional data approach avoids subjectively binning the reflectance across intervals of wavelength and instead treats an entire reflectance curve as a single functional observation. The functional data analysis approach assumes that reflectance varies smoothly with changes in wavelength, which coincides with the spectral leakage exhibited by frequency-domain correlation. Each observation was scaled so that its maximum spectral reflectance was one, and the data were also scaled to have an average reflectance value of zero between the 400 and 550 nm wavelengths.
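The two scaling steps described above amount to a few lines of arithmetic; the sketch below assumes each smoothed curve is stored on a common wavelength grid, and the order of the two operations is an assumption for illustration.

    def normalise_curve(w, refl):
        """Scale a smoothed curve so its maximum is 1, then centre it so that the
        mean reflectance between 400 and 550 nm is 0, as described in the text."""
        scaled = refl / refl.max()
        window = (w >= 400) & (w <= 550)
        return scaled - scaled[window].mean()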
The functional regression model, in which β(w) is the coefficient function for spectral measurements from insects raised on beef liver alone, is given below:

Y_i = β_0 + ∫ X_i(w) β(w) dw (1a)
+ Pork_i ∫ X_i(w) β_Pork(w) dw (1b)
+ Heart_i ∫ X_i(w) β_Heart(w) dw (1c)
+ Pork_i × Heart_i ∫ X_i(w) β_PorkHeart(w) dw (1d)
+ ε_i,

where Pork_i and Heart_i are indicators of the rearing substrate. The coefficient function β_Pork(w) allows for differences in the spectral measurement's contribution to the day of development due to changing from beef to pork, regardless of organ. The coefficient function β_Heart(w) allows for differences due to changing from liver to heart, regardless of meat type. The coefficient function β_PorkHeart(w) allows for an interaction in the differences due to changing from beef liver to pork heart. The model (1a-1d) has additive coefficient functions relative to a beef liver baseline. As such, when predicting day Y_i from spectral reflectance X(w) for a pork liver substrate, terms 1a and 1b are used; predicting the day for a beef heart substrate uses 1a and 1c; and when a pork heart substrate is used, all terms 1a-1d are applied.
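In practice the functional terms are evaluated numerically. The sketch below (in Python rather than the Matlab/fdaM used for the published analysis) shows one way the scalar-on-function model (1a-1d) could be discretised: the integrals are approximated on the wavelength grid, β(w) and its companions are expanded in a B-spline basis such as the one used for smoothing, and 0/1 indicators encode the pork, heart and pork-heart terms. The quadrature scheme and the simple ridge penalty are illustrative assumptions, not the authors' estimation procedure.

    import numpy as np

    def functional_design(X, w, basis_matrix, pork, heart):
        """Build the design matrix for the scalar-on-function model (1a-1d).

        X            : (n_insects, n_wavelengths) smoothed, normalised reflectance
        w            : (n_wavelengths,) wavelength grid in nm
        basis_matrix : (n_wavelengths, n_basis) B-spline basis used to expand beta(w)
        pork, heart  : (n_insects,) 0/1 indicators of the rearing substrate
        """
        dw = np.gradient(w)                        # quadrature weights for the integral
        Z = (X * dw) @ basis_matrix                # rows approximate integral of X_i(w) B_j(w) dw
        blocks = [Z,                               # 1a: beef-liver baseline beta(w)
                  Z * pork[:, None],               # 1b: beta_Pork(w)
                  Z * heart[:, None],              # 1c: beta_Heart(w)
                  Z * (pork * heart)[:, None]]     # 1d: beta_PorkHeart(w) interaction
        return np.hstack([np.ones((X.shape[0], 1))] + blocks)

    def fit_day(X, w, basis_matrix, pork, heart, day, ridge=1e-3):
        """Ridge-penalised least squares for the basis coefficients of each beta(w);
        a roughness penalty could be added to the normal equations in the same way
        as in the smoothing step."""
        D = functional_design(X, w, basis_matrix, pork, heart)
        return np.linalg.solve(D.T @ D + ridge * np.eye(D.shape[1]), D.T @ day)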
The goal was to predict the development day based on the spectral reflectance curves and to determine whether the reflectance is affected by changes in the meat type and organ used for rearing the insects. A test for the interaction effect of changing from beef liver to pork heart was performed by testing the null hypothesis that β_PorkHeart(w) = 0 for all wavelengths w. Regardless of whether or not there was an interaction effect, the main effect of moving from beef to pork, irrespective of organ type, can be tested with the null hypothesis that β_Pork(w) = 0 for all wavelengths. Similarly, the main effect of moving from liver to heart can be tested with the null hypothesis that β_Heart(w) = 0 for all wavelengths. Finally, a test of the significance of the reflectance in estimating the day of development can be performed by testing the null hypothesis that β(w) = 0 for all w.
All of the functional coefficients β(w) were modelled as 6th-order B-spline functions with a roughness penalty on their 3rd derivative to prevent unrealistic fluctuations in the reflectance effect across nearby wavelengths. The roughness penalties for the X(w) and β(w) were determined via cross-validation so as to avoid overfitting. The model in (1a-1d) is not a regression model with 2500 covariates per observation; instead, the smooth functional form of the reflectance curve is exploited to transform the reflectance measurements into a single functional covariate that happens to span 2500 wavelengths. The estimated coefficient functions β(w) highlight smoothly varying regions of the 800 reflectance curves, for each of the three body regions measured, that assist in estimating the day of development. All of the estimation uncertainty, from the initial reflectance smoothing to the estimation of the coefficient functions β(w), is carried forward into the confidence intervals and inference.
Results
Lucilia sericata raised at an average of 20.6 °C took a minimum of 20 days to complete immature development on beef liver, beef heart, pork liver and pork heart. The development stage reached and the accumulated degree days (ADD), with a 0 °C base temperature, are presented for each of the meat substrates in Table 1. An extra day was spent in the feeding third instar on each of the pork substrates compared with the beef substrates, but development to the adult stage took the same number of days: the insects raised on pork were in the intra-puparial period for nine days rather than ten. Although not measured, based on observation alone, the feeding larvae were smaller on the pork substrates than on the beef substrates but caught up in size to those feeding on the beef substrates with the extra day of feeding.
Examinations of the spectral measurements of the post feeding larvae raised on each of the substrates were made in relation to beef liver, as beef liver is a substrate that is used regularly to rear blow flies in laboratory research [34].
The functional regression model fits for the post feeding stage from measurements of the anterior end, midsection and posterior end are presented in Figs 1, 2 and 3, respectively. The measurements from the midsection and posterior end of the post feeding larvae outperform the spectral measurements from the anterior end for predicting the day within the post feeding stage. The actual day of post feeding development falls outside of the 95% prediction interval more often in the anterior measurements (Fig 1) than in the midsection and posterior end measurements (Figs 2 and 3) in the functional regression plots. Also, most days of development are clearly distinguished from each of the other days in the post feeding stage in the midsection and posterior measurements. In addition to the functional regressions, the mean squared error (MSE) indicates that the functional predictions across the meat types have the highest error for anterior end measurements (Table 2). The anterior end therefore has the least accurate prediction capability compared with the midsection and the posterior end of the larvae. Consistent with this, the overall percentages of true measurements falling outside of the 95% prediction interval for the anterior end, midsection and posterior end are 40.9%, 25.1% and 31.3%, respectively. The poor prediction capability of the anterior measurements is due to days one and five. This is evident when examining the anterior functional prediction plots in relation to the functional prediction plots of the midsection and posterior end: many of the true values that fall outside of the 95% prediction interval are from days one and five.
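The two summary quantities reported here, the mean squared error of the predicted day and the percentage of true days falling outside the 95% prediction interval, reduce to a few lines of arithmetic; the sketch below simply assumes arrays of observed days, predicted days and interval bounds as inputs.

    import numpy as np

    def prediction_summaries(observed, predicted, lower95, upper95):
        """Return (MSE, percentage of observations outside the 95% prediction interval)."""
        mse = np.mean((observed - predicted) ** 2)
        outside = (observed < lower95) | (observed > upper95)
        return mse, 100.0 * outside.mean()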
In addition to the body region findings, the post feeding larvae that were raised on pork heart have the lowest MSE and the lowest number of times that the hyperspectral measurement falls outside of the 95% confidence interval for each body region. The one exception is the midsection, where the percentage of true values falling outside of the 95% confidence interval was marginally lower (by 0.4%) for larvae raised on beef heart than for those raised on pork heart.
In a comparison between the observed day and the predicted day, and the uncertainty associated with the predicted day for each of the measured body regions, it is evident that days one and five provide the weakest predictions (Figs 4, 5 and 6). In the anterior end plots of predicted versus observed day (Fig 4), the predictions for days two, three and four are similar to the observed day, whereas the predictions for days one and five are least like the observed day, which is consistent with the mean squared error findings. In the midsection plots of predicted versus observed day (Fig 5), on most days the predicted day falls within the interquartile range and the median measurements predict the observed day. The predicted day based on midsection spectral measurements of post feeding larvae raised on beef heart for day three did not match the observed day within the interquartile range, but fell just outside it in the lower whisker, below the 25th percentile of measurements. For post feeding larvae raised on both beef and pork liver, the predicted day from the spectral measurements for day five falls outside of the interquartile range in the upper whisker, and so the observed day does not fall in the middle 50% of measurements. The median prediction based on posterior spectral measurements of the post feeding larvae raised on beef heart and liver and pork heart and liver was accurate for days one, two, three and four, but fell just outside the interquartile range in the upper whisker for day five (Fig 6). The predicted day falls closest to the observed day in the midsection and posterior measurements compared with the anterior measurements. Prediction of the development day was most accurate based on the models that examined insects raised on pork heart.
Based on the coefficient functions (Figs 7, 8 and 9), the ranges of wavelengths that are significant and contribute to the prediction for each meat type can be identified; these are highlighted with green vertical bands. The green bands are the regions of wavelengths where the null hypothesis that there is no regression effect for predicting the day of development is rejected at the 5% significance level, and the blue shaded areas in Figs 7-9 show the confidence intervals for the regression coefficient effects across wavelengths. The regression effect is particularly evident at wavelengths 350-800 nm, except for the anterior measurements, where the greatest contributions appear to fall between 900 and 1350 nm. Each of the regression coefficient functions has significant non-zero effect regions and, therefore, the null hypotheses for L. sericata raised on all the meat types were rejected at p ≤ 0.05 for at least some wavelength bands. The contributing wavelengths for each of the coefficient functions differ, in some cases only slightly, from each other when examining the effect of the organ type and the meat type and the interaction effect of different organ and meat type with pork heart.

Table 2. The fixed effect models for the hyperspectral measurements of the anterior end, midsection and posterior end of post feeding Lucilia sericata raised on beef heart (BH), beef liver (BL), pork heart (PH) and pork liver (PL).
Discussion
Lucilia sericata raised on beef liver and heart increased in size visibly faster than those raised on pork liver and heart. They fed for one day less on beef organs than they did on pork organs. The larvae feeding on the pork organs were noticeably smaller but increased in size with the extra day of feeding. These findings are very different from previous findings, which indicated that L. sericata grew faster on pork lung, liver and heart than on the same beef tissues [41]. Differences between the findings for the same species could be a result of geographically separate populations, as it is probable that the earlier research was performed on L. sericata trapped in the United Kingdom [42,43]. Genetic and phenotypic differences have been found in L. sericata from environmentally separate populations, and ecological differences may be a contributor [44]; a temperature and size relationship between strains was found in the studied populations [44]. Interestingly, adult L. sericata reared on beef or pork began emerging on the same day, and so the L. sericata raised on pork spent one less day in the intra-puparial period. The nutritional value of pork heart and liver does not explain the need for the extra day of feeding in comparison with the beef organs (Table 3), but fatty acids may. Fatty acids increase the oily consistency of the meat substrate [45], and the beef heart was noticeably oilier than the pork heart. The oily consistency of the beef and pork liver was not detectable because of the moist surface consistency of liver. Since beef cattle primarily have a grass diet, their vitamin E intake is higher, which raises their polyunsaturated fatty acid (PUFA) levels at slaughter above those of pork [45]. Without vitamin E in a ruminant's diet, however, oxidation of the fatty acids after slaughter is faster than in pork [45]. Vitamin E slows the oxidation of PUFAs and so maintains an oily consistency [45], which may have made it easier for the third instar larvae to break the surface of the beef substrates when feeding, compared with the larvae feeding on the pork substrates, accounting for the extra day of feeding. There was no delay with the earlier larval stages feeding on pork, probably because there was enough liquid protein in those first feeding days for the less developed mouthparts of first and second instar larvae.
One of the four replicates for each of pork liver and pork heart took longer (an extra day for pork heart and an extra two days for pork liver) for the adults to begin emerging. This was probably related to the much higher mortality observed in these slower-developing replicates.
The anterior measurement median predictions are consistent with the observed day for days two, three and four, but not for days one and five of the post feeding stage. These results support the finding of a higher mean squared error for the anterior measurements. The median prediction based on midsection spectral measurements of post feeding larvae raised on pork heart was, according to the box plots, most consistent with the observed day compared with development on the other meat and organ types. This is consistent with the mean squared error findings: the lowest mean squared error was observed with development on pork heart and spectral measurements of the midsection. The accurate median prediction for all days except day five from the posterior measurements is consistent with the low mean squared errors of the posterior end measurements for post feeding larvae raised on all the meat types. Posterior end measurements are usually superior to midsection and anterior end measurements when examining post feeding larvae [19,20]. The weaker day five prediction from the posterior end measurements would, however, explain the higher mean squared error subtotal for the posterior end measurements compared with the midsection measurements.
Based on the coefficient functions (Figs 7, 8 and 9), there are fewer significant wavelengths for pork heart, particularly in the midsection and posterior measurements, which can potentially explain the lower MSE for pork heart compared with the other meat types (Table 2). Surprisingly, the MSE is very slightly lower for the midsection measurements than for the posterior end measurements when predicting the day of development. Previous studies have found that prediction based on posterior measurements outweighed those of anterior and midsection measurements for P. terraenovae [20] and L. sericata [19] raised on veal liver and beef liver, respectively. The slightly lower MSE subtotal for the midsection is probably a result of the lower pork heart MSE and the lower percentage of times that the true value for beef heart fell outside of the 95% confidence interval (Table 2). The true value fell outside of the 95% confidence interval only 0.4% less often for the post feeding larvae raised on beef heart than for those raised on pork heart. From an overall perspective, the majority of the wavelengths at which measurements were taken do not contribute to the prediction, as their functional coefficients are not significant, and focus can remain on those wavelengths identified in Figs 7, 8 and 9. The spectral measurements from the midsection and posterior end of the L. sericata larvae were found to be superior for predicting the day within the post feeding stage compared with anterior measurements. This is probably a result of the ectodermal oenocytes, which produce cuticular hydrocarbons [46]. They are often located in the abdomen of the larvae in close proximity to the spiracles, but their location is species and stage dependent [47]. The cuticular hydrocarbons are then transported by lipophorin in the haemolymph to the remaining cuticle and fat body [47,48]. The oenocytes have been found to grow and form new variations with each moult [48,49], and so it is very probable that changes in the oenocytes result in changes to the cuticular hydrocarbons.
The anterior end of the larvae was found to be particularly poor for predicting the age of larvae, and there may be several causes for this. First, the anterior end was a smaller target, and the larvae had a tendency to move their anterior regions away from the fibre optic probe when being positioned and held still long enough to complete a measurement. Second, the cuticular hydrocarbons may not be as abundant, or may be delivered less precisely, to that region, since it is farthest from the oenocytes and the hydrocarbons require transport to reach it. Third, feeding has stopped upon entering the post feeding stage, and so there may no longer be a release of digestive enzymes, potentially laced with bacteria, onto the anterior end of the insect surface [15]. The day five prediction was probably least convincing because the insects were transitioning from the post feeding stage to the intra-puparial period and were therefore reducing the transport of cuticular hydrocarbons to the insect surface in preparation for apolysis. This would reduce the changes to the insect cuticle and make it more difficult to distinguish from the previous day, as seen in Figs 1, 2 and 3, where the blue prediction line somewhat blurs between the last two days; this is most evident in Fig 1, the anterior end measurements.
The experiments showed that the food substrate on which the insects are raised does have a minimal effect on the prediction of the day of development from spectral measurements. The functional regressions from each body region indicated that, when examining the effect of spectral measurements from insects raised on pork compared with those raised on beef, there is an effect on predicting the day within the post feeding stage. Similarly, when examining the effect of L. sericata spectral measurements from insects raised on heart compared with those raised on liver, there was also an effect. Day predictions within the post feeding stage were also affected when examining the interaction effect of both organ and meat type, pork heart in reference to beef liver.

Fig 7. The β(w) coefficients (y-axis) and contributing wavelengths to the coefficients of the linear regression covariate model for the spectral measurements of the anterior end of post feeding Lucilia sericata. The blue area represents the 95% confidence interval and the green bands indicate wavelengths where β(w) coefficients are significant. β(w) is the contributing β coefficient for spectral measurements from insects raised on beef liver alone. β_Pork(w) is the contributing β coefficient due to changing from beef to pork measurements regardless of organ. β_Heart(w) is the contributing β coefficient due to changing from liver to heart measurements regardless of meat type. β_PorkHeart(w) is the contributing β coefficient due to changing from beef liver to pork heart. https://doi.org/10.1371/journal.pone.0192786.g007
It is most probable that the differences in the cuticular hydrocarbons are due to differences in the food substrates, since diet has been shown to affect cuticular hydrocarbons in Drosophila spp. and ants [50-53]. There is a strong possibility that the fatty acids in the food substrates were affecting the cuticular hydrocarbon profile, since this has been reported to occur in the herbivorous mustard leaf beetle, Phaedon cochleariae (F.): mustard leaf beetles fed artificial diets of fatty acids showed changes to their straight-chain and methyl-branched cuticular hydrocarbons [54].
Based on the coefficient functions (Figs 7, 8 and 9), the significantly non-zero portions of the β_PorkHeart(w) function that contribute to the interaction effect are not as numerous as the contributing β(w) coefficients in the beef liver alone model. Also, for the meat and organ type effects, there are contributing β(w) coefficients that are missing from, or additional to, those contributing to the beef liver alone model. Hence the significance of different wavelength regions within all of the coefficient functions, across all of the body regions, shows that additional wavelengths of the spectral measurements contribute to differentiating the model when the meat type and organ choice change. The β coefficients indicate whether or not a significant relationship exists between wavelength and spectral reflectance for the measured insects, and also indicate at which wavelengths a significant relationship exists. Although the day predictions are accurate, the differences in β coefficients, and therefore the different contributing wavelengths, indicate why care must be taken when using spectral measurements to age larvae raised on different food substrates. This is particularly important when applying findings from different food substrates to casework.

Fig 8. The β(w) coefficients (y-axis) and contributing wavelengths to the coefficients of the linear regression covariate model for the spectral measurements of the midsection of post feeding Lucilia sericata. The blue area represents the 95% confidence interval and the green bands indicate wavelengths where β(w) coefficients are significant. β(w) is the contributing β coefficient for spectral measurements from insects raised on beef liver alone. β_Pork(w) is the contributing β coefficient due to changing from beef to pork measurements regardless of organ. β_Heart(w) is the contributing β coefficient due to changing from liver to heart measurements regardless of meat type. β_PorkHeart(w) is the contributing β coefficient due to changing from beef liver to pork heart. https://doi.org/10.1371/journal.pone.0192786.g008
Acknowledgments
We would like to thank the many volunteers that assisted with the spectral measuring and maintaining the Lucilia sericata colonies. We would also like to thank ASD Inc, a Panalytical company for awarding J. Warren the Alexander Goetz prize so that this research could be conducted.
Author Contributions
Conceptualization: Jodie A. Warren, Gail S. Anderson.
Data curation: T. D. Pulindu Ratnasekera, David A. Campbell.

Fig 9. The β(w) coefficients (y-axis) and contributing wavelengths to the coefficients of the linear regression covariate model for the spectral measurements of the posterior end of post feeding Lucilia sericata. The blue area represents the 95% confidence interval and the green bands indicate wavelengths where β(w) coefficients are significant. β(w) is the contributing β coefficient for spectral measurements from insects raised on beef liver alone. β_Pork(w) is the contributing β coefficient due to changing from beef to pork measurements regardless of organ. β_Heart(w) is the contributing β coefficient due to changing from liver to heart measurements regardless of meat type. β_PorkHeart(w) is the contributing β coefficient due to changing from beef liver to pork heart. https://doi.org/10.1371/journal.pone.0192786.g009
Use of a Hybrid Adeno-Associated Viral Vector Transposon System to Deliver the Insulin Gene to Diabetic NOD Mice
Previously, we used a lentiviral vector to deliver furin-cleavable human insulin (INS-FUR) to the livers in several animal models of diabetes using intervallic infusion in full flow occlusion (FFO), with resultant reversal of diabetes, restoration of glucose tolerance and pancreatic transdifferentiation (PT), due to the expression of beta (β)-cell transcription factors (β-TFs). The present study aimed to determine whether we could similarly reverse diabetes in the non-obese diabetic (NOD) mouse using an adeno-associated viral vector (AAV) to deliver INS-FUR ± the β-TF Pdx1 to the livers of diabetic mice. The traditional AAV8, which provides episomal expression, and the hybrid AAV8/piggyBac that results in transgene integration were used. Diabetic mice that received AAV8-INS-FUR became hypoglycaemic with abnormal intraperitoneal glucose tolerance tests (IPGTTs). Expression of β-TFs was not detected in the livers. Reversal of diabetes was not achieved in mice that received AAV8-INS-FUR and AAV8-Pdx1 and IPGTTs were abnormal. Normoglycaemia and glucose tolerance were achieved in mice that received AAV8/piggyBac-INS-FUR/FFO. Definitive evidence of PT was not observed. This is the first in vivo study using the hybrid AAV8/piggyBac system to treat Type 1 diabetes (T1D). However, further development is required before the system can be used for gene therapy of T1D.
Introduction
Type 1 diabetes (T1D) is characterised by the autoimmune destruction of pancreatic beta (β) cells, resulting in a lack of insulin secretion and hyperglycaemia [1]. Currently, a patient's blood glucose levels are controlled by multiple daily injections of insulin or by insulin pumps [2], and the development of fast- and long-acting insulin analogues has provided more physiological control than older insulins [3]. However, this approach also results in susceptibility to severe hypoglycaemia. In the present study, a liver-specific promoter (LSP) [32] was added to the AAV8 vector, which allowed systemic delivery by intraperitoneal (i.p.) injection to the mouse liver.
We transduced the livers of diabetic NOD mice via i.p. injections of AAV8-INS-FUR and AAV8 containing the β-cell transcription factor Pdx1 (AAV8-Pdx1), with the intention of inducing pancreatic transdifferentiation in the livers. Diabetic mice that received i.p. injections of AAV8-INS-FUR became hypoglycaemic, with abnormal responses to i.p. glucose tolerance tests (IPGTTs). In addition, expression of β-cell transcription factors was not detected in the livers, indicating that this approach was not able to induce β-cell transdifferentiation as anticipated.
In previous successful studies, the lentiviral vector system stably incorporated the INS-FUR gene into liver cells [9-11,25]. To investigate whether it was the episomal expression of INS-FUR from the AAV8 system that resulted in the absence of pancreatic transdifferentiation and the persistence of abnormal glucose tolerance, we also employed the AAV8/piggyBac system, which can mediate the transposition of transgenes into the host genome [33-35]. As the piggyBac system has shown sustained gene expression in adult mice [35], we hypothesised that the AAV8-INS-FUR-piggyBac system would result in somatic integration and long-term gene expression, thereby reversing diabetes in a similar manner to the lentiviral system [9,10,25]. Expression of INS-FUR resulted in euglycaemia and normal IPGTTs in the mice that received AAV8/piggyBac-INS-FUR and were subsequently subjected to FFO surgery. The INS-FUR gene and the pancreatic hormones somatostatin and pancreatic polypeptide were detected in the livers; however, wider evidence of pancreatic transdifferentiation was not seen. This is the first in vivo study using the hybrid AAV8/piggyBac system in an attempt to cure autoimmune T1D. However, whilst integration of the AAV8/piggyBac vector produced superior results compared with the episomal AAV8 system, it is apparent that the lentiviral system possesses a certain factor(s) that enables widespread pancreatic transdifferentiation to occur in the animal livers that was not seen with either AAV vector.
AAV Vectors
AAV vector constructs ( Figure S1) were prepared using a previously reported construct [36], where a powerful liver-specific promoter drives transgene expression. New constructs were built using an In-Fusion cloning kit (Takara-bio, Scientifix Pty Ltd., Clayton, Australia), where the GFP transgene was replaced by sequences encoding INS-FUR or murine Pdx1 with a downstream IRES. AAV vector stocks were produced by triple transfection of HEK 293 cells, as previously described [36]. The titre was acquired using real-time quantitative PCR (qPCR) (Table S1) [37]. The vectors were diluted with phosphate-buffered saline to the required concentration for injection. When combinations of vectors were used, the vectors were mixed and delivered in a single (i.p.) injection.
HIV/MSCV Lentiviral Vector
The HIV/MSCV (HMD) lentiviral vector, which expresses the enhanced green fluorescent protein (EGFP), has a HIV/murine stem cell virus (MSCV) hybrid long-terminal repeat as the promoter [38]. The vector was produced by calcium phosphate precipitation in 293T cells using conditioned medium, as previously described [9]. The culture medium was harvested 48 h after transfection and subjected to syringe and tangential flow filtration, followed by centrifugation to pellet the vector (50,000 × g, 2 h). Virus titre was determined by transducing 293T cells (5 × 10^5) with serially diluted vector stocks and quantifying numbers of EGFP-positive cells by flow cytometry, as previously described [2]. Viral replication-competency was also assessed by RT-PCR [9].
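Functional titres from this kind of limiting-dilution/flow-cytometry assay are commonly back-calculated as sketched below; the formula, the assumption of roughly one transduction event per GFP-positive cell at low multiplicity, and the example numbers are illustrative and are not values taken from this study.

    def transducing_units_per_ml(cells_seeded, fraction_gfp_positive, dilution_factor, volume_ml):
        """Estimate functional titre (TU/mL) from the fraction of GFP-positive cells,
        assuming approximately one transduction event per positive cell at low MOI."""
        return cells_seeded * fraction_gfp_positive * dilution_factor / volume_ml

    # Hypothetical example: 5e5 cells, 10% GFP-positive, 1:1000 dilution, 0.5 mL inoculum.
    print(transducing_units_per_ml(5e5, 0.10, 1000, 0.5))  # 1e8 TU/mL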
Transduction of Liver Tissue
Female NOD mice were obtained from the Animal Resources Centre, Perth, Australia, and were housed at the Ernst Facility, University of Technology Sydney, Sydney, Australia. The housing and experimental conditions complied with the Australian Code for the Care and Use of Animals for Scientific Purposes. Experiments were approved by the Animal Care and Ethics Committee, University of Technology Sydney (ETH17-1559). The mice received treatments after they had spontaneously developed diabetes (blood glucose levels ≥ 10 mmol/L for at least 3 consecutive days).
The vector dose used for each mouse was 5 × 10^10 vector genomes (vg). To study the effect of the AAV8-LSP system, the mice were divided into groups of seven and injected i.p. with AAV8 vectors expressing appropriate marker genes: AAV8-INS-FUR-mCherry or a combination of AAV8-INS-FUR-venus and AAV8-Pdx1 at equivalent doses. Untreated female diabetic and non-diabetic NOD mice were used as controls. To determine whether the FFO technique had a stimulatory effect on pancreatic transdifferentiation of the livers, the surgery was performed 7 days after the mice received i.p. injections of AAV8-INS-FUR-venus, in order to allow expression from AAV8-INS-FUR prior to performing the FFO technique.
To determine whether the lentiviral capsid/promoter combination was capable of stimulating pancreatic transdifferentiation in the livers expressing AAV8-INS-FUR-mCherry, a further group of diabetic NOD mice (n = 6) received 5 × 10^6 transduction units (TU) of HMD/MSCV-eGFP as an infusion via the portal vein during FFO surgery, 7 days after i.p. injections of AAV, in order to enable expression from the AAV vector to develop prior to injecting the lentiviral vector.
The AAV8/piggyBac vector system is comprised of two different vectors: a transposon vector carrying the INS-FUR-mCherry construct and a transposase vector, which works by a 'cut and paste' mechanism [33]. To determine whether FFO surgery could induce pancreatic transdifferentiation, FFO surgery was carried out 7 days after the diabetic NOD mice received i.p. injections of the AAV8/piggyBac-INS-FUR-mCherry (transposon and transposase doses were 3 × 10^10 vg and 3.5 × 10^10 vg, respectively). It was anticipated that the mild injury induced by the FFO surgery [9-11,25] would stimulate hepatocyte regeneration, thereby clearing the transposase to avoid the continuous excision and insertion of the transgene on the chromosomes.
Functional Analysis
Mouse body weights and blood glucose levels (BGLs) were monitored daily after the AAV8 treatments. IPGTTs were performed under anaesthesia after fasting the mice for 6 h with water ad libitum. For the IPGTTs, glucose was injected i.p. at a dose of 2 g/kg body weight. Blood was collected and glucose levels were measured at 0, 5, 15, 30, 60, 90 and 120 min after i.p. glucose injection. Human insulin in sera was quantitated using an Invitron Insulin ELISA Kit (IV2-102E, Invitron Ltd., Monmouth, UK), according to the manufacturer's protocol.
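The i.p. glucose dose of 2 g/kg translates directly into an injection volume once a stock concentration is chosen; the 20% (w/v) glucose stock in the sketch below is an assumption for illustration only and is not stated in the text.

    def glucose_injection_volume_ml(body_weight_g, dose_g_per_kg=2.0, stock_g_per_100ml=20.0):
        """Volume of glucose stock to inject i.p. for a glucose tolerance test."""
        dose_g = dose_g_per_kg * body_weight_g / 1000.0
        return dose_g / (stock_g_per_100ml / 100.0)

    # Example: a 25 g mouse at 2 g/kg needs 0.05 g glucose, i.e. 0.25 mL of a 20% stock.
    print(glucose_injection_volume_ml(25.0))  # 0.25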
Microscopic Analysis
Serial frozen sections (15 µm) of the livers were prepared and fixed using acetone. Mounting medium containing DAPI (Vector Laboratories, Burlingame, CA, USA) was applied to the fixed sections to visualise nuclei. Images were acquired using a fluorescent microscope and camera (Olympus BX60, Olympus Imaging, Macquarie Park, Australia). The excitation and emission ranges of the marker genes were as follows: 400-550 and 500-650 nm for venus, and 540-590 and 550-650 nm for mCherry, respectively.
Vector Copy Number Analysis
Livers were collected for analysis of vector copy number (VCN) at the end of the experiment. Finely diced tissue pieces (25 mg) were homogenized in 500 µL of lysis buffer (10 mM Tris-Cl (pH 8), 0.1 M EDTA (pH 8), 20 mg/mL RNase A) and incubated for 1 h at 37 °C. Proteinase K (Sigma-Aldrich, North Ryde, Australia) was then added at a final concentration of 100 µg/mL, and the digestion was continued overnight at 55 °C. DNA was extracted by adding phenol/chloroform/isoamyl alcohol (25:24:1) (Thermo Fisher Scientific, Macquarie Park, Australia). The mixture was then centrifuged (16,000 × g, 5 min), after which the aqueous phase containing the DNA was collected. This extraction process was performed twice and was followed by two extractions with chloroform/isoamyl alcohol (24:1). The DNA was precipitated with ice-cold 100% ethanol containing 1.7 M ammonium acetate and was washed with 70% ethanol. The DNA was dissolved in 10% TE buffer. The amount of DNA extracted from each of the samples was quantified using the Nanodrop spectrophotometer (Thermo Fisher, Macquarie Park, Australia).
The VCN for the mice that received AAV-INS-FUR-mCherry was quantified using primers and probes specific to the woodchuck hepatitis virus post-transcriptional regulatory element (WPRE) [39]. Quantitative PCR was carried out using Platinum® Taq DNA polymerase (Invitrogen/Thermo Fisher Scientific, Macquarie Park, Australia), as per the manufacturer's instructions. WPRE primer and probe concentrations were 0.8 and 0.2 µM, respectively (Table S1). The initial denaturation was carried out at 95 °C for 10 min (1 cycle), followed by 40 cycles of 95 °C for 15 s and 60 °C for 60 s. Fluorescence was acquired at 60 °C and all the analyses were performed on the Rotorgene 8000 system (Qiagen, Chadstone Centre, Australia). The VCN in all samples was normalised against a qPCR specific for mouse GAPDH [34]. All standards consisted of linearised plasmids. The details of the primer and probe (Sigma-Aldrich, North Ryde, Australia) sequences are presented in Table S1.
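Copy numbers from qPCR of this kind are typically interpolated from a standard curve of quantification cycle (Cq) against log10 copies of the linearised plasmid standard, and then normalised to the input DNA or reference gene. The sketch below illustrates that arithmetic with hypothetical Cq values; it is not the authors' analysis script.

    import numpy as np

    def copies_from_standard_curve(cq_standards, copies_standards, cq_samples):
        """Fit Cq = slope*log10(copies) + intercept to the standards, then invert it
        to estimate copy numbers for unknown samples."""
        slope, intercept = np.polyfit(np.log10(copies_standards), cq_standards, 1)
        return 10 ** ((np.asarray(cq_samples) - intercept) / slope)

    # Hypothetical 10-fold standard series and two unknown samples:
    cq_std = [15.0, 18.3, 21.6, 24.9, 28.2]
    copies_std = [1e7, 1e6, 1e5, 1e4, 1e3]
    print(copies_from_standard_curve(cq_std, copies_std, [20.0, 26.0]))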
The VCNs for the mice that received the combination of AAV-INS-FUR-venus and AAV8-Pdx1 were analysed in two steps. Firstly, to determine the total VCN, the copy number of the internal ribosome entry site (IRES) sequence was determined using the SYBR® Premix Ex Taq™ system (Takara Bio, Scientifix, Clayton, Australia). Each of the samples was analysed in duplicate. The reaction (25 µL) contained 0.4 µM final concentrations of each primer, 50 ng of DNA template and 12.5 µL of the 2x SYBR premix. The reactions were carried out at 95 °C for 30 s (1 cycle), followed by 40 cycles of 95 °C for 5 s, 60 °C for 20 s and 72 °C for 20 s. Fluorescence was acquired at 72 °C. In the next step, the copy numbers of INS-FUR and Pdx1 were analysed separately using the respective primers. The VCN for the transposon vectors in the mouse livers was determined by real-time qPCR. The VCN was expressed as vector copies/50 ng DNA.
Reverse Transcriptase Polymerase Chain Reaction (RT-PCR) Analysis
For RT-PCR analysis, liver and pancreas were collected at experimental end points and frozen in dry ice. Control pancreas and liver tissues were obtained from NOD mice that did not develop diabetes. Total RNA was extracted using the MaxWell® RSC instrument and the MaxWell® RSC Simply RNA Tissue Kit (Promega, Madison, WI, USA). RNA samples were treated with DNase I (Applied BioSystems, Thermo Fisher, Macquarie Park, Australia), according to the manufacturer's protocol. Reverse transcription was performed using the Tetro cDNA Synthesis Kit (Bioline, Everleigh, Australia) and random primers, as per the manufacturer's protocol. PCRs were performed using GoTaq Green PCR® Master Mix (Promega, Madison, WI, USA) with PCR parameters optimised for the amplification of the following genes: Beta-Actin, INS-FUR, Pdx1, NeuroD1, Nkx2.2, Nkx6.1, MafA, Pax6, P48, mouse Insulin 1 and Insulin 2, Glut 2, pancreatic polypeptide and somatostatin (Table S2). Primers were designed to cross intron-exon boundaries to avoid amplification of any residual genomic DNA.
Statistical Analysis
Data were analysed using GraphPad Prism 8 software (GraphPad Software, San Diego, CA, USA). Two-way ANOVA followed by Tukey's multiple comparison tests were performed to compare the BGLs during the IPGTTs and the weekly random BGLs of the treated mice with that of the control groups. The Mann-Whitney test was applied when comparing the vector dosages and the VCN between the experimental groups. For the mice that were transduced by a combination of INS-FUR and Pdx1, a paired t-test was applied when comparing the AAV8-INS-FUR-venus and AAV8-Pdx1 copy numbers. The differences were considered significant when p < 0.05.
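The unpaired and paired comparisons described here can be reproduced with standard library calls; the sketch below uses SciPy (the study used GraphPad Prism, and the two-way ANOVA with Tukey's tests is omitted here) together with hypothetical copy-number values that are for illustration only.

    from scipy import stats

    # Hypothetical vector copy numbers (copies per 50 ng DNA) for two treatment groups
    group_a = [3.1e5, 3.4e5, 3.2e5, 3.6e5]
    group_b = [2.2e4, 2.6e4, 2.4e4, 2.1e4]

    # Mann-Whitney test for unpaired comparisons (e.g., VCN between experimental groups)
    u_stat, p_unpaired = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

    # Paired t-test for INS-FUR versus Pdx1 copy numbers measured in the same livers
    ins_fur = [2.4e4, 2.1e4, 2.8e4, 2.3e4]
    pdx1 = [3.2e3, 3.9e3, 2.8e3, 3.5e3]
    t_stat, p_paired = stats.ttest_rel(ins_fur, pdx1)

    print(p_unpaired, p_paired)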
Microscopic Analysis
Immunofluorescent expression of the fluorophores mCherry and venus was examined in frozen sections of the transduced livers to assess transduction efficiency. Figure 1A shows expression of the mCherry marker gene (AAV8-INS-FUR-mCherry), and Figure 1B,C show DAPI-stained nuclei and a merged image, respectively, 9 weeks after initial transduction with the non-integrating AAV8 vector. Figure 1D-F shows images of the venus marker gene (AAV8-INS-FUR-venus) and DAPI-stained nuclei at the experimental end point of 9 weeks after initial transduction with the non-integrating AAV8 vector. Figure 1A-F indicates widespread hepatocyte transduction, as seen in our previous studies [39]. By comparison, normal liver tissue showed no expression of venus or mCherry (Figure 1G-I), with only the DAPI-stained nuclei evident. Likewise, expression of the AAV8/piggyBac-INS-FUR-mCherry transposon/transposase system was also extensive in the liver tissue (Figure 1J-L) at 15 weeks post-transduction.
Delivery of AAV8 Expressing INS-FUR ± Pdx1 Fails to Reverse Diabetes
In order to determine whether expression of INS-FUR alone would reverse hyperglycaemia in the diabetic NOD mice and establish normal glucose tolerance, the animals received an i.p. injection of the AAV8-INS-FUR-mCherry vector. The animals exhibited normalisation of BGLs on week 3, but became hypoglycaemic on week 5 (Figure 2A). The copy number of AAV8-INS-FUR-mCherry in the livers of these mice was 3.33 ± 0.18 × 10^5 copies per 50 ng of DNA (Figure 2C). At all the time points during the IPGTTs, the BGLs of the mice that received AAV8-INS-FUR-mCherry were lower than those of the diabetic mice (p < 0.05) (Figure 2B). During the IPGTTs, the BGLs of the mice that received AAV8-INS-FUR-mCherry were also lower than those of the non-diabetic control mice at 0, 5, 15, 30 and 120 min (i.e., all time points sampled excluding 60 and 90 min; p < 0.05) (Figure 2B). In an attempt to force pancreatic transdifferentiation, the β-cell transcription factor Pdx1 [13] (AAV8-Pdx1) was expressed in the livers together with the INS-FUR gene (AAV8-INS-FUR-venus). Mice that received the combination of AAV8-INS-FUR-venus and AAV8-Pdx1 remained hyperglycaemic (Figure 2A). At all the time points (0-120 min) during the IPGTTs, the BGLs of the mice that received the combination of INS-FUR and Pdx1 were not significantly different from those of the diabetic mice (Figure 2B). At the end point of the experiment, the mean AAV8-INS-FUR-venus copy number in the livers was 2.41 ± 0.37 × 10^4 copies per 50 ng of DNA, and the mean AAV8-Pdx1 copy number was 3.41 ± 1.12 × 10^3 copies per 50 ng of DNA (Figure 2C). A paired t-test showed that the AAV8-INS-FUR-venus copy number was significantly higher than the AAV8-Pdx1 copy number (p < 0.05) in the livers of the animals.
We have previously hypothesised that, since the liver and pancreas share the same endodermal origin, it is likely that the FFO procedure represents an insult to the liver that stimulates dedifferentiation of the hepatocytes to an immature phenotype. It is this dedifferentiation process that causes expression of the β-cell transcription factors, allowing pancreatic transdifferentiation to occur in the presence of insulin and a hyperglycaemic environment [9-11,25]. Given the previous efficacy of using the FFO surgical technique and the lentiviral delivery of INS-FUR alone to reverse diabetes, we attempted to induce pancreatic transdifferentiation and normalise the BGLs of diabetic mice by performing this procedure subsequent to the i.p. injection of AAV8-INS-FUR-venus. Unfortunately, BGLs were not normalised (Figure 3A). This group of mice had significantly lower VCNs (5.12 ± 1.06 × 10^4 copies per 50 ng DNA) (Figure 3C) compared with the mice that received i.p. injections of AAV8-INS-FUR-mCherry (3.33 ± 0.18 × 10^5 copies per 50 ng DNA) (p < 0.001) (Figure 2C). The higher BGLs and the lower AAV8 VCNs of the mice that received AAV8-INS-FUR-venus (i.p.) and FFO surgery, as compared with the mice that only received an i.p. injection of AAV8-INS-FUR-mCherry, supported the hypothesis that the FFO surgery may have induced tissue damage, leading to the regeneration of hepatocytes and, therefore, the reduction in AAV8 VCNs. Despite having high BGLs, during the IPGTTs the BGLs of the mice that received AAV8-INS-FUR-venus and FFO surgery were not significantly different from those of the normal controls (Figure 3B). To determine whether the lentiviral capsid/promoter combination was capable of inducing pancreatic transdifferentiation in the livers expressing INS-FUR, NOD mice received HMD/MSCV-eGFP as an infusion via the portal vein during FFO surgery, 7 days after receiving i.p. injections of AAV8-INS-FUR-mCherry. The BGLs of the mice were normalised on week 2, but they became hyperglycaemic from week 4 onwards (Figure 3A). The general health of the mice also deteriorated, and symptoms of chronic hyperglycaemia, such as polyuria and polydipsia, persisted, leading to termination of the experiment before IPGTTs were performed.
Reversal of Autoimmune Diabetes Using the AAV8/piggyBac-LSP-INS-FUR Vector System and FFO Surgery
The piggyBac transposition system, which allows for the stable expression of transgenes over time [33], was employed to determine whether the episomal (non-integrating) expression provided by the traditional AAV8 system was insufficient to stimulate pancreatic transdifferentiation in the mouse livers. Firstly, we examined whether the hyperglycaemia of the diabetic NOD mice could be normalised by injection of INS-FUR alone. The BGLs of the mice that received an i.p. injection of the AAV8/piggyBac-INS-FUR-mCherry without FFO surgery were reduced, but normoglycaemia was not reached (Figure 4A). The transposon and transposase copy numbers in the livers of the mice that received i.p. injections of the AAV8/piggyBac-INS-FUR-mCherry vector system were 1.89 ± 0.05 × 10^5 and 1.57 ± 0.05 × 10^5 copies per 50 ng DNA, respectively, and were not significantly different (Figure 4B). Interestingly, despite abnormal BGLs, the IPGTT results for the animals that received an i.p. injection of the AAV8/piggyBac-INS-FUR-mCherry without FFO surgery were not significantly different from those for the controls (Figure 5A). This was possibly related to this vector favouring integrated expression of INS-FUR. Alternatively, the constitutive expression of insulin may have reached a balanced level in response to rising glucose levels in these animals, as normal IPGTTs were also seen when the non-integrating AAV8 was used to deliver INS-FUR (Figure 3B).
In order to determine whether the FFO procedure had a stimulatory effect on pancreatic transdifferentiation of the livers and correction of hyperglycaemia, diabetic mice received the AAV8/piggyBac-INS-FUR-mCherry vector and FFO surgery 7 days later. These animals showed a reduction in BGLs at three weeks post-treatment that was then maintained at concentrations not significantly different from normal controls (experimental end point, week 15) (Figure 4A). Additionally, for animals that reverted to normoglycaemia, the BGLs during IPGTTs were not statistically different from values observed for the control mice (Figure 5B). Analysis of human insulin concentrations in sera obtained during the IPGTTs showed that the levels of human insulin for mice that had received AAV8/piggyBac-INS-FUR-mCherry and the FFO procedure peaked 15 min after glucose delivery and returned to baseline levels by 60 min (Figure 5C). These results indicated that the AAV8/piggyBac-INS-FUR-mCherry and the FFO procedure normalised BGLs for a significant period of time, with normal glucose tolerance on IPGTT, and human insulin peaked at levels seen in normal animals [40]. In the livers of the mice that received the AAV8/piggyBac-INS-FUR-mCherry system and FFO surgery, the copy numbers of the transposon (2.6 ± 0.05 × 10^5 copies per 50 ng DNA) and transposase (1.9 ± 0.03 × 10^5 copies per 50 ng DNA) were not significantly different (Figure 4B).
Figure 4 (partial caption): ... with FFO surgery (n = 7). (B) Transposon and transposase copy numbers of diabetic NOD mice that received i.p. injections of AAV8/piggyBac-INS-FUR or i.p. injections of AAV8/piggyBac-INS-FUR and FFO surgery. The results are expressed as the means ± SEMs.
RT-PCR Analysis
It can be seen from Figure 6 that expression of the β-cell transcription factors Nkx2.2, Nkx6.1, MafA and Pax6, and of mouse insulin 1, was not observed. The exocrine marker P48 was also not expressed (data not shown). Similarly, no evidence of β-cell transdifferentiation was observed when the piggyBac system was used to express INS-FUR; in this instance, only the INS-FUR was expressed. Use of the piggyBac-INS-FUR system in combination with FFO surgery resulted only in the additional expression of somatostatin and pancreatic polypeptide.
Discussion
Whilst treatment options for T1D are numerous, they are all limited in their long-term effectiveness [8] and, as a result, more innovative and efficacious ways to treat or cure T1D are urgently required. Both insulin gene therapy and the reprogramming of liver cells to a β-cell phenotype have been studied by many groups as potential options [41]. The liver is considered an appropriate choice for these studies, as the liver and pancreas share a close developmental origin and the liver has great regenerative capacity. These studies have largely centred on the delivery of insulin and insulin analogues and/or β-cell transcription factors to liver cells using viral vectors, each of which suffers from particular drawbacks. The most commonly used viral vectors are adenoviral vectors, which cannot provide long-term expression of genes and are immunogenic [42]. Retroviral vectors are limited by their inability to transduce non-dividing cells, and insertional mutagenesis was problematic in a clinical trial in a severe combined immunodeficiency patient [43]. Lentiviral vectors demonstrate long-term transgene expression but, as integrating vectors, may suffer from issues of insertional mutagenesis, although third-generation vectors have a much improved safety profile [44]. Non-integrating adeno-associated vectors show long-term expression and lack pathogenicity and immunogenicity, together with the ability to transduce liver tissues with high efficiency [31]. The AAV/piggyBac system is known to confer stable integration, and studies with AAV2/piggyBac in our laboratory have shown less frequent integration in intragenic regions in comparison with lentiviral vectors [34]; more importantly, the integrations were not found in the loci of genes associated with hepatocellular carcinoma [36].
The β-cell transcription factor Pdx1 has been shown to induce pancreatic transdifferentiation of liver tissue when delivered using adenoviral vectors [13,16], and to produce some improvement in hyperglycaemia when delivered to a humanized mouse model using an AAV2 vector [22]. However, in the current study, delivery of INS-FUR alone (AAV8-INS-FUR-mCherry) or INS-FUR together with Pdx1 (AAV8-INS-FUR-venus + AAV8-Pdx1) using the non-integrating AAV8 vector did not reverse hyperglycaemia, and there was no evidence of expression of the β-cell transcription factors that lead to pancreatic transdifferentiation. As noted in the methods, the mice received equal doses of AAV8-INS-FUR-mCherry, AAV8-INS-FUR-venus and AAV8-Pdx1, but the VCN of the INS-FUR-mCherry was significantly higher than that of the INS-FUR-venus at the conclusion of the experiments, and the VCN of the INS-FUR-venus was significantly higher than that of the Pdx1. We have much experience in quantifying the VCN by quantitative RT-PCR and are thus confident in the values presented. However, the differences in the VCN of the constructs cannot be attributed to the composition of the vectors (Figure S1) and, therefore, a definitive explanation for this observation is not possible. The 10-fold difference between the insulin vectors may be explained by the age of the mice. The NOD mice used in these experiments spontaneously developed diabetes from 12 to 26 weeks of age, and it is thus not possible to isolate a diabetic cohort that is of exactly the same age. The mice used in the early experiment with the INS-FUR-mCherry vector, which recorded VCNs 10-fold higher than those of the INS-FUR-venus vector, averaged 16 weeks of age, whereas the second group averaged 21 weeks of age, and there is evidence that the vectors may transduce young animals more efficiently [45]. The lower transduction efficiency of the Pdx1 vector may be due to some toxicity associated with the Pdx1 vector, whereby Pdx1-expressing cells are lost after vector transduction. Another possible scenario may involve immune reactions against the vector. It has recently been reported that significant barriers to effective AAV2/8-insulin gene therapy in NOD mice were caused by reactivation of anti-insulin autoimmune responses as well as immune reactivity against vector components [24]. The researchers found that the efficacy of AAV gene therapy in the NOD mouse was improved with anti-CD4 antibody treatment, indicating the involvement of T-helper cell subsets. Future studies in NOD mice should look more closely at the immunogenicity of the vector, which may also be age dependent, and consideration should be given to inducing diabetes with multiple low doses of streptozotocin (STZ) so that all experimental cohorts are of a similar age. This is the first study to utilise the AAV8/piggyBac system to deliver human insulin to diabetic NOD mice. We showed that i.p. delivery of the AAV8/piggyBac-INS-FUR vector significantly reduced the BGLs of spontaneously diabetic NOD mice, but did not completely reverse hyperglycaemia. By comparison, i.p. delivery of this vector, followed 7 days later by a surgical procedure that isolates the liver from the circulation (FFO), resulted in reversal of diabetes from week 3 to week 15 (experimental end point), without induction of hypoglycaemia and with restoration of normal glucose tolerance. Interestingly, in both circumstances delivery of the AAV8/piggyBac-INS-FUR vector resulted in normal glucose tolerance following a 6 h fast.
These results occurred without the expression of β-cell transcription factors and, therefore, without pancreatic transdifferentiation. These observations suggested that the integration of the INS-FUR gene alone was beneficial for the regulation of BGLs only if the FFO procedure was also used. This result was likely attributable to efficient integration of the INS-FUR construct (due to removal of a proportion of the transposase because of cell division), resulting in higher insulin production and reversal of hyperglycaemia. However, the integration of the INS-FUR gene induced by the AAV8/piggyBac system was insufficient to stimulate the liver-to-pancreas transdifferentiation seen with the use of the lentiviral system, because the necessary pancreatic transcription factors were not also expressed. This observation suggested that the FFO surgery and the presence of a certain element(s) in the HMD vector, which were not present in the AAV8 vector, were required to induce the transdifferentiation process when INS-FUR was delivered.
Pancreatic transdifferentiation that results in insulin storage and regulated secretion from storage granules is one gene therapy strategy under investigation to cure T1D. It is likely that for this to occur, a "pancreatic switch" must be activated [46]. This switch may involve expression of β-cell transcription factors [9][10][11]25], transient destruction of some liver tissue by the FFO delivery technique [9][10][11]25], and/or factors present in the second generation lentiviral vector [38]. In our previous studies, the lentiviral vector likely induced pancreatic transdifferentiation in certain lineage(s) of hepatic cells that displayed plasticity, such as oval cells or stem cells, and/or took advantage of their propensity to transdifferentiate into different cell types when stressed [47]. A study by Wang et al. [19] using STZ-diabetic mice indicated that the forced liver-to-pancreas transdifferentiation was not possible utilising AAV8 vector expression of Pdx1 and NeuroD1. The additional insult of an adenoviral vector that induced immune responses was required for pancreatic transdifferentiation, and some amelioration of the diabetic hyperglycaemia. Likewise, a study by Cerad-Esteban et al. [48] reported that the TALE homeoprotein, TGIF2, acts as a developmental regulator of pancreas versus liver fate in cell lines and primary rodent hepatocytes. The AAV-mediated delivery of TGIF2 first represses hepatic identity and initiates a 'switch' that turns on a pancreatic cell identity. We saw a similar pattern in our earlier study in NOD mice using the lentiviral vector to deliver INS-FUR, where there was significant upregulation of key β-cell transcription factors (Pdx1, NeuroD1 and Neurog3), and significant down regulation of hepatic markers (C/EBP-β, G6 PC, AAT and GLUI) at 7 and 10 days post-transduction of the livers, which was maintained until the experimental end point (150 days) [10].
Based on our work, it would appear that for AAV vectors to induce liver-to-pancreas transdifferentiation an additional factor(s), such as concomitant immune responses, a minor insult, or a developmental regulator, is required. A combination of the AAV8/piggyBac system and a cocktail of β-cell transcription factors may warrant future investigation [49]. The current study suggests that, with further development of the AAV vector system and a better understanding of the pancreatic transdifferentiation process, the integrating AAV8/piggyBac system may be useful to at least satisfy basal insulin requirements, and pancreatic transdifferentiation may not be required to achieve some advantageous clinical outcomes. Such outcomes may also be achieved with the use of inducible promoter systems such as the Tet-off system, which has been shown to regulate insulin delivered by an AAV8 system in diabetic NOD.Cg-Prkdcscid Il2rgtm1Wjl/SzJ mice [23]. Non-viral delivery mechanisms, such as insulin constructs in minicircle DNA [21], which resulted in glucose-regulated insulin production from rat livers, are a promising approach that avoids possible complications of viral vectors. Haematopoietic stem cell-mediated gene therapy can produce a tolerogenic environment for islets and prevent their destruction on transplantation by halting antigen-specific memory T-cell responses [50]. This is one of many other gene therapy technologies being examined to treat/cure T1D.
Supplementary Materials:
The following are available online at http://www.mdpi.com/2073-4409/9/10/2227/s1, Figure S1: AAV vector maps; Table S1: Primer and probe sequences used for the quantitation of vector and transcript copy number; Table S2: Primer sequences for the detection of target transcripts by RT-PCR.
MultiNEP: a multi-omics network enhancement framework for prioritizing disease genes and metabolites simultaneously
Abstract Motivation Many studies have successfully used network information to prioritize candidate omics profiles associated with diseases. The metabolome, as the link between genotypes and phenotypes, has accumulated growing attention. Using a ”multi-omics” network constructed with a gene–gene network, a metabolite–metabolite network, and a gene–metabolite network to simultaneously prioritize candidate disease-associated metabolites and gene expressions could further utilize gene–metabolite interactions that are not used when prioritizing them separately. However, the number of metabolites is usually 100 times fewer than that of genes. Without accounting for this imbalance issue, we cannot effectively use gene–metabolite interactions when simultaneously prioritizing disease-associated metabolites and genes. Results Here, we developed a Multi-omics Network Enhancement Prioritization (MultiNEP) framework with a weighting scheme to reweight contributions of different sub-networks in a multi-omics network to effectively prioritize candidate disease-associated metabolites and genes simultaneously. In simulation studies, MultiNEP outperforms competing methods that do not address network imbalances and identifies more true signal genes and metabolites simultaneously when we down-weight relative contributions of the gene–gene network and up-weight that of the metabolite–metabolite network to the gene–metabolite network. Applications to two human cancer cohorts show that MultiNEP prioritizes more cancer-related genes by effectively using both within- and between-omics interactions after handling network imbalance. Availability and implementation The developed MultiNEP framework is implemented in an R package and available at: https://github.com/Karenxzr/MultiNep
S1.2 Choice of denoising thresholds for S_E sub-networks
In step 2 of the MultiNEP framework, the enhanced disease-specific network S^{t,sym} needs to be further denoised. Because 3.3% and 3.5% of edges are present in the prostate and breast cancer S^0_g, 23.0% and 17.7% in the prostate and breast cancer S^0_m, and 2.3% and 1.6% in the prostate and breast cancer S^0_gm, we chose 5% and 30% as denoising thresholds for S^{t,sym}_g and S^{t,sym}_m, respectively. Since g-m interactions are much less studied, we kept more g-m interactions (15%) in S^{t,sym}_gm (Tables S2, S3).
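The thresholds above amount to keeping a fixed fraction of the strongest edges in each within-omics sub-network so that its density roughly matches the corresponding general sub-network. A minimal sketch of that kind of density-based denoising is shown below; the function name, the use of dense NumPy matrices, and the toy data are illustrative assumptions and do not reproduce the MultiNEP package code.

```python
import numpy as np

def denoise_by_density(S, keep_fraction):
    """Keep only the strongest `keep_fraction` of off-diagonal edges in a
    symmetric similarity matrix S; all other entries are set to zero.
    Illustrative sketch only -- not the MultiNEP implementation."""
    S = np.asarray(S, dtype=float)
    iu = np.triu_indices_from(S, k=1)            # upper-triangle edge weights
    weights = S[iu]
    n_keep = int(np.ceil(keep_fraction * weights.size))
    if n_keep == 0:
        return np.zeros_like(S)
    cutoff = np.sort(weights)[-n_keep]           # weight of the weakest kept edge
    mask = np.where(S >= cutoff, 1.0, 0.0)
    np.fill_diagonal(mask, 1.0)                  # keep self-similarities
    denoised = S * mask
    return np.maximum(denoised, denoised.T)      # re-symmetrise

# Example: keep 5% of gene-gene edges and 30% of metabolite-metabolite edges
rng = np.random.default_rng(0)
S_g = rng.random((50, 50)); S_g = (S_g + S_g.T) / 2
S_m = rng.random((10, 10)); S_m = (S_m + S_m.T) / 2
S_g_denoised = denoise_by_density(S_g, keep_fraction=0.05)
S_m_denoised = denoise_by_density(S_m, keep_fraction=0.30)
```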
S1.3 Data Processing
DF/HCC Prostate Cancer Cohort For metabolites, we followed the preprocessing pipeline as in Penney et al. [2021] implemented in the R package maplet Chetnik et al. [2021]. Briefly, we started with 222 metabolites detected from three different batches of experiments. We excluded 8 metabolites with more than 50% of missing, leaving us with 214 metabolites. After removing batch effects by correcting data to the run day median and performing quotient normalizations Dieterle et al. [2006], we imputed missing metabolites using K-nearest neighbor method Do et al. [2018]. We also excluded 11 metabolites that are not in the general network S 0 and ended up with 203 metabolites for analysis. For gene expressions, we used the robust multichip average (RMA) algorithm Irizarry [2003] from oligo R package Carvalho and Irizarry [2010] to normalize gene expressions and transformed to log2 scale, and only kept the probe with the largest variance for those with the same gene symbol annotation. We removed batch effects using 'removeBatchEffect' function from the R package limma Ritchie et al. [2015]. We ended with 29,054 genes where 18,009 of them are in the general network S 0 . Note that, MultiNEP does not require gene expression and metabolite data to be normalized.
GSE37751 Breast Cancer Cohort
There are 350 identified and pre-processed metabolites, among which 224 can be mapped in the general metabolite network. We downloaded the processed gene expression data and transformed it into log2 scale. We used the probes with the maximum expression variance across samples to represent gene expressions when a gene had multiple probes. There are 23,199 genes with expression profiles, among which 17,202 genes are also in the general network.

S1.4 Application using GSE37751 Breast Cancer Cohort

A multi-omics general network: A general multi-omics network S^0 was obtained from the STRING and STITCH databases. We kept only genes and metabolites that are also in the breast cancer disease omics profiles. Thus, the general network S^0 has 17,202 genes and 224 metabolites, with 10,368,398 g-g interactions, 8,862 m-m interactions, and 61,589 g-m interactions (Table S3).
Disease multi-omics profiles: The omics profiles of the GSE37751 Breast Cancer Cohort include DNA methylation, gene expression, and metabolome of fresh-frozen human breast tumors [Terunuma et al., 2013]. We only included 60 tumor and 47 normal-adjacent breast tissue samples (with 45 matched tumor and normal-adjacent pairs) with both gene expressions and metabolite abundances. The disease-specific similarity matrix E was constructed using 17,202 gene expressions and 224 metabolites of all 107 samples (60 tumor samples and 47 normal-adjacent tissue samples). Disease association scores v were generated using paired t statistics of the 45 matched tumor and normal-adjacent pairs.
Signal prioritization: We set λ_g = 0.05 and λ_m = 20 when applying MultiNEP to prioritize candidate breast cancer-related genes and metabolites. The 490 breast cancer (C0678222)-related genes from DisGeNET were used as gold standards. We evaluated model performance by comparing the numbers of breast cancer-related genes prioritized by MultiNEP and competing methods within the top-ranked 1 to 500 candidate genes.
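The λ_g and λ_m values rescale the gene–gene and metabolite–metabolite blocks relative to the gene–metabolite block before prioritization. The sketch below illustrates that block-wise reweighting on a combined similarity matrix; the block layout, function name, and toy matrices are assumptions for illustration only and do not reproduce the package internals.

```python
import numpy as np

def reweight_multiomics(S_gg, S_mm, S_gm, lam_g, lam_m):
    """Assemble a multi-omics similarity matrix with the gene-gene block
    scaled by lam_g and the metabolite-metabolite block scaled by lam_m,
    relative to the gene-metabolite block (illustrative sketch only)."""
    top = np.hstack([lam_g * S_gg, S_gm])
    bottom = np.hstack([S_gm.T, lam_m * S_mm])
    return np.vstack([top, bottom])

# Breast cancer setting quoted in the text: lam_g = 0.05, lam_m = 20
rng = np.random.default_rng(1)
S_gg = rng.random((100, 100)); S_gg = (S_gg + S_gg.T) / 2   # gene-gene block
S_mm = rng.random((8, 8));     S_mm = (S_mm + S_mm.T) / 2   # metabolite block
S_gm = rng.random((100, 8))                                  # gene-metabolite block
S_multi = reweight_multiomics(S_gg, S_mm, S_gm, lam_g=0.05, lam_m=20)
print(S_multi.shape)   # (108, 108)
```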
As shown in the top right panel of Figure 4, the general network, DiSNEP, and MultiNEP prioritized similar numbers of breast cancer-related genes within the top-ranked 1 to 500 genes. We investigated reasons for the similar performances in section S1.5. As in the prostate cancer cohort, we investigated the top-ranked 200 genes identified by MultiNEP and DiSNEP. MultiNEP and DiSNEP prioritized 59 and 58 breast cancer-related genes, respectively; 54 of these genes overlapped, 5 were uniquely identified by MultiNEP, and 4 were uniquely identified by DiSNEP. We calculated IRS for the 54 overlapping genes, the 5 genes uniquely identified by MultiNEP, and the 4 genes uniquely identified by DiSNEP, as in the application using the DF/HCC prostate cancer cohort, and observed similar patterns of results. Specifically, the 54 overlapping genes already have high IRS in S^0_g and S^0_gm, and can be identified by both DiSNEP and MultiNEP using either g-g interactions or g-m interactions. The 4 genes uniquely identified by DiSNEP have even higher IRS in S_g but a low IRS of 0.62 in S^0_gm. Without adjusting the relative contribution of g-g interactions to g-m interactions, DiSNEP can use their strong g-g interactions and overlook their weak g-m interactions to prioritize them with high ranks. Instead, MultiNEP lowered the relative contribution of their strong g-g interactions to their weak g-m interactions, and thus cannot identify them. The 5 genes uniquely identified by MultiNEP have a high IRS of 3.27 in S^0_g and an even higher IRS of 9.36 in S^0_gm, so placing more weight on their stronger g-m interactions relative to their g-g interactions results in higher ranks. The above results again confirmed the ability of MultiNEP to handle network imbalance and boost signal prioritization performance.
Sensitivity analysis using a different general network S^0: For sensitivity analysis, we obtained the general multi-omics network from PathwayCommons [Cerami et al., 2010], which has 17,609 genes, 108 metabolites, 924,601 g-g interactions, 174 m-m interactions, and 5,216 g-m interactions (Table S3). We set λ_g = 0.005 and λ_m = 200 for MultiNEP when using the PathwayCommons S^0. Similarly, MultiNEP consistently outperforms competing methods and identified more breast cancer-related genes.
We investigated why there is a bigger improvement when using the PathwayCommons S^0. To do so, we examined the 16 prostate cancer-related genes and 20 breast cancer-related genes identified only by MultiNEP (PC), but not by the other five methods (General Net (PC), DiSNEP (PC), General Net (SS), DiSNEP (SS), and MultiNEP (SS)), among the top 200 prioritized candidate genes based on DisGeNET. Using the 16 prostate cancer-related genes as an example, as shown in Table S5, these 16 genes have weak IRS in the STRING & STITCH S^0_g (IRS = 2.22) and S^0_gm (IRS = 3.07). The g-g and g-m interactions are not strong enough to help prioritize these genes using either DiSNEP (SS) or MultiNEP (SS). On the contrary, these 16 genes have stronger IRS in the PathwayCommons S^0_g (IRS = 3.88) and extremely strong IRS in S^0_gm (IRS = 24.52). These strong g-m interactions can only be used efficiently by MultiNEP (PC) after giving more weight to g-m interactions. Similar patterns can be observed in the 20 breast cancer-related genes uniquely identified by MultiNEP (PC).
S1.6 MultiNEP performance improvements with different omics profiles
As observed in Figure 4, with PathwayCommons, MultiNEP(SS) outperforms DiSNEP(SS) and General Net(SS) using either prostate cancer or breast cancer data. With STRING&STITCH, all three have similar performance, especially when using breast cancer data (Figure 4). We thus investigated rankings of IRS scores of genes that are also in STRING&STITCH or PathwayCommons, separating these genes into prostate/breast cancer-related genes based on the DisGeNET database and other noise genes. We considered rankings of these genes in terms of IRS. Recall that IRS are interaction ratio scores that measure whether individual features (genes/metabolites) have stronger/weaker interactions relative to the average. That is, we investigated (1) differences in rankings of cancer-related genes in the GeneralNET S^0 and in the DiSNEP S^{(1,1)}_{E,g}, which could reflect how much information the disease omics profiles contribute to enhancing the general network and could be used to understand differences in performance between GeneralNET and DiSNEP, and (2) differences in rankings of cancer-related genes in the DiSNEP S^{(1,1)}_{E,g} and in the MultiNEP S^{(λ_g,λ_m)}_{E,g}, which could reflect how much information the reweighting steps bring and could be used to explain differences in performance between DiSNEP and MultiNEP.
For item (1), the mean differences in rankings of cancer-related genes (GeneralNet − DiSNEP) are:
• for STRING&STITCH and prostate cancer: 3.84 (±1215); this suggests that these cancer-related genes rank higher in terms of IRS in DiSNEP after enhancement using omics data;
• for STRING&STITCH and breast cancer: −93.4 (±818); this suggests that these cancer-related genes rank lower in terms of IRS in DiSNEP after enhancement using omics data;
• for PathwayCommons and prostate cancer: 37.5 (±1251);
• for PathwayCommons and breast cancer: −12.1 (±988).
These results explain the bigger improvements of DiSNEP over GeneralNET in the prostate cancer data than in the breast cancer data. When comparing STRING&STITCH and PathwayCommons for the prostate cancer data, the change in ranking is bigger on average for PathwayCommons (mean = 37.5) than for STRING&STITCH (mean = 3.84), which explains the bigger improvements using PathwayCommons.
For item (2), these results explain the bigger improvements of MultiNEP over DiSNEP in the prostate cancer data than in the breast cancer data. When comparing STRING&STITCH and PathwayCommons for the prostate cancer data, the change in ranking is much bigger on average for PathwayCommons (mean = 182) than for STRING&STITCH (mean = 36.8). Overall, these results explain the bigger improvements using PathwayCommons than using STRING&STITCH in general, and the bigger improvements for the prostate cancer data than for the breast cancer data in general. They also explain why the performance of the three methods is similar in the top right panel of Figure 4, that is, for the breast cancer data using STRING&STITCH.
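As a rough illustration of how such ranking comparisons can be computed, the sketch below ranks features by IRS under two networks and reports the mean and standard deviation of the rank change for a set of disease-related features; positive values mean the features move up (towards rank 1) in the second network. The helper names and toy data are assumptions, not part of the MultiNEP package.

```python
import numpy as np

def rank_from_scores(scores):
    """Rank features by score, with rank 1 assigned to the highest score."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    ranks = np.empty(len(scores), dtype=int)
    ranks[order] = np.arange(1, len(scores) + 1)
    return ranks

def mean_rank_change(scores_before, scores_after, signal_idx):
    """Mean and SD of rank(before) - rank(after) for the signal features;
    positive values mean the signal features rank higher afterwards."""
    diff = (rank_from_scores(scores_before)[signal_idx]
            - rank_from_scores(scores_after)[signal_idx])
    return diff.mean(), diff.std()

# Toy example: 1000 genes, the first 50 treated as "cancer-related"
rng = np.random.default_rng(2)
irs_before = rng.random(1000)
irs_after = irs_before + 0.2 * (np.arange(1000) < 50)   # boost the signal genes
print(mean_rank_change(irs_before, irs_after, signal_idx=np.arange(50)))
```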
S1.7 Statistics of computational times
We use the DF/HCC prostate cancer data as an illustrative example to describe the computational times of MultiNEP and competing methods. The S^0 from the STRING and STITCH databases, with 18,009 genes and 203 metabolites, was used.

[Network summary table not recoverable. Table footnotes: * percentage of edges with non-zero weights out of all possible edges of a network with the given number of nodes; † edges with non-zero weights: median (25th percentile, 75th percentile).]
Figure S1: Simulation results comparing the extended DiSNEP using a multi-omics network to the original DiSNEP using a single-omics network. Dashed lines are average numbers of identified true gene signals out of the top-ranked 1 to 500 prioritized candidate genes (left panel), and average numbers of identified true metabolite signals out of the top-ranked 1 to 50 prioritized candidate metabolites (right panel), using the original DiSNEP with a single-omics network. Solid lines are the corresponding numbers when using the extended DiSNEP with a multi-omics network, out of the top-ranked 1-500 genes and the top-ranked 1-50 metabolites. All numbers are averaged over 100 simulations, with correlations between signal genes and signal metabolites set at ρ = 0.35.

Figure S2: Simulation results comparing the performance of MultiNEP S^0, which only reweights S^0, to that of MultiNEP, which reweights both S^0 and E. Displayed are average numbers of identified true signal genes, signal metabolites, and both (genes and metabolites) out of the top-ranked 1 to 500 combined features across 100 simulations, when correlations between signal genes and signal metabolites were set at ρ = 0.05, 0.2, 0.35, 0.5. We set λ_g = 0.05 and λ_m = 20 for both MultiNEP S^0 and MultiNEP.
An Economic Study of a Honeybee Breeding Project
Honey bee projects are characterized by the low capital required to start the project compared with that needed for other projects. The capital cycle is relatively fast, as honey is harvested about every four months, and the project also produces by-products of honey bee breeding such as wax, royal jelly, trade in bee packages, bee venom, bee glue (propolis), queens, medicines from bee products, and pollination of fruit trees that increases agricultural production, among others. Accordingly, the current study focused on the feasibility study of a honey beekeeping project, with a case study of the financial evaluation of apiary projects in Qalyubia Governorate, as an example of projects that young graduates can easily join and as one of the small and micro agricultural investment projects that have absorbed a considerable number of young graduates and reduced part of the unemployment currently existing in Qalyubia Governorate in Egypt, during the 2021/2022 agricultural season. The problem of the study is that, despite the multiple benefits of honey bee breeding projects, these projects have not received sufficient attention, extension guidance, or economic awareness from investors and decision makers. The research aims to raise the economic efficiency of some existing agricultural investment projects in Qalyubia Governorate in Egypt through financial and economic evaluation, as well as sensitivity analysis of honey production projects under assumptions that aim to achieve the maximum possible economic efficiency, in order to give investors a clear picture of the actual status of the project under study (a honey production project) and to judge the success of investment in the project so that the investor can invest his money as efficiently as possible. Published and unpublished data were obtained from official authorities from several sources, the most important of which is the Small Enterprise Development Agency in Qalyubia Governorate. Many other sources were also used to cover the elements of the research, including studies, reports, bulletins, and other materials related to the subject. Primary data were collected through 25 questionnaire forms from a deliberately selected sample of owners of honey bee breeding projects, chosen to study the feasibility of a honey bee breeding project as a case study of Qalyubia Governorate (an area selected because of the spread of honey apiaries there), in addition to the Internet and references specialized in the subject of the study. The most important results were that, at a discount rate of 15%, the project achieves a positive net present value estimated at 140,541.0868 pounds per year, the ratio of discounted cash inflows to discounted costs is greater than 1, and the discounted profitability index was estimated at about 1.44, which is greater than one.
This means that each invested pound generated a net return of 44 piasters, which exceeds the opportunity cost of this project, namely the interest rate on borrowing, estimated at about 16%. This indicates that the project has the capacity to recover the fixed capital, the variable production costs, and the operating costs (depreciation and maintenance) that were spent on it. Therefore, the study recommends that these types of projects continue to be financed according to this criterion.
Introduction
Honey bee projects are characterized by the low capital required to start compared with that required for other similar projects. Their capital cycle is also relatively fast, as honey production is obtained every four months. There are many secondary products of a honey bee breeding project, such as wax, royal jelly, trade in bee packages, bee venom, bee glue (propolis), malic acids, medicines from bee products, and the pollination of fruit trees, which increases agricultural production, among others. The research therefore focused on the feasibility study of the honey bee breeding project, with a focus on the financial evaluation of apiary projects in Qalyubia Governorate in Egypt, as one of the projects that young graduates can easily join, and as one of the small and micro agricultural investment projects that absorbed a significant number of young graduates and part of the unemployment currently existing in Qalyubia Governorate during 2021/2022.
Research Problem
Despite the many benefits of honey bee breeding projects, these projects have not received sufficient attention, guidance, and economic awareness from investors and decision makers. The study therefore asks the following question: are honey bee projects economically feasible from the point of view of the owner (the investor)?
The study seeks to increase the economic efficiency of some existing agricultural investment projects in Qalyubia Governorate through the financial and economic evaluation of these projects under assumptions aimed at achieving the maximum possible economic efficiency, in order to give investors a clear picture of the actual situation of the project under study and to enable the investor to invest his money as efficiently as possible.
Research Objectives
This study aims to evaluate the financial performance of a honey bee breeding project. It focuses on discounted criteria only, which take the time element into account, allow for the inflation rate, and recognise the time value of money, through the following objectives:
• Evaluating the investment costs, production costs, and maintenance and operation costs of the project.
• Evaluating revenues and the value of capital assets at the end of the project's lifespan.
• Calculating the financial evaluation indicators of the project in order to decide on the success of investment in the project, as well as analysing the sensitivity of the project.
Methodology
To achieve the objectives and obtain the results, the research relied on descriptive and quantitative economic analysis, which was used in evaluating the commercial and financial profitability of the honey bee breeding project.
The standards used in financial evaluation can be classified according to the inclusion of the time element into discounted and non-discounted standards.
Non-discounted criteria, which do not take the time element or the inflation rate into account; the most important of these are:
• Payback period criterion (PBP).
• Accounting rate of return on investment (ARR).
Discounted criteria, which take the time element and the inflation rate into account, i.e. the time value of the money unit; the most important of these criteria are:
• Benefit/cost ratio criterion (B/C).
Sources of Data
The study relied on published and unpublished data from official authorities, the most important of which is the Small Enterprise Development Agency in Qalyubia Governorate. Many other sources were also used to cover the research elements, including studies related to the research topic, as well as primary data collected through 25 questionnaires with owners of honey bee breeding projects, in addition to the Internet and references specialized in the subject of the study.
The Study Sample
A deliberate sample was chosen as an example of small agricultural projects to study the feasibility of honey bee farming projects in Qalyubia Governorate, which was selected because of the spread of honey bee breeding projects there.
Definition of Small Projects in Egypt
Small projects in Egypt have suffered from the lack of a clear and specific definition, due to differing perceptions among the planning, implementation, statistics, and financing agencies. With the issuance of Law No. 141 of 2004, called the Small Enterprise Development Law, a small enterprise is defined as every company or individual establishment that practices a productive, service, or commercial economic activity, whose paid-up capital is not less than fifty thousand pounds and does not exceed one million pounds, and in which the number of workers does not exceed fifty. As for micro-enterprises, the law defines them as every company or individual establishment that practices a productive, service, or commercial economic activity and whose paid-up capital is less than fifty thousand pounds. It is noted from this definition that the Egyptian legislator used the criteria of labour and capital in defining small projects, and this definition applies to the honey production project [1].
Technical and technological feasibility study for the honey bee breeding project.
The technical and technological methods that the project may need or use are identified in this part, together with the extent to which these means and methods are consistent with the technical and technological needs of the project and compatible with the project's circumstances. It also includes studying the availability of technical knowledge of these methods, their cost, the possibility of developing them, and their relationship to the nature of the product (honey). We must therefore not overlook the necessity of choosing methods that are compatible with local conditions, most feasible for application and development, and least reliant on complex technology.
Commercial and Financial Profitability Criteria for the Honey Bee Breeding Project
The financial feasibility criteria of projects can be used to judge the acceptance or rejection ("success or failure") of a particular project through a set of financial criteria, called investment criteria, which can be classified, according to whether or not the time element is taken into account, into two types: simple non-discounted criteria and measures, which do not take the time factor into the calculation [2]; and simple discounted criteria, which take the time factor into the calculation and therefore use the time value of money, a value that varies with the interest rate and the time period.
First: simple non-discounted criteria and measures:
• Rate of return on investment criterion.
• The simple criterion for calculating the payback period.
Second: simple discounted criteria and measures [2]:
• Net present value criterion (NPV).
• Benefit/cost ratio criterion (B/C).
• Capital recovery period criterion as a discounted criterion.
Commercial Profitability Evaluation Criteria under Conditions of Risk and Uncertainty for a Honey Bee Breeding Project
Sometimes the investment decision maker does not have sufficient information about the proposed alternatives, which makes investment in these alternatives uncertain and risky. Risk means unexpected fluctuations in investment returns, and the degree of risk increases as the degree of volatility increases; it is a relative measure of the volatility of the expected net investment return. Uncertainty means that the situations which may occur in the future, and the potential investment returns associated with them, cannot be predicted, and that the decision maker lacks sufficient historical information or data to make the investment decision [3].
Economic Evaluation of a Honey Bee Breeding Project
Honey was the first benefit of bees known to man, but there are many other bee products that are no less important than honey as a source of income for beekeepers, such as the production of bee packages, queens, royal jelly, and bee venom, the last two being important, in addition to honey, for their medicinal uses. Beyond this, another benefit of great importance to agriculture is the role played by the honey bee in pollinating the flowers of agricultural crops, and the resulting qualitative and quantitative increase in the yield per feddan, which has a positive impact on farmers and national income. The honey bee is considered one of the most important pollinating insects: about 80% of field and horticultural crops depend on it for pollination, and about fifty crops either depend entirely on bees for pollination or have their production increased by it. Generally, it can be said that honey bee breeding projects can contribute significantly to increasing agricultural production and thus national income. They also help to create good job opportunities, starting from manufacturing wooden hives and ending with marketing the project's various products [4].
Annual Production Capacity of the Project
The project's production capacity is based primarily on establishing an apiary of one hundred hives, which can produce:
• Annual honey production of good quality of about 1,750-2,000 kg.
• Annual production of about 150-250 bee packages.
• Annual beeswax production of about 300-320 kg.
• Annual royal jelly production of about 750-950 grams.
The Results
The project's main and secondary products and their specifications are as follows. First: honey production, which is one of the main objectives of the project. Second: producing bee packages [5], which is an important objective as it results either in an increase in the number of colonies in the apiary or in an increase in the project's income through their sale. Producing bee packages requires good experience, especially in queen rearing and dividing strong colonies. Package production can start during January and February, especially in the first period, when sources of nectar and pollen are available to support it. It is also possible to produce packages during the flowering period of citrus orchards, as this period is characterized by an abundance of swarms that can be caught, housed, and cared for, and during the last period of cotton abundance. Through the project, 150-250 packages can be produced annually.
Third: production of queens and royal jelly. This type of production can serve the field of package production and requires high experience and skill. Fourth: production of beeswax. Beeswax is produced and marketed for its commercial importance in treatment, industry, and other fields, with an average annual quantity of about 300-320 kg.
Fifth: creating new job opportunities. Annual revenue (sales): the project consists of one hundred bee colonies managed under the stationary (fixed-location) beekeeping system.
Investment costs of the project (values in EGP):
• Capital costs: 39,334
• Operating costs for one cycle (the cycle lasts four months): 19,300
• Total investment costs of the project: 58,634
Source: collected and calculated from the data of the study sample.
Investment costs were calculated as follows: capital invested in the project = initial investment + operating costs for one cycle (the cycle lasts four months). Total investment costs of the project = fixed costs + variable costs = 39,334 + 19,300 = 58,634 pounds.
The cost of operating one cell per year
Depreciation = (purchase price of the assets − residual value, taken as 10% of the asset value) / the number of years over which the asset is expected to be used (5 years) [6]. Depreciation = (39,334 − 3,933.4) / 5 = 7,080.12 pounds.
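The straight-line depreciation above (10% residual value, five-year useful life) can be reproduced with the short calculation below, using the study's capital cost of 39,334 EGP; the function name is illustrative.

```python
def straight_line_depreciation(asset_value, salvage_rate=0.10, useful_life_years=5):
    """Annual depreciation charge with a salvage (residual) value equal to
    salvage_rate * asset_value, spread evenly over the useful life."""
    salvage_value = salvage_rate * asset_value
    return (asset_value - salvage_value) / useful_life_years

annual_depreciation = straight_line_depreciation(39334)
print(round(annual_depreciation, 2))   # 7080.12 EGP per year
```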
How to choose the appropriate discount rate for the honey bee breeding project
The goal of the discount rate is to remove the effect of time on the cash flows of the project from the beginning of its implementation to its completion, with increased reliance on the interest rate determined by the Central Bank on loans. However, if the project costs are covered by the Small Enterprise Development Agency or by the project owners, the discount rate is estimated as follows [7]. Owned capital: variable costs for three cycles on average = 57,900 × 3 = 173,700 pounds. Discount rate = (owned capital × minimum required rate of return for the entrepreneur + borrowed capital × interest on the loan) / total capital = (61,833 × 14% + 40,000 × 16%) / 101,833 = 0.148, i.e. approximately 15%. The appropriate discount rate for this project is therefore 15%, compared with the opportunity cost available for investment in the community (investing in banks).
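The discount rate above is a weighted average of the cost of owned capital (the investor's minimum required return of 14%) and borrowed capital (the 16% loan rate). The sketch below reproduces the calculation with the figures quoted in the study; the function name is illustrative.

```python
def weighted_discount_rate(own_capital, own_required_return,
                           borrowed_capital, loan_interest_rate):
    """Weighted-average cost of the funds financing the project."""
    total_capital = own_capital + borrowed_capital
    return (own_capital * own_required_return
            + borrowed_capital * loan_interest_rate) / total_capital

rate = weighted_discount_rate(own_capital=61833, own_required_return=0.14,
                              borrowed_capital=40000, loan_interest_rate=0.16)
print(round(rate, 3))   # ~0.148, i.e. approximately 15%
```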
The discount rate is determined in light of the cost of available funds, i.e. the weighted average cost of financing; the discount rate equals the minimum weighted cost of financing, which represents the minimum return the owners demand on their funds invested in the project [8]. Discounted commercial profitability criteria for the honey bee breeding project: from the results in Table 17, the following discounted commercial profitability criteria are obtained. The criterion of the ratio of discounted revenues to discounted costs (the benefit/cost ratio at a discount rate of 15%): Benefit/Cost Ratio (B/C) = 461,496.3458 / 320,955.259 = 1.438 ≈ 1.44. It represents the ratio between the present value of revenues (benefits) and the present value of total costs according to the equation: ratio of revenues to costs = present value of revenues / present value of costs. Calculating this ratio gives one of three answers, by which the acceptance or rejection of the project is judged.
The first: the ratio of revenues to costs is greater than one; the project is accepted and we recommend its implementation.
The second: the ratio of revenues to costs is less than one; the project is rejected and we do not recommend its implementation.
The third: the ratio of revenues to costs equals one; acceptance or rejection then depends on the project owner, since the project will not achieve any economic return, although such projects are sometimes accepted when they have social returns.
According to the criterion of the ratio of current benefits to current costs at a discount rate of 15%, the ratio equals 1.44, which is greater than one. This means that every pound invested has generated a net return of 44 piasters. Therefore, we recommend continuing to finance these types of projects in accordance with this criterion.
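Using the discounted totals reported at the 15% discount rate, the benefit/cost ratio and the net present value can be checked with the snippet below; the function name is illustrative.

```python
def evaluate_project(pv_inflows, pv_outflows):
    """Benefit/cost ratio and net present value from discounted totals."""
    bc_ratio = pv_inflows / pv_outflows
    npv = pv_inflows - pv_outflows
    return bc_ratio, npv

bc, npv = evaluate_project(pv_inflows=461496.3458, pv_outflows=320955.259)
print(round(bc, 2), round(npv, 2))   # 1.44 and 140541.09 EGP per year
```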
To estimate the internal rate of return, the net present value is also calculated at a larger discount rate, here 20%, with the difference between the larger and the smaller discount rates being no less than 5%. The resulting internal rate of return means that the project remains feasible as long as the opportunity cost of investment is less than 17.7%.
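The internal rate of return is typically approximated by linear interpolation between a lower discount rate with a positive NPV and a higher discount rate with a negative NPV. The sketch below shows that two-point formula; the NPV at 20% used in the example is a hypothetical placeholder, since the corresponding table value is not reproduced here.

```python
def irr_by_interpolation(rate_low, rate_high, npv_low, npv_high):
    """Two-point (linear interpolation) approximation of the internal rate of
    return, valid when npv_low > 0 and npv_high < 0."""
    return rate_low + (rate_high - rate_low) * npv_low / (npv_low - npv_high)

# Hypothetical example: NPV of +140,541 EGP at 15% and -100,000 EGP at 20%
# (the second figure is a placeholder, not a value taken from the study's tables).
irr = irr_by_interpolation(0.15, 0.20, 140541.09, -100000.0)
print(round(irr * 100, 1))   # ~17.9% for these illustrative inputs
```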
Results of the economic evaluation of the honey bee breeding project: at a discount rate of 15% (Table 17), the project achieves a positive net present value estimated at 140,541.0868 pounds per year, the ratio of discounted cash inflows to discounted costs is greater than 1, and the discounted profitability index is estimated at about 1.44, which is greater than one. This means that each pound invested has generated a net return of 44 piasters, which exceeds the opportunity cost of this project, namely the interest rate on borrowing, estimated at about 16%. In addition, the project achieves a return of 28% on the funds invested by the investor (self-financing), where an economic return of at least 14% is required, or where funds are borrowed from the Social Fund for Development (currently the Small Enterprise Development Agency) at an interest rate of 10%, or from the bank at an interest rate estimated at 16%. The project was thus able to cover the loan and its cost (interest) and still leave the investor an additional profit of 28%, which is the difference between the best alternative opportunity of 16% (the bank) and the 44% return on investment in the project. Therefore, in light of the current results, the financing of this project can be considered a successful effort by the Social Fund for Development (currently the Small Enterprise Development Agency), and we recommend continuing to fund these types of projects in accordance with this criterion.
Sensitivity Analysis of the Honey Bee Breeding Project by Increasing Outflows at a Rate of 5% Annually
In this part of the study, we address the financial evaluation of the honey bee breeding project under conditions of risk and uncertainty. Like other agricultural projects, the honey bee breeding project is exposed to considerable risk and uncertainty. A sensitivity analysis was therefore conducted for the project at discount rates of 10% and 15%, for the case in which outflows increase by 5% annually while inflows remain constant, and for the case in which inflows decrease by 2% while outflows remain constant. Tables 19 and 20 show the project's profitability under a sensitivity analysis in which outflows increase by 5% annually while inflows remain constant, at discount rates of 10% and 15% (for example, the outflows of the first year increase to 141,204 pounds per year). The results of the sensitivity analysis (Tables 17, 19, and 20) show the project's sensitivity to a 5% increase in production costs: the 5% increase in the cost of raw materials used reduced the net present value from 140,541.0868 pounds per year (Table 17) to 124,493.32 pounds per year (Table 20), and reduced the profitability index from 1.44 to 1.37, which means a decrease in the profit earned on each invested pound from 44 piasters to 37 piasters, i.e. a decrease of 7 piasters for the 5% increase in costs. The investor is still left with an additional profit estimated at 21%, which is the difference between the best alternative opportunity of 16% (the bank) and the 37% return on investment in the project. Thus, in light of these results, the financing of this project can be considered a successful decision by the Small Enterprise Development Agency, provided that the loan installments and interest are paid annually. Sensitivity analysis of the honey bee breeding project when inflows decrease by 2% annually: in this part of the study, we address the financial evaluation of the project under conditions of risk and uncertainty by conducting a sensitivity analysis with inflows decreasing by 2% annually while outflows remain constant, at discount rates of 10% and 15%.
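Because both scenarios scale every year's flows by the same factor, they are equivalent to scaling the discounted totals before recomputing the NPV and the profitability index. The sketch below reproduces the study's sensitivity results from the base-case present values at the 15% discount rate; the function name is illustrative.

```python
def sensitivity(pv_inflows, pv_outflows, inflow_factor=1.0, outflow_factor=1.0):
    """NPV and profitability index after scaling discounted inflows/outflows
    uniformly (e.g. outflow_factor=1.05 for a 5% cost increase in every year)."""
    pv_in = pv_inflows * inflow_factor
    pv_out = pv_outflows * outflow_factor
    return pv_in - pv_out, pv_in / pv_out

base_in, base_out = 461496.3458, 320955.259                  # totals at 15%
print(sensitivity(base_in, base_out, outflow_factor=1.05))   # ~ (124493, 1.37)
print(sensitivity(base_in, base_out, inflow_factor=0.98))    # ~ (131311, 1.41)
```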
As noted, the honey bee breeding project, like other agricultural projects, is exposed to considerable risk and uncertainty. Tables 22 and 23 show the project's profitability under a sensitivity analysis in which inflows decrease by 2% annually while outflows remain constant, at discount rates of 10% and 15%. The results show how sensitive the project is to a 2% decrease in revenue (inflows) with outflows remaining constant (Tables 17 and 21): the 2% decrease in revenue reduced the net present value from 140,541.09 pounds per year to 131,311.16 pounds per year, and reduced the profitability index from 1.44 to 1.41, which means a decrease in the profit earned on each invested pound from 44 piasters to 41 piasters, i.e. a decrease of 3 piasters for the 2% decrease in revenues. These results show that the project has little sensitivity to a decrease in revenues, which means that the project is economically feasible at the present time, although its sensitivity to future changes will increase. The project was therefore able to cover the loan and its cost (interest) and still leave the investor an additional profit of 25%, which is the difference between the best alternative opportunity of 16% (the bank) and the 41% return on investment in the project. Therefore, in light of the current results, financing this project can be considered a successful effort by the Small Enterprise Development Agency, and we recommend continuing to finance these types of projects according to this criterion. In summary, the most important results were that, at a discount rate of 15%, the project achieves a positive net present value estimated at 140,541.0868 pounds per year, the ratio of discounted cash inflows to discounted costs is greater than 1, and the discounted profitability index is estimated at about 1.44, which is greater than one.
It means that each invested pound has generated a net return of 44 piasters, which exceeds the opportunity cost of this project, which is the interest rate on borrowing estimated at about 16%, which indicates that the project has the capabilities and ability to recover fixed capital, production costs (variable) and operating costs (depreciation, maintenance) that were spent on it.
-In addition, the project achieves a return of 28% on the money invested by the investor (self-financing), where an economic return of at least 14% is required, or where funds are borrowed from the Social Fund for Development (currently the Small Enterprise Development Agency) at an interest rate of 10%, or from the bank at an interest rate estimated at 16%. The project was thus able to cover the loan and its cost (interest), leaving the investor an additional profit of 28%, which is the difference between the best alternative opportunity of 16% (the bank) and the 44% return on investment in the project. -Thus, in light of the current results, the financing of this project can be considered a successful effort by the Small Enterprise Development Agency.
-The results of the sensitivity analysis of the honey beekeeping project also showed the project's sensitivity to a 2% decrease in revenue (inflows) with outflows remaining constant: this decrease reduced the net present value from 140,541.09 pounds per year to 131,311.16 pounds per year and reduced the profitability index from 1.44 to 1.41, which means a decrease in the profit earned on each invested pound from 44 piasters to 41 piasters, i.e. a decrease of 3 piasters corresponding to the 2% decrease in revenues.
-From the previous results, it is clear that the project has little sensitivity to a decrease in revenues, which means that the project is economically feasible at the present time, although its sensitivity to future changes will increase. -Therefore, this project was able to cover the loan and its cost (interest), leaving the investor an additional profit estimated at 25%, which is the difference between the best alternative opportunity of 16% (the bank) and the 41% return on investment in the project.
-Thus, in light of the current results, the financing of this project can be considered a successful work by the Social Fund for Development or (currently the Small Projects Development Authority).
Conclusion
The most important results were that, at a discount rate of 15%, this project achieves a positive net present value estimated at 140,541.0868 pounds per year, the ratio of discounted cash inflows to discounted costs is greater than 1, and the discounted profitability index for this project was estimated at about 1.44, which is greater than one.
This means that each invested pound generated a net return of 44 piasters, which exceeds the opportunity cost of this project, namely the interest rate on borrowing, estimated at about 16%. This indicates that the project has the capacity to recover the fixed capital, the variable production costs, and the operating costs (depreciation and maintenance) that were spent on it.
Therefore, the study recommends that these types of projects continue to be funded according to this criterion.
Table 5. Annual operating costs: production input costs and labour costs for the year (values in EGP). Source: collected and calculated from the data of the study sample.
Table 6. Cost of operating one cell per cycle and per year, costs per cycle for 100 cells, and operating costs per year (values in EGP). Source: collected and calculated from the data of the study sample.
The decrease in inflows in each year was calculated as 2% of that year's inflows:
• Year 1: 167,634 × 2/100 = 3,352.68 pounds; reduced inflow = 167,634 − 3,352.68 = 164,281.32 pounds.
• Year 2: 114,650 × 2/100 = 2,293 pounds; reduced inflow = 114,650 − 2,293 = 112,357 pounds.
• Year 3: 120,300 × 2/100 = 2,406 pounds; reduced inflow = 120,300 − 2,406 = 117,894 pounds.
• Year 4: 125,950 × 2/100 = 2,519 pounds; reduced inflow = 125,950 − 2,519 = 123,431 pounds.
• Year 5: 156,833.4 × 2/100 = 3,136.668 pounds; reduced inflow = 156,833.4 − 3,136.668 = 153,696.73 pounds.
From Tables 22 and 23, the internal rate of return on investment for this scenario = the smaller discount rate (10%) + the difference between the two discount rates (5%) × the present value of the net cash flow at the smaller discount rate / the combined net cash flows at the two discount rates = 10 + 5 × 152,231.98 / (152,231.98 + 131,311.16) = 10 + 761,159.9 / 283,543.14 ≈ 12.68%.
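The per-year reductions above are simply each year's undiscounted inflow reduced by 2%; the short loop below reproduces them from the yearly inflows quoted in the calculation.

```python
yearly_inflows = [167634, 114650, 120300, 125950, 156833.4]   # years 1-5 (EGP)

for year, inflow in enumerate(yearly_inflows, start=1):
    decrease = inflow * 0.02
    print(f"Year {year}: decrease = {decrease:.2f}, reduced inflow = {inflow - decrease:.2f}")
```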
Table - 2
. Shows capital costs, operating costs and total annual investment costs.Value per pound Collected and calculated from the data of the study sample Source:
Table - 3
. Shows the fixed capital of the honey bee breeding ( Value per pound) Collected and calculated from the data of the study sample Source:
Table - 4
. Shows the operating costs and variable capital for one cycle of four months and three cycles per year.( Value per pound)
cost of operating one cell per cycle Serial
Collected and calculated from the data of the study sample Source:
Table - 7
. Shows the principal of the loan, the interest, the total debt, and the annual installment for the honey bee breeding project (Value per pound) Source:Collected and calculated from the data of the study sample
Table - 8
. Shows the value of the asset.Depreciation rate, depreciation value, annual depreciation premium, residual value.)Value per pound( Collected and calculated from the data of the study sample Source:
Table - 9
. Shows the depreciation items for one cycle and the annual depreciation for the honey bee breeding project( Value per pound).Collected and calculated from the data of the study sample Table-10.Shows the operating costs for both production input costs and labor costs per cell, per cycle, and per year Source:
Table - 11
. Shows the cell productivity per cycle, the number of cells in the apiary, the average apiary production, the number of production cycles per year, and the average annual production (Value per pound) Collected and calculated from the data of the study sample Table-12.Shows the cell productivity per cycle, Number of cells in the apiary, Average production of the apiary, Number of production cycles per year.Collected and calculated from the data of the study sample Table-13.Shows the selling price in pounds, the unit hive revenue for the cycle, the hive revenue per year, the apiary's revenue per cycle, and the average apiary revenue per year(Value per pound)
:
Collected and calculated from the data of the study sample Table-14.showstheincomestatement, revenues, operating costs, depreciation, gross profit, interest, and net profit per year for the honey bee breeding project (Value per pound) Collected and calculated from the previous tables (from Table2to Table13 Source:
)
Table-15.Shows the annual cash inflows and outflows of the honey bee breeding project (Value per pound).Collected and calculated from the previous tables (from Table 2 to Table 13)
Table 14 :
shows the items of total inflows, which include (revenues, loan, capital recovery, residual value (scrap), and the items of total outflows, which include (investment costs, operating costs, debt service, interest, depreciation) and annual net flows.
Table 16. Shows the present value of the outflows, inflows, and net flows of the honey beekeeping project at a discount rate of 10%.
Columns: Years; Total inflows (EGP); Total outflows (EGP); Net benefits or net cash flow (EGP); Discount coefficient at a 10% discount rate; Present value of inflows (EGP); Present value of outflows (EGP); Present value of net benefits (EGP).
Net Present Value (NPV) criterion [9]: Net present value = present value of net cash inflows - present value of net cash outflows [9]. Net present value = 461,496.3458 - 320,955.259 = 140,541.0868 pounds. According to the net present value criterion, the project yields 140,541.0868 pounds, a positive value (at a discount rate of 15%), so the project is also economically feasible.
Internal Rate of Return (IRR) criterion: The internal rate of return of the honey bee breeding project is the rate at which the present value of cash inflows equals the present value of the initial investment. It is estimated according to the following relationship: internal rate of return = minimum discount rate + (largest discount rate - smallest discount rate) x net present value at the smallest discount rate / (sum of the absolute net present values at the largest and smallest discount rates).
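For readers who want to reproduce the discounting step behind the NPV figures above, the following minimal sketch (not the study's own spreadsheet; the cash-flow list is a placeholder) shows how annual net cash flows are discounted at a fixed rate.

```python
# Minimal NPV sketch; net_cash_flows[t] is the net flow at the end of year t+1.
def net_present_value(net_cash_flows, rate):
    return sum(cf / (1.0 + rate) ** (t + 1) for t, cf in enumerate(net_cash_flows))

# Placeholder flows purely to show usage; the project's actual flows are in Table 15.
example_flows = [-50000.0, 20000.0, 25000.0, 30000.0, 35000.0]
print(round(net_present_value(example_flows, 0.10), 2))
```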
Table 17. Shows the present value of the outflows, inflows, and net flows of the honey bee breeding project at a discount rate of 15%. Source: Collected and calculated from data in Tables 14 and 15.
Table 18. Shows the present value of the outflows, inflows, and net flows of the honey bee breeding project at a discount rate of 20%. Source: Collected and calculated from data in Tables 14 and 15.
Table 19. Shows the present value of the outflows, inflows, and net flows of the honey bee breeding project at a discount rate of 10% after increasing costs by 5%. Source: Collected and calculated from data in Tables 14 and 15.
Table 20. Shows the present value of the outflows, inflows, and net flows of the honey bee breeding project at a discount rate of 15% after increasing costs by 5%. Source: Collected and calculated from data in Tables 14 and 15.
Table 21. Shows the present value of the outflows, inflows, and net flows of the honey bee farming project at a 20% discount rate after a 5% increase in costs. Source: Collected and calculated from data in Tables 14 and 15.
Columns: Years; Total inflows (EGP); Total outflows (EGP); Net benefits or net cash flow (EGP); Discount coefficient at a 20% discount rate; Present value of inflows (EGP); Present value of outflows (EGP).
Table 22. Shows the present value of the outflows, inflows, and net flows of the honey bee breeding project at a 10% discount rate after a 2% decrease in revenue. Source: Collected and calculated from data in Tables 14 and 15.
Table 23. Shows the present value of the outflows, inflows, and net flows of the honey bee breeding project at a discount rate of 15% after a 2% decrease in revenue. Source: Collected and calculated from data in Table 14.
Table 24. Shows the present value of the outflows, inflows, and net flows of the honey bee breeding project at a discount rate of 20% after a 2% decrease in revenue.
|
2024-04-28T15:18:21.778Z
|
2023-10-31T00:00:00.000
|
{
"year": 2023,
"sha1": "ab7bd0a698b71f10a65424e2a64ac4dfbf9c7479",
"oa_license": "CCBY",
"oa_url": "https://arpgweb.com/pdf-files/jac9(4)553-565.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "ab6f635b2a5b899ab70c77b549cc9bdd3a034beb",
"s2fieldsofstudy": [
"Economics",
"Agricultural and Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": []
}
|
257711507
|
pes2o/s2orc
|
v3-fos-license
|
An image encryption algorithm based on a 3D chaotic Hopfield neural network and random row–column permutation
This study proposes a novel color image encryption algorithm based on a 3D chaotic Hopfield neural network and random row–column permutation. First, a 3D chaotic Hopfield neural network is proposed to produce the random sequence for generating the diffusion and permutation keys. Then, the rows and columns of the original image are randomly arranged according to the permutation key in the permutation process. Three subgraphs are formed by separating the R, G, and B components of the color image in the diffusion process. Each of the three subgraphs is split along the columns to form three parts; the left and middle parts are exchanged. Three diffusion keys are used to encrypt each of the three parts. Finally, the individually encrypted subgraphs are stitched together to obtain the final encrypted image. Simulation results using MATLAB and FPGA and security analysis demonstrate that the encryption scheme has good performance.
Introduction
In recent years, communication technology has made significant progress, and at the same time, the security of information distribution has been raised to a new level. Digital images are an important means of multimedia expression [1,2] and are widely used in clinical medicine, astronomy, inspection, and other fields. Consequently, image information transmission urgently needs a set of new, more stable, hard-to-crack image-encryption algorithms.
Chaos is a non-linear dynamical phenomenon that exists in a wide variety of natural fields [3][4][5][6], such as biology, meteorology, and economics. Interestingly, chaos is not pure disorder but rather an ordered state that does not possess periodic changes or other notable symmetrical features. One distinctive feature of chaotic systems is that they are extremely sensitive to initial values and parameters; the dynamics and values of the system can vary considerably for different initial values of the same parameters. These characteristics of chaotic systems are well suited to the needs of image-encryption algorithms [7,8], which is why many researchers have applied chaotic systems to image encryption in recent years. Wang et al. [9] proposed a new image-encryption algorithm based on iterating chaotic maps, using the pseudorandom sequence generated by a group of one-dimensional chaotic maps. Li et al. [10] used a 1-D chaotic tent map to generate a chaos-based key stream for image encryption. Lai et al. [11] proposed a novel image encryption based on the 2D Salomon map.
Many image-encryption studies now combine chaotic systems with other methods, such as DNA sequences [12][13][14][15] and diffusion-permutation [16][17][18]. Enayatifar et al. [15] proposed a novel image-encryption algorithm based on a deoxyribonucleic acid (DNA) masking hybrid model, a genetic algorithm (GA), and the logistic map. Chai et al. [19] designed an encryption algorithm based on a chaotic system and DNA sequence operations. Liu et al. proposed an image-encryption algorithm based on one-time keys and robust chaotic maps and designed a novel encryption algorithm based on spatial bit-level permutation and a high-dimension chaotic system in Refs. 20, 21, respectively. Chen et al. [22] proposed a complete cryptosystem built using Baker maps for image permutations, and Diab et al. [23] improved it.
In recent years, some research studies on artificial neural networks and their applications [24][25][26][27][28][29] have been widely discussed. With the creation of the first memristor [30], many researchers have used memristors to simulate synapses [31][32][33] between neurons in the human brain and to analyze the dynamical behavior [34][35][36][37][38] of artificial neuronal networks. The combination of chaotic systems and artificial neural networks has become a hot research topic nowadays [39,40], and due to the nonlinear characteristic of the Hopfield neural network model, this model is capable of generating abundant chaotic behavior and is often used by researchers to simulate the various dynamic behaviors of neurons in the brain [41]. Using the Hopfield chaotic neural network model, a random sequence can be generated, and the more random the generated random sequence, the better the encryption of the image. There are many works in the field of random number generation using chaotic models [42,43]. Wang et al. [44] proposed a novel encryption algorithm based on a new fractional-order chaotic system. Before our work, Wang et al. [45] proposed a new color image encryption scheme, which uses Hopfield chaotic neural networks to generate the self-diffusion chaotic matrix. Chen et al. [46] proposed a three-dimensional fractional-order discrete Hopfield neural network. Wu et al. [47] applied the Hopfield chaotic neural network together with a novel hyperchaotic system to propose a new color image encryption algorithm. The purpose of this study is to investigate a simple but efficient image-encryption algorithm based on a chaotic Hopfield neural network model. This study proposes an image-encryption algorithm based on a 3D chaotic Hopfield neural network and random row-column permutation. In the diffusion process, we separate the RGB components of the color image. Each component is split into three equal parts along the columns, and then the middle and left parts are swapped. Three different random sequences are obtained by the proposed chaotic Hopfield neural network to encrypt the three parts of each component. Finally, the encrypted R, G, and B components are combined into the ciphertext image. Most previous image-encryption algorithms [48][49][50][51] either encrypted the RGB components using the same set of random sequences or encrypted the RGB components using three different sets of random sequences separately. The association of each element of RGB in the relative position of the image is ignored, which would make the image easier to crack. Therefore, this study splits the RGB subgraphs separately and then uses different encryption sequences for each part of the subgraph. The experimental results are obtained by using MATLAB and FPGA. The extensive security analysis shows that the proposed algorithm improves encryption efficiency and has good security performance.
The rest of this paper is organized as follows. Section 2 describes the Hopfield neural network system. Section 3 presents the imageencryption and -decryption algorithm. Section 4 shows the simulation result of the image encryption and decryption. Section 5 analyzes the safety of this algorithm. Section 6 concludes this paper.
Hopfield neural network systems
The Hopfield neural network was proposed by the American physicist J. Hopfield [52]. In this study, a Hopfield neural network model of three neurons with self-feedback is adopted, in which the three neurons connect to and influence each other. The Hopfield neural network model is given in Eq. 1, where x_i(t) represents the state variable of the i-th neuron, i = 1, 2, ..., n, U_i is the activation function, and w = [w_ij]_{n×n} is the weight matrix.
In this study, the weight matrix [53] is expressed in Eq. 2; the weight matrix w was obtained through constant exploration and verification. The initial states of the system are x_1(0), x_2(0), and x_3(0). The system state changes continuously under the action of the weight matrix and the excitation function; after continuous iteration, the system gradually enters a chaotic state. The 3D chaotic Hopfield neural network of this study is given in Eq. 3, where x_1(t), x_2(t), and x_3(t) represent the state variables of the three neurons, respectively, and U_i is the activation function. Equation 3 is a system of first-order ordinary differential equations with the aforementioned weight matrix (2). The initial values are chosen as x_1(0) = -0.215 and x_3(0) = (mod(sum(p, 'all'), 10) + 1530)/1000, where sum denotes summation over pixel values, p represents the 3D array of the plaintext image, and mod denotes the remainder function. The phase diagrams of the system are shown in Figure 1, with initial values x_1(0) = -0.123, x_2(0) = -0.127, and x_3(0) = 1.530, respectively. The neural network system consisting of three neurons enters a chaotic state, with chaotic attractors present.
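As an illustration of how such a three-neuron system can be iterated numerically, the sketch below integrates a Hopfield-type model with a simple Euler step. The weight values and the tanh activation are assumptions made here for demonstration; the paper's actual weight matrix (Eq. 2) and activation U_i are not reproduced in this text.

```python
# Illustrative-only Euler integration of a three-neuron Hopfield-type system.
import math

W = [[ 2.0, -1.2,  0.0],
     [ 1.9,  1.71, 1.15],
     [-4.75, 0.0,  1.1]]   # assumed weights, purely for demonstration

def step(x, dt=0.01):
    # dx_i/dt = -x_i + sum_j W[i][j] * tanh(x_j), one Euler step
    return [x[i] + dt * (-x[i] + sum(W[i][j] * math.tanh(x[j]) for j in range(3)))
            for i in range(3)]

x = [-0.123, -0.127, 1.530]          # initial values quoted in the text
trajectory = []
for _ in range(20000):
    x = step(x)
    trajectory.append(tuple(x))
```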
To further confirm whether the system is in a chaotic state, we check whether a positive Lyapunov exponent exists [54]. With the weights fixed and the initial values of the system changed slightly, the Lyapunov exponent remains larger than zero, so the proposed Hopfield neural network can be considered a chaotic system [55].
Image-encryption and -decryption algorithms
In this section, the encryption process and decryption process of images are introduced.
Image encryption
Images are made up of pixels in electronics and are divided into color and grayscale images. Each pixel of a grayscale image contains only one pixel value, while every pixel of a color image consists of three pixel values of RGB. The grayscale image can be regarded as a two-dimensional array that contains the horizontal coordinates, vertical coordinates, and pixel value information for every pixel. The color image can be considered a three-dimensional array that contains the horizontal coordinates, vertical coordinates, and the RGB pixel values of every pixel. This study proposes a method to separate the RGB components of the color image and then perform the encryption operation. Equation 6 is used to separate the RGB component of the color image, which is shown as follows: where P represents the original image. PR, PG, and PB represent the R, G, and B components of the original image, respectively. This study proposes an image-encryption algorithm based on a 3D chaotic Hopfield neural network and random row-column permutation. Row-column permutation is the process of changing the position of the pixel without changing the value. The chaotic sequences are generated by the proposed chaotic Hopfield neural network for producing the permutation keys and diffusion keys. In the permutation process, the positions of the ranks of the pixels will be changed according to the permutation keys. Image diffusion is the process of changing the pixel values of RGB with the diffusion keys.
The chaotic sequences are obtained by continuously iterating the proposed 3D chaotic neural network system. The length of the intercepted chaotic sequences depends on the number of pixels in the image. The first 3,000 numbers, which are generated before the chaotic neural network system has stabilized and therefore exhibit poor randomness, are removed from the sequences. The intercepted sequences are kept within the interval [0, 255] by taking the absolute value, scaling by 10^15, and taking the remainder. The permutation keys and diffusion keys are both obtained from the chaotic sequences.
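A minimal sketch of this post-processing step, assuming the chaotic sequence is available as a plain list of floats:

```python
# Discard the transient, take absolute values, scale by 10**15 and reduce modulo
# 256 so the resulting key bytes fall in [0, 255], as described above.
def to_key_stream(chaotic_sequence, n_keys, transient=3000):
    usable = chaotic_sequence[transient:transient + n_keys]
    return [int(abs(v) * 10**15) % 256 for v in usable]
```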
The encryption algorithm in this study consists of two main steps: random row-column permutation and image diffusion. Details of the process can be described as follows: First, the rows and columns of the image are distorted through the permutation keys. Next, the RGB components of the image are separated by the method shown in Eq. 6. Each subgraph is formed by dividing the RGB components into three parts along columns and exchanging the middle and left part. The three parts of the subgraphs are encrypted with three different sets of diffusion keys. Finally, the subgraphs are synthesized to yield the encrypted images. The image encryption flowchart is shown in Figure 2
Permutation process
The process of image permutation is to change the position of the image pixels without changing their values. First of all, the chaotic sequences are generated by the 3D chaotic Hopfield neural network (3) to produce the permutation keys, which are combined with increasing sequences of the length and width of the corresponding image to form 2 × M and 2 × N key pairs, respectively. Then, the permutation is achieved by the determinant transformation of the original image through the key pairs.
Step 1: The length M and width N of the image are obtained first. Then, the chaotic sequences x_1(i) and x_2(j) (i = 1, 2, ..., M and j = 1, 2, ..., N) are generated through the 3D chaotic Hopfield network system, respectively.
Step 2: The permutation keys are obtained from the chaotic sequences, and the expression of the generating function is as follows: RandM(i) and RandN(j) (i = 1, 2, ..., M and j = 1, 2, ..., N) represent the permutation keys of the rows and columns, respectively, floor is the downward-rounding function, and x_1(i) and x_2(j) are the chaotic sequences.
Step 3: Duplicate numbers are discarded when the permutation keys are obtained. For each number stored in RandM(i) or RandN(j), the corresponding position variable i or j increases by 1. This step is repeated until i and j have reached the values of M and N, or the chaotic sequences have been exhausted.
Step 4: The numbers in RandM(i) and RandN(j) are complemented, because there is no guarantee that all non-repeating numbers from 1 to M or 1 to N will be taken; numbers that do not yet exist in the arrays are found and added until the arrays are filled. Step 5: The key pairs are formed from the permutation keys and the increasing sequences. The generating function of the key pairs is as follows: specifically, Mchange and Nchange are 2 × M and 2 × N arrays, respectively. The first row is an increasing sequence of 1~M or 1~N, and the second row is the sequence RandM(i) or RandN(j), respectively. Ultimately, the mappings formed by the key pairs perform the permutation.
Step 6: The rows and columns of pixels are randomly permuted to obtain the permuted image. The outline and main information of the permuted image are obscured after this process. However, some plaintext image information can still be captured by illegal hackers, and the correlation between adjacent pixel points remains high. To make the encryption system work better, the diffusion process is performed.
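The following sketch mirrors Steps 1 to 6 using 0-based indices. Because Eqs. 7 and 8 are not legible in this text, the mapping from a chaotic value to an index (the floor of the scaled absolute value modulo M) is an assumption; the de-duplication and complement steps follow the description above.

```python
# Build a permutation key of length m from a chaotic sequence (Steps 2-4), then
# apply row and column permutations to an image stored as a list of rows (Step 6).
def permutation_key(chaotic_seq, m):
    key, seen = [], set()
    for v in chaotic_seq:                      # derive indices and de-duplicate
        idx = int(abs(v) * 1e6) % m            # assumed mapping (Eq. 7 not legible)
        if idx not in seen:
            seen.add(idx)
            key.append(idx)
        if len(key) == m:
            break
    for idx in range(m):                       # complement any missing indices
        if idx not in seen:
            key.append(idx)
    return key

def permute(image, row_key, col_key):
    rows = [image[r] for r in row_key]         # reorder rows, then columns
    return [[row[c] for c in col_key] for row in rows]
```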
Segmentation and diffusion process
The difference between image permutation and image diffusion is that image diffusion needs to change the original pixel value, which will completely distort the information of the whole image. The main information and details of the image are completely invisible, and the cryptographer cannot find any useful information.
Three subgraphs are generated by separating the R, G, and B components of the color image in the diffusion process. Each subgraph is split along the columns to form three equal parts. The primary information of daily photos or machine-made images is usually centered, so the left and middle parts are exchanged in this process. Three different chaotic sequences y_1, y_2, y_3 are obtained through the 3D chaotic Hopfield neural network system to produce the diffusion keys, which are used to encrypt the three parts, respectively. The experimental results obtained using MATLAB and FPGA prove that our proposed encryption algorithm has a good encryption effect. The detailed steps are as follows: Step 1: First, the RGB components of the permuted image are separated to form three subgraphs, which are divided along columns into three parts to exchange the left part and middle part. The three parts of the RGB components are generated by Eq. 10, where j_1 = 1, 3. Step 2: The diffusion keys are obtained by Eq. 11, where n_1 = 1, 2, ..., floor(N/3), n_2 = 1, 2, ..., N - floor(2N/3), and n_3 = 1, 2, ..., floor(2N/3) - floor(N/3).
Step 3: Three different diffusion keys are used to encrypt the three parts of the subgraph, which is shown as follows: where CLR, CCR, and CRR represent the left, middle, and right parts of the encrypted subgraph R, respectively. bixor(P, k) represents the XOR operation. The pixel value of the cipher image is formed by the XOR operation between the original pixel value and the diffusion key. The information in the plaintext image is completely hidden.
Step 4: Splicing the three encrypted parts of the subgraph together, we get where CR represents the R component of the ciphertext image. The positions of CCR and CLR are exchanged to obtain a better encryption effect.
Step 5: Steps 1 to 4 are repeated to realize the encryption of the G and B components.
Step 6: The encrypted RGB components are combined to form the final encrypted image; the three two-dimensional arrays are merged into one three-dimensional array, where C represents the ciphertext image after the diffusion process and cat represents the splicing function.
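A simplified sketch of the per-channel segmentation and diffusion (Steps 1 to 6), treating a channel as a list of rows of 8-bit values. The exact layout of the diffusion keys against the pixels is an assumption here (row-major, wrapped), since Eqs. 10 and 11 are not legible in this text.

```python
# Split a channel into three column blocks, exchange left and middle, XOR each
# block with its own key stream, swap back when splicing, and rejoin the rows.
def diffuse_channel(channel, keys):
    """channel: list of rows of 8-bit ints; keys: three lists of key bytes."""
    n = len(channel[0])
    a, b = n // 3, 2 * (n // 3)
    parts = [[row[:a] for row in channel],
             [row[a:b] for row in channel],
             [row[b:] for row in channel]]
    parts[0], parts[1] = parts[1], parts[0]    # Step 1: exchange left and middle

    def xor_block(block, key):
        i, out = 0, []
        for row in block:
            out.append([p ^ key[(i + j) % len(key)] for j, p in enumerate(row)])
            i += len(row)
        return out

    enc = [xor_block(part, key) for part, key in zip(parts, keys)]
    enc[0], enc[1] = enc[1], enc[0]            # Step 4: CLR and CCR are exchanged
    return [l + m + r for l, m, r in zip(*enc)]
```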
Image decryption
The image decryption is the inverse process of the image encryption. In this process, the ciphertext image is obtained at first. Then, the R, G, and B components of the ciphertext image are divided into three parts by columns, and then the middle part and left part are exchanged. After that, the inverse diffusion process is performed, and the RGB components are stitched to obtain the complete images. In image encryption, the column-transformed operation happens behind the row-transformed operation in the permutation process. So, the inverse operation is done in the inverse permutation process. Finally, the plaintext image is obtained. The flowchart of the image decryption is shown in Figure 3.
Simulation results
In this study, we selected four color images with a resolution of 512 × 512 for encryption.
Simulation results using MATLAB
The images with a clear outline, dense pixel distribution, uniform color, and uniform light and dark are selected as the test images because they are representative and better reflect the performance of the encryption algorithm. From Figure 4, we can see that the outline of the ciphertext image is invisible, and the pixels are equally distributed. So, it is almost impossible to obtain plaintext image information from the ciphertext image. The decrypted image is exactly the same as the plaintext image. It can be said that the encryption algorithm has excellent encryption performance.
Simulation results in FPGA
In this section, the FPGA-based implementation of the proposed image cryptosystem is introduced. We implemented FPGA debugging using a Xilinx Zynq-7000 series XC7Z020 FPGA chip, an AN9767 dual-port parallel 14-bit digital-to-analog converter module with a maximum conversion rate of 125 MHz, and Vivado 17.4. The image encryption module, which is used to generate the ciphertext image, consists of the image permutation and image diffusion modules. The image decryption module is the inverse of the encryption module and decrypts the ciphertext image into a plaintext image. The image display controller module is used to display both plaintext and ciphertext images.
The FPGA-based implementation result of the proposed image cryptosystem is shown in Figure 6. In Figures 6A, B, the plaintext image and the encrypted image are shown on the screen. The images on the right in Figure 6A and on the left in Figure 6B are the permuted images, and the image on the right in Figure 6B is the ciphertext image. Figure 6C is the decryption result of the FPGA-based implementation. Experiments have demonstrated no significant difference between the FPGA platform and MATLAB regarding the effectiveness of image encryption and decryption.
Performance analysis
This section is to verify the security and efficiency of the proposed encryption algorithm. The simulation test is performed on a computer using MATLAB R2020b.
Histogram analysis
The histogram can reflect the distribution of the overall pixel values of the image accurately and intuitively. There is only one component of the pixel value in a grayscale image, so a grayscale image has only one histogram. However, the pixel value of a color image consists of R, G, and B components; therefore, a color image has three histograms, representing the occurrence of R, G, and B pixel values, respectively. The histogram is a two-dimensional statistical map, where the abscissa represents each pixel value in the color image and the ordinate indicates the frequency of each pixel value appearing in the color image. The analysis of the histogram can capture information about the images. An encryption system with high security should make the histograms of the ciphertext image as uniform as possible. Four color images are selected for histogram analysis; the histograms of the plaintext images are shown in Figures 4E-H. The histograms of the R, G, and B components of the plaintext images show a mountainous pattern with an uneven distribution of the pixels, while the histograms of the ciphertext images are very uniform and the characteristics of the distribution of the image pixels are well hidden. It is difficult for a cracker to obtain any useful information from the histograms. It can be inferred that this encryption scheme has great security.
Correlation analysis
Correlation analysis reflects the degree of correlation of pixel values at adjacent positions in the image. The size of the correlation coefficient of adjacent pixel values in the ciphertext image reflects the effect of the encryption algorithm: the lower the correlation, the better the encryption effect of the ciphertext image obtained by the proposed encryption algorithm. The correlation coefficient of a good color image-encryption algorithm should be close to zero. The correlation analysis equations are as follows: E(u) = (1/n) Σ u_i, cov(u, v) = (1/n) Σ (u_i - E(u))(v_i - E(v)), and r = cov(u, v) / (√D(u) · √D(v)) with D(u) = (1/n) Σ (u_i - E(u))², where u_i and v_i represent adjacent pixel values in the image and n is the number of pixels sampled. E(u) and E(v) represent the expectations of u and v, cov(u, v) represents the covariance, and r is the correlation coefficient. The correlations are analyzed separately for the R, G, and B components of the color images, in four directions: horizontal, vertical, positive-diagonal, and negative-diagonal. Here, 10,000 pixels of the R, G, and B components are randomly taken. If the coordinate point of u_i is (x_i, y_i), then the adjacent coordinate point in the horizontal direction is set to v_i = (x_i + 1, y_i). Similarly, the adjacent coordinate point in the vertical direction is set to v_i = (x_i, y_i + 1), in the positive-diagonal direction v_i = (x_i + 1, y_i + 1), and in the negative-diagonal direction v_i = (x_i - 1, y_i + 1).
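A small sketch of this test for the horizontal direction, sampling random neighbour pairs from one channel (stored as a list of rows):

```python
# Sample adjacent horizontal pixel pairs and compute their correlation coefficient r.
import random

def horizontal_correlation(channel, samples=10000):
    h, w = len(channel), len(channel[0])
    u, v = [], []
    for _ in range(samples):
        x, y = random.randrange(h), random.randrange(w - 1)
        u.append(channel[x][y])
        v.append(channel[x][y + 1])
    n = len(u)
    eu, ev = sum(u) / n, sum(v) / n
    cov = sum((a - eu) * (b - ev) for a, b in zip(u, v)) / n
    du = (sum((a - eu) ** 2 for a in u) / n) ** 0.5
    dv = (sum((b - ev) ** 2 for b in v) / n) ** 0.5
    return cov / (du * dv)
```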
The correlation coefficient of the plaintext image is close to 1, which indicates that the correlation of the pixels in the plaintext image is extremely strong, whereas that of the ciphertext image is close to 0, indicating that adjacent pixels in the ciphertext image have almost no correlation. The correlation test in plaintext and ciphertext images is shown in Figure 7, which contains the distributions in the four directions. The ciphertext image shows an irregular distribution in all four directions, and the pixel values around each pixel point are arbitrarily random. In contrast, most of the points in Figures 7A, C, E, G lie around a straight line, indicating a significant correlation in the plaintext image.
The results of the correlation coefficient in different directions are shown in Table 1. We can see that the proposed scheme has a remarkable performance. Only one of the correlation coefficients obtained by the proposed algorithm is higher than others.
Analysis of information entropy
Information entropy is an index to evaluate the performance of the encryption algorithm: the higher the information entropy, the better the performance of the encryption algorithm. The information entropy is computed as H = -Σ_i P(i) log2 P(i), where P(i) represents the probability of occurrence of the pixel value i. The ideal entropy for the R, G, and B components of the color image should be equal to 8. The color Lena graph, Baboon graph, Pepper graph, and plane graph are chosen as the test images, which are encrypted by the proposed encryption algorithm. The information entropies of the R, G, and B components are analyzed by Eq. 16. The test results are shown in Table 2.
The table clearly shows the information entropies of the R, G, and B components of the encrypted images Cipher-Lena, Cipher-Baboon, Cipher-Pepper, and Cipher-plane. The entropies obtained by the proposed algorithm are close to the ideal value. From Table 3, we can see that most of the entropies are larger than those obtained by other algorithms. This feature prevents information leakage during the encryption process. So, we can infer that the proposed algorithm is significantly secure.
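A short sketch of this entropy check for one 8-bit channel given as a flat list of pixel values; a well-encrypted channel should score close to the ideal value of 8.

```python
# Shannon entropy of a pixel-value distribution, as in Eq. 16.
import math
from collections import Counter

def information_entropy(pixels):
    counts = Counter(pixels)
    total = len(pixels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```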
Analysis of PSNR and MSE
PSNR and MSE are used to describe the difference between the original and encrypted images. The greater the difference between the plaintext image and the ciphertext image, the better the performance of the encryption algorithm. PSNR is the peak signal-to-noise ratio, an index of the distortion between the plaintext and ciphertext images. The lower the PSNR value, the greater the difference between the plaintext and ciphertext images, and the better the encryption algorithm.
MSE is the mean squared error, which measures the cumulative squared error between the plaintext and ciphertext images. The larger the MSE, the better the encryption effect. The PSNR and MSE are defined as MSE = (1/(M × N)) Σ_i Σ_j [P(i, j) - C(i, j)]² and PSNR = 10 log10(255² / MSE), where P and C represent the plaintext and ciphertext images, respectively, and (i, j) stands for the position of each pixel.
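A minimal sketch of these two measures for a pair of single-channel images stored as lists of rows of 8-bit values:

```python
# MSE and PSNR between a plaintext channel p and a ciphertext channel c.
import math

def mse(p, c):
    h, w = len(p), len(p[0])
    return sum((p[i][j] - c[i][j]) ** 2 for i in range(h) for j in range(w)) / (h * w)

def psnr(p, c):
    return 10 * math.log10(255 ** 2 / mse(p, c))
```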
The comparison of PSNR and MSE among the proposed and other algorithms is shown in Tables 4, 5, respectively. Most of the PSNR and MSE indices of our proposed algorithm are superior compared to those of others. The results show that the proposed encryption scheme has better performance.
Sensitivity analysis
Sensitivity analysis consists of decrypting the encrypted image with keys whose initial values differ only slightly from the original keys, to see whether the encrypted image can still be decrypted correctly. The Lena plaintext image is encrypted using the original keys. Then, the ciphertext image is decrypted with pseudo-keys of three very close key values Y_1, Y_2, and Y_3, respectively, where x_1(0), x_2(0), and x_3(0) are the initial values of the keys. The three sets of pseudo-keys with slightly different initial values are used to decrypt the Lena ciphertext image. The results of decryption are shown in Figure 8: the plaintext image cannot be recovered correctly. Therefore, the encryption system proposed in this study satisfies the requirement of key sensitivity.
Analysis of key space
A good image-encryption algorithm must have the ability to withstand outside attacks. Therefore, the key space must be large enough to ensure the security of the encryption algorithm. The key space of an ideal image encryption is larger than 2^100.
The computational accuracy of the computer is about 10^15, and the compression rate CR is 10^5. In this study, the key generation process consists of the following: 1) the initial values x_1(0), x_2(0), and x_3(0) used for the chaotic Hopfield system iteration and the sampling time point, and 2) the chaotic sequences y_1(t), y_2(t), and y_3(t). So the key space is calculated by Eq. 19: 10^2 × 10^15 × 10^14 × 10^14 × 10^14 × 10^14 × 10^14 × 10^14 × 10^14 ≫ 2^100. (19) This shows that the encryption algorithm has a large enough key space to resist exhaustive attacks.
Conclusion
This study proposes a color image-encryption algorithm based on random row-column permutation and a 3D chaotic Hopfield neural network. The 3D chaotic Hopfield neural network is used to generate chaotic sequences to ensure the randomness of keys. After the permutation process, three subgraphs are formed by separating the R, G, and B components of the color image, and then, the subgraphs are cut along the columns for swapping the middle part and the left part. Three diffusion keys are produced through the chaotic sequence, and then the three parts of the subgraphs are encrypted separately. In this study, we consider the interrelationship of the pixel values of the RGB components in the plaintext image, so three sets of diffusion keys are used to encrypt the three parts of the split RGB subgraphs. This measure effectively reduces the interconnection of pixel values. Through extensive simulations and security analysis, the simulation results in MATLAB and FPGA show that the encryption algorithm has superior performance and high security.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.
Ethics statement
Ethical review and approval were not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.
|
2023-03-24T15:30:32.049Z
|
2023-03-22T00:00:00.000
|
{
"year": 2023,
"sha1": "d0b999ce24192c3a78dcf31fab6fc1a6c8adb805",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fphy.2023.1162887/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "9f9fff9259e60331e94e9d737e67b016e74538eb",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
}
|
202558712
|
pes2o/s2orc
|
v3-fos-license
|
Addressing Algorithmic Bottlenecks in Elastic Machine Learning with Chicle
Distributed machine learning training is one of the most common and important workloads running on data centers today, but it is rarely executed alone. Instead, to reduce costs, computing resources are consolidated and shared by different applications. In this scenario, elasticity and proper load balancing are vital to maximize efficiency, fairness, and utilization. Currently, most distributed training frameworks do not support the aforementioned properties. A few exceptions that do support elasticity, imitate generic distributed frameworks and use micro-tasks. In this paper we illustrate that micro-tasks are problematic for machine learning applications, because they require a high degree of parallelism which hinders the convergence of distributed training at a pure algorithmic level (i.e., ignoring overheads and scalability limitations). To address this, we propose Chicle, a new elastic distributed training framework which exploits the nature of machine learning algorithms to implement elasticity and load balancing without micro-tasks. We use Chicle to train deep neural network as well as generalized linear models, and show that Chicle achieves performance competitive with state of the art rigid frameworks, while efficiently enabling elastic execution and dynamic load balancing.
INTRODUCTION
The ever-growing amounts of data are fueling impressive advances in machine learning (ML), but depend on substantial computational power to train the corresponding models. As a result, many research works focus on addressing scalability of distributed training across multiple machines. State of the art algorithms include Mini-batch SGD (mSGD) (Robbins & Monro, 1951;Kiefer et al., 1952;Rumelhart et al., 1988) and Local SGD (lSGD) (Lin et al., 2018) for deep neural networks (DNNs) as well as Communication-efficient distributed dual Coordinate Ascent (CoCoA) (Jaggi et al., 2014;Smith et al., 2018) for generalized linear models (GLMs).
Less work, however, has focused on efficiency, which is equally (if not more) important because it effectively provides more computational power at the same cost. Indeed, most works on distributed ML assume that they can operate on dedicated clusters, which is rarely the case in practice where ML applications co-inhabit common infrastructure with other applications. In these shared environments, efficiency depends on two properties: elastic execution: dynamically adjusting resource (e.g., CPUs, GPUs, nodes) usage as their availability changes, and load balancing: distributing workload across heterogeneous resources (Ou et al., 2012;Delimitrou & Kozyrakis, 2014) such that faster resources do not have to wait for slower ones. Elastic execution, specifically, enables optimization opportunities for ML applications where scaling-in or -out as training progresses can increase accuracy and reduce training time (Kaufmann et al., 2018).
As of today, most ML distributed frameworks (e.g., Abadi et al. (2016); Paszke et al. (2017)) do not support elastic execution nor load balancing, which makes them inherently inefficient in shared environments and on heterogeneous clusters. Recently, recognizing the importance of elasticity, a number of systems attempt to address elasticity (Zhang et al., 2017;Harlap et al., 2017;Qiao et al., 2018) for ML applications using micro-tasks or similar mechanisms. Microtasks, where work is split up into a large number of short tasks executed as resources become available, have been extensively used in generic distributed application frameworks to address elasticity and load balancing (Zaharia et al., 2010;Ousterhout et al., 2013), so they seem a natural fit for this problem.
In this paper we argue that micro-tasks are ill-suited for ML training because they require a large number of short independent tasks for efficient scheduling. In order to support full system utilization, the number of tasks has to be chosen based on the largest possible degree of parallelism an elastic system could potentially experience. The number of tasks, in turn, constitutes a lower bound on the data parallelism of each update, which means that the mini-batch size in mSGD or the number of partitions in CoCoA needs to be picked accordingly. This, however, is not desirable from an algorithmic point of view, since it is widely acknowledged that data parallelism comes at the cost of convergence in distributed ML applications. Note that when talking about convergence we refer to epochs to converge, where an epoch refers to one pass through the entire dataset.
Extensive studies of this impact for mSGD have, among others, been conducted by Shallue et al. (2019), Keskar et al. (2016) and Goyal et al. (2017). Figures 1a and 1b also exemplify this. The training of a simple convolutional neural network (CNN) on the CIFAR-10 dataset using mSGD requires 44% more epochs to converge when increasing the batch size from 256 to 512. Similarly, doubling the number of partitions from 16 to 32 for the training of a model on the Criteo dataset using CoCoA (Jaggi et al., 2014; Smith et al., 2018) increases the number of epochs to converge by 65%. While mitigation strategies, such as warm-up (Goyal et al., 2017) and layer-wise adaptive rate scaling (You et al., 2017), exist, the fundamental problem remains. Overall, micro-tasks lead to an inherent conflict between the number of tasks to use for scheduling efficiency, where higher is better, and algorithmic ML training efficiency, where lower is better.
Fortunately, as we show in this paper, the iterative nature of ML applications allows implementing load balancing and elasticity without micro-tasks, thus eliminating the above inherent conflict. We realize our ideas in Chicle, an elastic, load-balancing distributed framework for iterative-convergent ML training applications. Chicle combines scheduling flexibility with the efficiency of special-purpose rigid ML training frameworks. Chicle uses uni-tasks and schedules (stateful) data chunks instead of tasks. Each node executes only a single (multi-threaded) task that processes training samples from multiple data chunks within a single execution context. Data chunks can be moved efficiently between tasks to balance load and to scale in and out. This allows Chicle to use the optimal level of data parallelism for the currently used number of resources and combines scheduling with algorithmic efficiency. Conversely, Chicle is able to efficiently adjust the resource allocation based on feedback from the training algorithm and resource availability. The main contributions of our work are: 1) We propose uni-tasks, a new task model that removes the conflict between scheduling and algorithmic efficiency. We implement a prototype thereof in Chicle, a distributed ML framework that enables elastic training and dynamic load balancing in heterogeneous clusters.
2) Our evaluation illustrates that uni-tasks require significantly fewer epochs, and subsequently less time, to converge in elastic and load-balancing scenarios compared to micro-tasks.
Our paper is structured as follows: First, we provide necessary background information on the relationship between data parallelism and convergence for ML training algorithms as well as requirements for elastic execution in §2 followed by a discussion of the main ideas behind uni-tasks ( §3). We continue with a detailed description of Chicle's design and implementation ( §4) and present results of our experimental evaluation ( §5) and conclude ( §6).
BACKGROUND & MOTIVATION
Increasing parallelism for distributed execution of ML training workloads has well-understood tradeoffs. On one hand, ample parallelism results in less work per each independent execution unit (task) which leads to increased overheads (Totoni et al., 2017). On the other hand, ample parallelism allows utilizing many nodes and enables efficient scheduling (Ousterhout et al., 2013), dealing with load imbalances, and supporting elasticity. Elasticity specifically is increasingly important, since to maximize the efficiency, distributed applications are expected to scale-in and -out based on workload demands of themselves and their cohabitants. Indeed, exposing ample parallelism by dividing the problem into many micro-tasks is the standard way to implement elasticity despite the resulting execution overheads. Litz (Qiao et al., 2018), for example, a recent ML elastic framework uses micro-tasks and reports up to 23% of execution overhead.
While these overheads are important, our work is motivated by another tradeoff that is specific to ML applications but not well recognized in the ML systems community: increased data parallelism hinders the convergence of ML training. In contrast to overheads, this problem exists purely at the algorithmic level. Generally, distributed training algorithms require more steps to converge in the face of high parallelism (Shallue et al., 2019). The implication for building elastic ML frameworks is that using micro-tasks, i.e., ample parallelism to gain scheduling flexibility, leads to an inherent trade-off in terms of the number of examples that need to be processed to converge to a solution.
In this section, we motivate our design by illustrating this issue in two different ML algorithms: Mini-batch stochastic gradient descent (SGD), extensively used to train neural networks, and CoCoA, a state-of-the art framework for distributed training of GLMs. Prior to that, we provide some necessary background on elastic scheduling and ML training.
Elasticity and load balancing
Both load balancing and elasticity are necessary to efficiently utilize shared infrastructure. Both are typically implemented using micro-tasks in generic analytics frameworks, such as Spark (Zaharia et al., 2010) and ML frameworks (Qiao et al., 2018;Zhang et al., 2017), where work is divided into a large number of tasks that are distributed among nodes. Tasks, i.e., self-contained, atomic entities of a function and input data, are a common abstraction of work, and represent the scheduling unit.
Under a task scheduling system, a large number of tasks are required to achieve efficiency. To allow elastic scale-out during training, the number of tasks needs to be at least as large as the maximum number of nodes that will be available at any point during training. Furthermore, common practice over-provisions nodes with many tasks per node to allow for efficient load balancing. The Spark tuning guidelines (Spark, 2019), for instance, recommend to use of up to 2-3 tasks per available CPU, while other works propose using millions of tasks (Ousterhout et al., 2013).
Distributed training algorithms
Next, we discuss training in general and introduce two training algorithms that we use in this paper. Most distributed training algorithms iteratively refine a model m on a training dataset D such that m converges towards a state that minimizes or maximizes an objective function. During each iteration i, an updated model m^(i) is computed on a randomly chosen subset D̃ ⊆ D. The update function f_∆ is computed in a data-parallel manner across K nodes by splitting up D̃ into K disjoint partitions D̃_k ⊆ D̃.
The computation of f_∆ is self-correcting to a certain degree, i.e., bounded errors are averaged out in subsequent iterations and can therefore be tolerated. This property is often exploited for ML-specific optimizations, e.g., to mitigate stragglers (Cui et al., 2014; Dutta et al., 2018; Ho et al., 2013). The general structure of the algorithms we are considering is depicted in Figure 2: K workers independently work on separate subproblems f_∆,k, each defined on a different partition D̃_k of the data, and then combine their results to update a global model m, which forms the basis of the next iteration. During each iteration, a worker processes H × L samples, of which H different sets of L independent samples are processed sequentially. After each set of L samples, a local model update is performed, such that learning on subsequent samples within an iteration can exploit knowledge gained so far. While our approach is applicable to a wide set of distributed ML training algorithms, in this paper we focus on the following two algorithms.
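To make the structure of Figure 2 concrete, the following framework-agnostic sketch shows one iteration: K partitions are processed (conceptually in parallel), each taking H sequential local steps over L samples, and the resulting updates are averaged into the global model. The callables `local_update` and `sample` are placeholders for the algorithm-specific pieces, not part of any real framework.

```python
# One training iteration over K partitions; model is a plain list of floats.
def train_iteration(model, partitions, local_update, sample, H, L):
    deltas = []
    for D_k in partitions:                       # conceptually runs on K workers
        local_model = list(model)
        for _ in range(H):                       # H sequential local updates
            batch = sample(D_k, L)               # L independent samples each
            local_model = local_update(local_model, batch)
        deltas.append([lm - m for lm, m in zip(local_model, model)])
    # combine: average the K local updates into the new global model
    return [m + sum(d[i] for d in deltas) / len(deltas)
            for i, m in enumerate(model)]
```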
Local SGD (Lin et al., 2018). A state-of-the-art algorithm and improvement upon mSGD, the de-facto standard for training of neural networks (NNs) and variants thereof.
Here, D̃ refers to the batch and |D̃| = H × L refers to the batch-size hyper-parameter (e.g., |D̃| = 64). For H = 1, lSGD degrades to mSGD. The negative effect of data parallelism on convergence is a fundamental property of mSGD. An extensive study of this property is presented by Shallue et al. (2019).
CoCoA (Jaggi et al., 2014; Smith et al., 2018). A state-of-the-art distributed framework for the training of GLMs. It is designed to reduce communication and thus processes significantly more samples per iteration than, e.g., mSGD. We use CoCoA with a local stochastic coordinate descent (SCD) solver (Wright, 2015). The structure in Figure 2 is parameterized with L = 1 and H = |D̃|, whereas D̃ = D. The local update function f_∆,k is computed by a local optimizer on partition D̃_k, with D̃ = ∪_{k=1}^{K} D̃_k. In a homogeneous setting each node typically processes 1/K-th of the training dataset per iteration. Data parallelism is determined by the number of partitions K. Local optimizers detect correlations within the local dataset without global communication, i.e., the more data is randomly accessible to each optimizer instance, the fewer epochs are needed for CoCoA to converge. Conversely, if data access is limited in size, as would be the case when using many tasks, or if no random access is possible, convergence suffers. Kaufmann et al. (2018) empirically study the relationship between convergence rate and K and show that by starting with a large K and reducing it after a few iterations, the convergence rate per epoch and over time can be increased significantly.
Summary. Both algorithms exhibit an inherent trade-off between data parallelism and convergence. Intuitively, a higher degree of parallelism limits the opportunity to learn correlations across samples, and thus hurts convergence. While we focus on two particular methods in this paper, the trade-off between parallelism and convergence is fundamental in parallel stochastic algorithms.
Micro-tasks for distributed training
As exemplified in Figure 1, increasing data parallelism comes at the cost of increasing the total amount of work to achieve a certain training goal. Up until a point, the cost increase is smaller than the gain in potential parallelism, such that overall training time can be reduced by increasing the data parallelism. This, however, is only true if and only if all tasks are executed in parallel. In shared and heterogeneous environments, this is generally not true.
Consider the CIFAR-10 example from Figure 1. For simplicity, we assume perfect linear scaling and zero system overheads. If one wanted to train on up to 256 nodes, at least 256 tasks are required and thus a data parallelism of 256 or higher. According to the data in Figure 1, this requires 10 epochs to converge. Assuming that one epoch in this configuration -- where all 256 tasks can run in parallel -- requires one second, training completes after 10 seconds. The nature of shared systems is, however, that there are not always enough nodes available to execute all tasks in parallel. For instance, let us assume that only 128 nodes are available during the runtime of the application. Then each epoch with 256 tasks requires two seconds as two tasks have to run back to back on each node, resulting in a total training time of 20s. If one had used a data parallelism of only 128 from the beginning, instead of 256, training would only require eight epochs or 16s, instead of 20s, resulting in a training time reduction of 20%. This example illustrates the difficulty of elastic scaling of ML training using a microtask-based system: in many cases, it is only efficient if the maximal number of nodes (resources) are actually available during most of the runtime. This, however, stands in contrast to the goals of elasticity. This problem is even more pronounced if we also consider load balancing between differently fast nodes. The number of tasks required to allow for fine-granular work redistribution is disproportionately higher than just for elastic scaling alone.
UNI-TASKS FOR DISTRIBUTED TRAINING
In the previous section, we showed how micro-tasks inhibit the performance of distributed training. In this section we argue that a different execution model, uni-tasks, is better suited for ML training applications. The core idea is very simple: to only use a single task per node. While this in itself is not a new concept, scientific computing has been using MPI that follows this approach for decades, the difficulty is to address the scheduling challenges that are typically addressed by micro-tasks, namely elasticity and load-balancing. Fortunately, we can exploit the iterative nature of ML training to tackle these challenges.
Core concepts. Uni-tasks consists of two main concepts: immobile tasks and mobile data chunks.
1. All training samples are stored across a large set of small fixed-sized (stateful) data chunks that can be moved between tasks by the scheduler. Data chunks can store dense and sparse training data vectors and matrices of variable size.
2. Each node only executes a single task per node (hence the name uni-tasks). Each task has full, random access to all training samples across all data chunks that are local to a task.
Additionally, a contract between the scheduler and the application is defined that regulates ownership of a data chunk.
1. During an iteration, a task owns all task-local data chunks. It can read all and make modifications to data stored in the data chunks, e.g., to update per-sample state (e.g., as needed in CoCoA). During this period, the scheduler does not add or remove data chunks.
2. In-between two iterations, the scheduler owns all data chunks. Tasks must not modify any data chunks and the scheduler is free to add or remove data chunks from any task. Tasks are notified by the scheduler of any data chunk addition or removal.
By moving data chunks between tasks in-between iterations, uni-tasks allows one to add and remove tasks for elastic scaling and to balance load across tasks on heterogeneous clusters. Uni-tasks assumes a correlation between the number of training samples in task-local data chunks and the number of samples processed by each task during each iteration. In contrast to micro-tasks, scheduling granularity is determined by the number of data chunks, not by the number of tasks. The number of data chunks does not constitute a lower bound for the level of data parallelism, as multiple data chunks are processed by the same task; hence the level of data parallelism can be lower than the number of data chunks. In contrast to MPI, uni-tasks defines a method to shift load between tasks.
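A minimal, hypothetical sketch of the uni-task/data-chunk contract described above: one task object per node, and chunks moved only in-between iterations by the scheduler. Class and method names are illustrative, not Chicle's actual API.

```python
# Each chunk is modelled here as a plain list of training samples.
class UniTask:
    def __init__(self, node_id):
        self.node_id = node_id
        self.chunks = []          # task-local data chunks (owned during iterations)

    def num_samples(self):
        return sum(len(c) for c in self.chunks)

class Scheduler:
    def __init__(self, tasks):
        self.tasks = tasks

    def move_chunk(self, src, dst):
        """Only legal in-between iterations, when the scheduler owns all chunks."""
        if src.chunks:
            dst.chunks.append(src.chunks.pop())

    def scale_out(self, new_task):
        self.tasks.append(new_task)
        total_chunks = sum(len(t.chunks) for t in self.tasks)
        for _ in range(total_chunks):
            # stop once the new task holds roughly its fair share of samples
            if new_task.num_samples() * len(self.tasks) >= sum(t.num_samples() for t in self.tasks):
                break
            donor = max(self.tasks, key=UniTask.num_samples)
            if donor is new_task or not donor.chunks:
                break
            self.move_chunk(donor, new_task)
```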
In the following paragraphs, we discuss how elasticity and load balancing are addressed for distributed training when using uni-tasks.
Elasticity. Elasticity is necessary to efficiently and fairly utilize resources in shared clusters, to reduce waiting times for job starts, and to react do varying resource demands of applications throughout their runtime. We address elasticity in the uni-tasks setting by spawning new tasks as nodes are added to the application and by terminating them if nodes need to be released. In both cases data chunks are redistributed across all available tasks. In the latter case, however, a prior notification is required such that data chunks can be transferred before the task is terminated. Elastic scaling is only possible in-between iterations.
The application is free to adjust the level of data parallelism during each iteration to any value equal or larger than the number of tasks. For both test applications, we always choose the lowest possible value.
Load balancing. Load balancing is necessary to deal with heterogeneity between cluster nodes as well as between different hardware (e.g., CPUs vs GPUs) that results in runtime differences between tasks that process the same amount of input data.
To address heterogeneity, we exploit the fact that ML training algorithms are typically iterative and process a known number of training samples during each iteration, which allows us to learn how long each task needs to process a training sample. Uni-tasks assumes that the number of training samples processed by each task is a fraction of the total number of training samples across all task-local data chunks, e.g., a task with twice as many training samples as another task also processes twice as many per iteration. This enables the scheduler to influence the runtime by moving data chunks from tasks on slower nodes to tasks on faster nodes until their runtimes align.
As tasks may process a different number of training samples during each iteration, their model updates need to be weighted differently as well (as proposed in Stich (2018)). We do this by multiplying the model update f_∆,k of task k by |D̃_k| / |D̃| (see Equation 2).
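A small sketch of the weighted aggregation just described, where each task's update is scaled by the fraction of training samples it processed (the model and updates are plain lists of floats here):

```python
# Merge per-task updates into the global model, weighting each update by the
# share of samples that task processed during the iteration.
def weighted_merge(model, updates, samples_per_task):
    total = sum(samples_per_task)
    merged = list(model)
    for f_delta_k, n_k in zip(updates, samples_per_task):
        w = n_k / total
        merged = [m + w * d for m, d in zip(merged, f_delta_k)]
    return merged
```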
CHICLE DESIGN AND IMPLEMENTATION
Here, we describe how Chicle implements an elastic distributed training framework using uni-tasks.
Overview
Chicle, as shown in Figure 3, is based on a driver/worker design with a central driver (trainer) and multiple workers (solvers) communicating via a RDMA-based RPC mechanism (see §4.3). The driver executes the trainer module, which, in tandem with multiple policy modules, is responsible for coordinating training. Policy modules make scheduling decisions, such as assigning chunks, balancing load, and scaling in and out. Worker processes execute solver modules (uni-tasks) and implement the ML algorithms (e.g., SCD for CoCoA). Crucially, only a single (multi-threaded) worker process is executed per node. Solvers are controlled by the trainer and policy modules, which in turn receive model and state updates as well as metrics (e.g., duality-gap).
Chicle applications need to implement a trainer and solver module, and may optionally implement policy modules to control system behavior during training. For instance, our lSGD implementation uses libtorch (from PyTorch (Paszke et al., 2017)) in the solver for forward and backward propagation steps. The trainer module acts as synchronous parameter server that merges updates from solver instances. A simplified version of the lSGD code is shown in Listing 1.
In the remainder of this section, we elaborate on each module as well as the communication subsystem and in-memory data format of Chicle.
Trainer and solver
The trainer and solver modules represent application code. Trainer modules are the central controlling entity and coordinate individual solver instances in tandem with policy modules. Policy modules can implement complex (reusable) optimizations (e.g., online hyper-parameter tuning), and solver modules implement arbitrary functions for distributed execution. Only a single solver module is executed per node and application, therefore, each solver module can internally spawn threads and use all CPUs or GPUs of a node. Trainer and solver modules periodically synchronize at global barriers, e.g. in-between iterations, but can exchange additional messages at any time.
Communication subsystem
In distributed training, communication can easily become a bottleneck. For example, using CoCoA to train a model for the Criteo dataset (see Table 1), each task has to send/receive ≈16MiB in updates in-between iterations. For that reason, we built our communication subsystem on RDMA. RDMA allows low-overhead, zero-copy, one-sided operations for bulk data transfers, such as model and training (input) data as well as two-sided remote procedure calls (RPCs) using RDMA send/receive.
In-memory chunk data format
To fully exploit RDMA, data is stored in static, consecutive memory regions. The in-memory representation of training (input) data is based on fixed-sized data chunks. Chunks can store sparse or dense training data vectors and matrices. The number of training samples per data chunk can vary depending on the size of the samples. Chunks make it easy to move training data subsets between nodes. The chunk size can be tuned to an optimal value depending on dataset and system properties, e.g. to the CPU cache size.
Chicle's in-memory format is application agnostic and simply provides applications a contiguous memory space that can be moved across nodes in-between iterations. For instance, our lSGD implementation stores the backing memory of native PyTorch tensor objects in data chunks, whereas for CoCoA, we simply store sparse vectors as well as per-sample state in a data chunk. Having the ability to store per-sample state in a data chunk is important as it ensures that state and the data it correlates to are always moved together.
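A simplified view of such a chunk, assuming a flat byte payload and illustrative field names, might look as follows:

#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative fixed-size chunk: one contiguous buffer that can be moved
// between nodes with a single one-sided RDMA read, holding training samples
// together with their per-sample state.
struct DataChunk {
  static constexpr std::size_t kChunkBytes = 1 << 20;  // tunable, e.g. to cache size

  std::uint32_t num_samples = 0;      // samples stored in this chunk
  std::uint32_t bytes_used = 0;       // fill level of the payload
  std::uint8_t payload[kChunkBytes];  // samples and per-sample state, stored in place
};

// Each node keeps its assigned chunks in a flat store; rebalancing transfers
// whole DataChunk objects into another node's registered memory region.
using ChunkStore = std::vector<DataChunk>;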
One important limitation of Chicle's data chunks is that they must not require any serialization, as one-sided RDMA read operations are used to transfer them. Deserialization on the receiving side is still possible, however. In the case of PyTorch, for instance, we restore tensor objects via the torch::from blob function, which creates a new tensor object backed by the in-chunk data.
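For instance, a tensor view over in-chunk memory could be recreated roughly as follows (a sketch; the helper name and chunk layout are assumptions, only torch::from_blob itself is taken from the text):

#include <cstdint>
#include <torch/torch.h>

// Create a tensor that is backed by raw in-chunk memory without copying.
// The tensor does not own the memory, so the chunk must outlive it.
torch::Tensor tensor_from_chunk(float* data, std::int64_t rows, std::int64_t cols) {
  return torch::from_blob(data, {rows, cols}, torch::kFloat32);
}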
Policies
Chicle implements a flexible policy framework which we use to implement vital parts of the system. Policies receive events and metrics from trainer and solver modules and return decisions to them. Each policy module runs in a separate thread and multiple policy modules can run at the same point in time.
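A minimal sketch of such a policy interface (names are illustrative; the actual framework will differ) could be:

#include <vector>

// Hypothetical event/decision types exchanged between trainer and policies.
struct IterationStats {
  int iteration = 0;
  std::vector<double> task_runtimes;  // per-task runtime of the last iteration
};

struct Decision {
  // e.g., data chunk moves, scale-out/in requests, straggler mitigation steps
};

class Policy {  // each policy instance runs in its own thread
 public:
  virtual ~Policy() = default;
  virtual Decision on_iteration_end(const IterationStats& stats) = 0;
};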
Policy modules coordinate with the trainer and can coordinate with each other. Next, we present the most relevant policy modules.
Elastic scaling policy. This module interfaces with the resource manager, e.g., YARN (Vavilapalli et al., 2013), to make resource requests and get resource assignment and revocation notices. Upon receiving a new resource assignment, it registers a new worker (task) and notifies the trainer. After the current iteration, it shifts data chunks from old to new workers. It relies on the rebalancing policy ( §4.5) to ensure proper load balancing. Chicle expects the resource manager to give advance notice before revoking a resource allocation. Upon receiving such a notice, it redistributes data chunks from to-be-freed workers to remaining ones in a round-robin fashion. As before, it relies on the rebalancing policy to ensure load balance.
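The redistribution step on a revocation notice could look roughly like this (bookkeeping of chunk IDs only; the actual data transfer and trainer coordination are omitted):

#include <cstddef>
#include <vector>

// Round-robin redistribution of chunks from workers about to be freed
// onto the remaining workers (illustrative sketch).
void redistribute_round_robin(
    const std::vector<std::vector<int>>& leaving_chunks,   // chunk IDs per leaving worker
    std::vector<std::vector<int>>& remaining_chunks) {     // chunk IDs per remaining worker
  std::size_t target = 0;
  for (const auto& chunks : leaving_chunks) {
    for (int chunk_id : chunks) {
      remaining_chunks[target].push_back(chunk_id);
      target = (target + 1) % remaining_chunks.size();
    }
  }
}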
Rebalancing policy. The rebalancing policy observes iteration runtimes over multiple iterations to learn the per-sample runtime of each task, as described above. Between iterations, solvers are ranked according to their median performance over the last I iterations and chunks are moved gradually, across multiple iterations, from slower to faster solvers until performance differences are smaller than the estimated processing time of a single chunk. This policy can also be used to address slowly changing node performance, e.g. caused by the start or end of long-running background jobs, and to restore balance after scaling in and out. Its robustness against runtime fluctuations can be adjusted by tweaking I.
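The core of one rebalancing step could look roughly as follows (a sketch with assumed bookkeeping structures; it presumes at least one recorded runtime per solver):

#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

struct SolverState {
  std::vector<double> recent_runtimes;  // last I iteration runtimes (seconds)
  std::size_t samples = 0;              // training samples currently assigned
  std::size_t samples_per_chunk = 1;    // average samples per data chunk
};

static double median(std::vector<double> v) {
  std::nth_element(v.begin(), v.begin() + v.size() / 2, v.end());
  return v[v.size() / 2];
}

// Returns {from, to} solver indices for a single chunk move, or {-1, -1}
// once the runtime gap drops below the estimated processing time of one
// chunk on the slowest solver.
std::pair<int, int> rebalance_step(const std::vector<SolverState>& solvers) {
  std::vector<double> rt(solvers.size());
  for (std::size_t k = 0; k < solvers.size(); ++k)
    rt[k] = median(solvers[k].recent_runtimes);

  const auto slowest = std::max_element(rt.begin(), rt.end()) - rt.begin();
  const auto fastest = std::min_element(rt.begin(), rt.end()) - rt.begin();

  const double per_sample = rt[slowest] / solvers[slowest].samples;
  const double chunk_cost = per_sample * solvers[slowest].samples_per_chunk;
  if (rt[slowest] - rt[fastest] <= chunk_cost) return {-1, -1};
  return {static_cast<int>(slowest), static_cast<int>(fastest)};
}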
We decided against reloading data from a (shared) filesystem as data loading turned out to be more expensive than transferring loaded data between nodes, especially if input files are stored on a shared network filesystem. Moreover, our in-memory format can combine data chunks with the corresponding state, which needs to be transferred between workers anyway.
Other policies. Apart from the above described policies, we have implemented policies for straggler mitigation, global background data shuffling and others.
EVALUATION
Our evaluation shows how Chicle performs in an elastic setting where nodes are added and removed during training and on a heterogeneous cluster where nodes differ in speed. As no other elastic, load-balancing ML training framework is publicly available, we emulate micro-tasks with Chicle. Additionally, we compare Chicle with two state-of-the-art rigid ML training frameworks in a non-elastic, non-heterogeneous scenario to establish a performance baseline.
Evaluation setup and methodology
Our test cluster consists of 16+1 nodes. Nodes are equipped with Intel Xeon E5-2630/40/50 v2/3 with 2.4-2.6GHz and 160-256GiB RAM. We execute Chicle inside Docker containers. For some heterogeneity experiments, we reduce the CPU frequency of four nodes from 2.6 to 1.2GHz. All nodes are connected by a 56Gbit/s InfiniBand network via a Mellanox SX6036 switch. During experiments, up to 16 nodes are used for workers and one node for the Chicle driver.
Our test applications are lSGD and CoCoA, using test accuracy as the convergence metric for the former and the duality-gap (Jaggi et al., 2014; Smith et al., 2018) for the latter. We train on each dataset for ≈20 minutes, after which we terminate the training. Each experiment is repeated five times and average results are presented. Synchronous local SGD. We implemented lSGD (Lin et al., 2018) for Chicle based on libtorch, the C++ backend of PyTorch (Paszke et al., 2017). We train a CNN with ReLU activations, consisting of two convolutional layers with max-pooling followed by 3 fully connected layers, on the CIFAR-10 and Fashion-MNIST datasets using lSGD. We use L = 8 and H = 16, a momentum of 0.9 and a base learning rate α of 1e-4 for CIFAR-10 and 5e-4 for Fashion-MNIST. According to best practice, we scale the learning rate with the square root of the number of tasks K, such that the effective learning rate is α_eff = α × √K. The global batch size (the number of samples processed during each iteration across all tasks) is K × L × H. For micro-tasks, we select four values K = {16, 24, 32, 64}. Using different values of K allows us to assess the trade-off between scheduling and algorithmic efficiency. Here, K remains constant during training. For uni-tasks, K equals the number of currently used nodes. As mSGD is a special case of lSGD with H = 1, we trivially also support mSGD, which we use for baseline comparisons with PyTorch.
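As a small illustration of the two quantities above (a sketch, not Chicle code):

#include <cmath>

// Effective learning rate: alpha_eff = alpha * sqrt(K), with K the number of tasks.
double effective_learning_rate(double base_lr, int K) {
  return base_lr * std::sqrt(static_cast<double>(K));
}

// Global batch size: samples processed per iteration across all K tasks.
long global_batch_size(int K, int L, int H) {
  return static_cast<long>(K) * L * H;  // e.g. K=16, L=8, H=16 -> 2048
}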
CoCoA. We implemented CoCoA with a local SCD solver for Chicle based on the original Spark implementation (Smith, 2019). We train a support vector machine (SVM) on the Higgs and Criteo datasets. We use SCD as the local solver with L = 1 and H equal to the number of local training samples. The number of tasks K is the same as above. The algorithm parameter σ is set to the number of tasks, and the regularization coefficient λ to the number of samples × 0.01.
Micro-tasks. As no elastic ML training framework based on micro-tasks (or any other technique) is publicly available and general-purpose frameworks such as Spark do not perform competitively (Dünner et al., 2017), we emulate micro-tasks using Chicle with a constant number of tasks K and measure the convergence rate per epoch. It is possible to do this accurately because in micro-tasks, the convergence rate per epoch only depends on the number of tasks, not on the number of nodes or on which node a task is executed. This does not, however, allow us to directly measure the convergence rate over time for micro-tasks. Instead, we project the latter by assuming an optimal schedule for the number of tasks, nodes and relative node performance. Henceforth, the number of micro-tasks is given in parentheses.
Using Chicle to emulate micro-tasks during elasticity and load balancing experiments has the additional benefit of keeping implementation-specific variables, such as the implementation of the training algorithms (lSGD and CoCoA), the communication subsystem (e.g., RDMA vs. TCP/IP), and other factors constant.
Baseline comparisons
We compare Chicle against Snap ML (Dünner et al., 2018) for CoCoA and PyTorch (Paszke et al., 2017) for mSGD in a non-elastic, non-heterogeneous scenario using the same training algorithms, hyper-parameter values and datasets on the test setup described above. None of the novel functionality of Chicle was used in this experiment. The purpose of this experiment is to show that Chicle does not impair performance in the normal non-elastic, non-heterogeneous case. We measure convergence rate per epoch and over time. Detailed results of this experiment are provided in §A.1 and are summarized here.
Convergence behavior per epoch for mSGD is identical on Chicle and PyTorch, while Chicle requires slightly less time per epoch. Compared to Snap ML, Chicle performed virtually identically for the Higgs dataset but outperformed it for the Criteo dataset due to differences in data partitioning. This experiment confirms that Chicle's baseline performance is on par with that of highly optimized, established ML training frameworks. In contrast to those, Chicle is able to elastically scale during execution and balance load in heterogeneous clusters. Both aspects are evaluated in the following.
Elastic scaling
In this section, we evaluate Chicle with the elastic scaling policy enabled in two elastic scenarios and compare it to micro-tasks. Specifically, we consider: i) the effect of data parallelism (batch size for lSGD, and number of partitions for CoCoA) on the number of epochs to converge, and ii) the trade-off between scheduling efficiency and convergence under micro-tasks.
Methodology. Our test scenarios consist of gradual scale-in from 16 to 2 nodes and scale-out from 2 to 16 nodes. We add (remove) 2 nodes every 20s until the maximum (minimum) number of nodes is reached. During each run, we measure convergence per epoch and project convergence over time using an optimal schedule for uni-tasks and micro-tasks for each number of nodes. In micro-tasks, elastic scaling works by distributing a fixed number of tasks across more or fewer nodes and not by adjusting the number of tasks. Moreover, the number of nodes is typically not known by the application. Hence, we assume a fixed number of tasks independently of the number of nodes used. To project the time per iteration, we assume a normalized task runtime (one task, processing 1/16th of the data, takes one time unit) and compute the number of task waves necessary for each iteration.
• K micro-tasks on N nodes require ⌈K/N⌉ task waves, as only N tasks can be executed at the same time. In consequence, each iteration requires 16/K × ⌈K/N⌉ time units. For instance, K = 32 tasks on N = 14 nodes require ⌈32/14⌉ = 3 task waves and 16/32 × 3 = 1.5 time units per iteration.
• For CoCoA on uni-tasks, load is redistributed such that a single iteration takes 16/N time units. For instance, on 14 nodes, one iteration requires 16/14 ≈ 1.14 time units. For lSGD on uni-tasks, the batch size is adjusted such that each iteration still only requires one time unit; instead, the number of iterations per epoch increases by a factor of 16/N (a small sketch of both projections follows this list).
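The following sketch mirrors these projections in code (normalized time units as defined above; data transfer costs are ignored):

// Micro-tasks: K tasks on N nodes need ceil(K/N) waves of 16/K units each.
double micro_task_iteration(int K, int N) {
  const int waves = (K + N - 1) / N;  // ceil(K / N)
  return 16.0 / K * waves;            // e.g. K=32, N=14 -> 1.5
}

// Uni-tasks (CoCoA): data is redistributed across the N nodes in use,
// so one iteration takes 16/N units, e.g. N=14 -> ~1.14.
double uni_task_iteration(int N) {
  return 16.0 / N;
}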
Our time projections do not include data transfer overheads. As each task needs to communicate model updates, the total communication volume of micro-tasks is at least as high as that of uni-tasks; hence, by ignoring data transfer overheads, we favor micro-tasks.
Results. Figure 4 shows detailed convergence over time plots for elastic scale-in and scale-out for different data parallelism values. Convergence per epoch results are provided in the appendix (§A.2). Generally, the higher the data parallelism, the more epochs are needed to converge for micro-tasks, which is consistent with our initial problem statement and previous studies (Shallue et al., 2019). As Figure 4 shows, the increased scheduling efficiency of using more micro-tasks cannot compensate for the reduced convergence rate per epoch, and micro-tasks (16) consistently outperforms the other micro-task configurations.
Moreover, the convergence rate over time with uni-tasks is equal to or higher than that of micro-tasks during scale-in and scale-out, showing that the ability to adjust the level of data parallelism across a wide range can improve convergence per epoch and over time. This ability is not only beneficial in shared environments but can also be exploited to accelerate the training process in general. Kaufmann et al. (2018) show for CoCoA that scaling in training at specific points in time can accelerate training by up to 6×. Smith et al. (2017) report that increasing the batch size as an alternative to reducing the learning rate once convergence slows down is beneficial for mini-batch SGD. Both cases could be implemented with Chicle.
However, results differ across algorithms and datasets. For lSGD, scale-in as well as scale-out on uni-tasks improves convergence over time compared to the best micro-tasks configuration. In the scale-out case, the global batch size for uni-tasks is smaller in the beginning but equalizes with micro-tasks (16) quickly as nodes are added. In the scale-in case, the global batch size for uni-tasks is the same as for micro-tasks (16) in the beginning but is quickly reduced. As it is smaller for longer, compared to the scale-out case, the convergence benefits over micro-tasks (16) are higher in the scale-in case.
The average maximal test accuracy for uni-tasks is virtually identical to that of micro-tasks (16), the best micro-tasks configuration, in all but one case: in the scale-in case for CIFAR-10, uni-tasks achieves an average maximal test accuracy of 65.6% compared to 65.0% for micro-tasks (16).
Results for CoCoA are similar. Scaling in reduces the number of epochs as well as time to converge, as suggested in Kaufmann et al. (2018). After each scale-in step (which can be identified in Figure 4c and Figure 4d), the convergence rate improves. The reason for this behavior is that the local SCD solver has access to additional training data and can therefore identify new correlations across training samples locally. Scaling out behaves similarly, which is, at first sight, counterintuitive, as every task gets to see fewer and fewer training samples as training scales out. However, during scale-out the data chunks that are moved to newly added tasks are picked randomly from each old task, which effectively shuffles training samples. This also allows the solver to identify new correlations locally while also decreasing the duration of each iteration.
Load balancing
In this section, we compare Chicle, with the load balancing policy enabled, to micro-tasks in a heterogeneous scenario with nodes of different speed. Such a scenario can occur in practice, as compute clusters are often not replaced completely but extended and partially replaced over time using multiple generations of hardware (e.g., CPUs, GPUs) (Delimitrou & Kozyrakis, 2014). Even the same cloud instance type can be backed by different models and generations of hardware (Ou et al., 2012).
In a heterogeneous scenario, faster nodes should perform more of the overall work than slower nodes, such that all nodes finish at the same time for each iteration. In a micro-task-based system, this is achieved by scheduling more tasks on fast nodes than on slow nodes. This, however, requires multiple tasks to be executed per node so that one or more of them can be moved to other nodes. In consequence, no load balancing is possible with micro-tasks (16) on our 16-node test cluster. Chicle balances load by shifting data chunks, of which there are typically hundreds or thousands, from slow nodes to fast nodes and by adjusting the number of samples that individual uni-tasks process in each iteration, such that all tasks finish at the same time, independently of node performance.
Methodology. We evaluate heterogeneous load balancing in two scenarios: 1) We configure the load balancing policy of Chicle to assume eight fast and eight slow nodes, with the latter being 1.5× slower than the former and measure the number of epochs to converge. This simple scenario allows us to project time to convergence. 2) We execute Chicle with the load balancing policy enabled on our test cluster where the CPU frequency of four nodes has been reduced to increase the level of heterogeneity. We measure the task and iteration runtimes as well as the number of data chunks of each task across the load balancing process to show how Chicle can correctly learn task runtime and balance load in response.
In micro-tasks, load balancing works by balancing fixed-size tasks across all nodes and not by adjusting the number of training samples per task. Hence, we assume that each task processes the same number of training samples per iteration. To project the time per iteration, we assume a normalized task runtime: One task, processing 1/16th of the data, takes one time unit on the fast nodes and 1.5 time units on the slow nodes. We use this to compute the optimal (shortest) schedule for each iteration.
• For micro-tasks, with K tasks on eight fast and eight slow nodes, the optimal schedule is max(i × 1.5s, j × 1.0s) × 16/K long, with i (j) being the number of tasks on each slow (fast) node such that the schedule length is minimal. For instance, with K = 64 tasks, the optimal schedule is max(3 × 1.5s, 5 × 1.0s) × 16/64 = 1.25s per iteration.
• For uni-tasks, load is redistributed such that fast nodes process 1.5× as many training samples as slow nodes, resulting in an iteration duration of 1.2s (a small sketch of both projections follows this list).
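The following sketch mirrors these projections for the eight-fast/eight-slow setup (illustrative only; it assumes tasks are spread evenly within each node class):

#include <algorithm>
#include <limits>

// Micro-tasks: place i tasks on each slow and j tasks on each fast node
// (8*i + 8*j = K) and pick the split that minimizes the schedule length.
double micro_task_schedule(int K) {
  double best = std::numeric_limits<double>::max();
  for (int i = 0; 8 * i <= K; ++i) {
    if ((K - 8 * i) % 8 != 0) continue;
    const int j = (K - 8 * i) / 8;
    best = std::min(best, std::max(i * 1.5, j * 1.0) * 16.0 / K);
  }
  return best;  // e.g. K=64 -> max(3*1.5, 5*1.0) * 16/64 = 1.25
}

// Uni-tasks: fast nodes take 1.5x the samples of slow nodes, so all 16 nodes
// finish after 16 / (8 + 8/1.5) = 1.2 time units.
double uni_task_schedule() {
  return 16.0 / (8.0 + 8.0 / 1.5);
}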
As before, our time projections do not include data transfer overheads, which favors micro-tasks.
Results. Figure 5 shows detailed convergence over time plots for different data parallelism values. Convergence per epoch results are shown in §A.3. Per epoch, Chicle converges as fast as micro-tasks (16). Over time, however, Chicle converges faster than any micro-tasks configuration as it requires as few epochs to converge as micro-tasks (16) but can balance load more effectively than micro-tasks (64), which reduces iteration duration and thus combines algorithmic and scheduling efficiency. For lSGD, the average maximal test accuracy is ≈0.5% lower with uni-tasks than with micro-tasks (16). However, no load balancing is actually possible with the latter. Compared to other micro-task configurations, uni-tasks achieves similar average maximal test accuracies. For CoCoA, uni-tasks converges virtually identically to micro-tasks (16) per epoch but outperforms it over time due to its ability to balance load more effectively.
Swimlane diagrams in Figure 6 visualize the load balancing process for the Criteo dataset on our test cluster where the CPU frequency of four nodes has been reduced to 1.2GHz to improve the visibility of this process. Results for the other datasets are similar and provided in the appendix (§A.3).
The top diagram shows task runtimes per node and iteration without load balancing. Here, iteration duration is determined by the four slow nodes. Task runtimes are visualized by horizontal black bars. Bars that start at the same time represent tasks of the same iteration. Space in-between bars represents time during which tasks are inactive, i.e., communicating or waiting for the latest model update from the trainer. The middle diagram shows task runtimes with load balancing enabled. During the first iteration, task runtimes are the same as without load balancing. As load is shifted during subsequent iterations, task runtimes align and iteration durations reduce. The bottom diagram shows the relative workload (not time) of tasks in the middle diagram. It shows how the workload is shifted from slow to fast nodes. The length of the bars represents the number of data chunks for each task and iteration, relative to all other tasks and iterations. After a few iterations, workload and task runtimes stabilize as Chicle has learned the performance of each node and balances load accordingly.
CONCLUSION AND FUTURE WORK
We presented Chicle, a distributed ML training framework based on uni-tasks. Chicle enables efficient elastic scaling and load balancing without incurring overheads that are typical for micro-task systems and can thereby reduce time to convergence by orders of magnitude in some cases. Our work touches on many issues that distinguish distributed ML training from regular distributed applications, such as its sensitivity to data parallelism. Still, many aspects of ML workloads remain unexplored, and we believe there is a lot of potential to further exploit the unique properties of ML algorithms to build more efficient systems.
A.1 Baseline comparisons
We compare Chicle against Snap ML (Dünner et al., 2018) for CoCoA and PyTorch (Paszke et al., 2017) for mSGD in a non-elastic, non-heterogeneous scenario. Neither of the compared frameworks is able to elastically scale or balance load. The purpose of this comparison is to show that the elasticity and load balancing capabilities of Chicle and uni-tasks do not come at the cost of performance in the normal case. In consequence, Chicle's elasticity and load balancing policies are also not used during these experiments. Both frameworks are executed with RDMA-enabled MPI communication backends. We measure the convergence per epoch and over time. Each experiment is repeated 5×.
PyTorch. As no lSGD implementation for PyTorch exists, we compared Chicle to PyTorch using mSGD. mSGD is a special case of lSGD with H = 1. Chicle's mSGD training algorithm uses libtorch, the C++ backend of PyTorch, which allows us to rule out the implementation of the training algorithm as a source of any potential differences. For both datasets, a learning rate of 0.002 and a momentum of 0.9 are used.
Convergence per epoch is virtually identical to PyTorch's. This is expected, as both are based on libtorch and therefore use the same training algorithm implementations, CNN and hyper-parameters. Over time, Chicle is slightly faster, which is likely due to overheads introduced by Python that do not affect Chicle, as it is natively implemented in C++. The maximal test accuracy achieved within the test duration is 65.2% for CIFAR-10 with both frameworks. For Fashion-MNIST, Chicle has a 0.2% lead over PyTorch with 91.4%. Note that we did not tune hyper-parameters for each dataset nor adjust them online, which is why the test accuracy for CIFAR-10 degrades slightly after reaching a peak.
Snap ML. Chicle's CoCoA/SCD implementation for the training of an SVM is based on the original Spark implementation (Smith, 2019).
A.2 Elastic scaling
Figure 9 shows per-epoch convergence results for the elastic scaling experiments.
A.3 Load balancing
Figure 10 shows per-epoch convergence results for the load balancing experiments. Figure 11 shows the load balancing process during the first 10 (CoCoA) and 50 (lSGD) iterations.
A.4 Example application
Listing 1 shows a simplified trainer and solver module for mSGD on Chicle.
CRANIAL AUTONOMIC SYMPTOMS IN MIGRAINE
Cranial autonomic symptoms (CAS) in patients with migraine and cluster headaches (CH) were characterized and compared in a prospective study of consecutive patients attending a headache clinic at Taipei Veterans General Hospital, Taiwan. CAS items surveyed were conjunctival injection, lacrimation, nasal congestion, rhinorrhoea, eyelid edema, and forehead/facial sweating. Of a total of 884 patients, 786 (625 women/161 men, mean age 40.1 (12.9) years) had migraine and 98 patients (11 women/87 men, mean age 36.2 (10.5) years) had CH. Migraine diagnoses were episodic without aura in 48%, with aura in 5%, chronic in 39%, and probable migraine in 8%. In the CH group, 99% had episodic CH and 1% had chronic CH, a typical low incidence of chronic cases among Asians. CAS occurred in 56% patients with migraine, and the incidence was similar in all migraine subtypes. Forehead/facial sweating in 28% of migraine patients was the commonest CAS, followed by lacrimation in 24%. Migraine patients with CAS compared to those without had higher frequencies of severe migraine, nausea, photophobia and phonophobia, and vomiting. Patients with CH had a higher frequency of CAS than migraine patients. To differentiate migraine with CAS from CH, the characteristic most predictive of migraine was bilateral CAS with either 1) mild to moderate intensity or 2) CAS occurring without headache. Lacrimation was the CAS with highest positive predictive value, specificity, and second highest sensitivity. (Lai T-H, Fuh J-L, Wang S-J. Cranial autonomic symptoms in migraine: characteristics and comparison with cluster headache. J Neurol Neurosurg Psychiatry Oct 2009;80(10):1116-1119). (Respond: Dr S-J Wang, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan 11217. E-mail: sjwang@vghtpe.gov.tw).
COMMENT. More than 50% of adults with migraine have cranial autonomic symptoms (CAS). Patients with CAS have more severe migraine often associated with photophobia, nausea and vomiting. Compared to those with cluster headaches, CAS with migraine are usually bilateral rather than unilateral and less severe.
Prof PJ Goadsby, San Francisco, in an editorial commentary, discusses the anatomy and physiology of CAS (J Neurol Neurosurg Psychiatry Oct 2009;80:1057-1058). The trigeminal-autonomic reflex is the basis for the symptoms. The effect is largely lateralized but innervation is also crossed. The pathway can be activated from the brain via connections from hypothalamus to superior salivatory nucleus. Comparing trigeminal autonomic cephalgias (TACS) and migraine, TACs are shorter in duration, sometimes seconds as in SUNCT/SUNA, minutes in paroxysmal hemicrania, and a few hours in cluster headache. In the clinical distinction of cluster headache and migraine, findings pointing to migraine include bilateral pain, attacks longer than 3 hours (>1-2 hours in children), bilateral CAS, bilateral photophobia and phonophobia. Whereas patients with migraine are generally quiet, cluster headache patients are restless. Hemicrania continua response to indomethacin is another differentiating factor. CAS only at the time of headache should help in the distinction from sinus infection. In adults, migraine is more common in women, cluster headache in men.
Cluster headache is uncommon in childhood. Onset is usually in the second and third decade. A retrospective review of cases attending a pediatric neurology clinic in Bristol, UK, between 2000 and 2005 identified 11 patients (7 male, 4 female) with median age of onset of 8.5 years (range 2-14). Median age at diagnosis was 11.5 years (range 7-17). Eight had episodic and 3 had chronic cluster headache. Most had cranial autonomic activation and agitated movement. Maytal J et al, in modifications of the IHS criteria for pediatric migraine, found that decreasing the required attack duration below the 2 to 48 hour criterion would increase the sensitivity of diagnosis, but that adding associated autonomic symptoms of facial redness or pallor, while improving sensitivity, also decreased the specificity. The addition of CAS, while helpful, was not recommended (Neurology 1997;48:602-607). Perhaps more attention to autonomic symptoms and behavior in diagnosis of children with migraine would be warranted.
TOPIRAMATE-INDUCED COUGH IN MIGRAINE PROPHYLAXIS
Three adults who developed intractable cough during topiramate prophylaxis of migraine are reported from the University of Padua and other centers in Italy. Cough developed early during the titration phase at dose levels of 75-100 mg/day, and resolved rapidly after withdrawal. Secondary causes of cough, including GERD, were excluded. The cough was episodic, dry, and very annoying, especially at night. Despite effective prevention of headache with topiramate, treatment was discontinued. Literature review revealed no previous case reports of cough as a side effect of topiramate treatment for migraine. COMMENT. Topiramate is a first-line treatment for migraine prophylaxis in adults. Adverse events in 20-25% of patients may require discontinuation of treatment but are rarely severe. They include weight loss, dizziness, somnolence, paresthesias, impaired concentration and memory, and language difficulties. Cough has not been reported and the mechanism is unexplained. No patient received ACE inhibitors for hypertension, a known cause of dry cough in adults. Pubmed search for cough with topiramate treatment of childhood epilepsy or migraine found no reports.
ISOLATED EPILEPTIFORM EEG DISCHARGES AND AUTISM
The relationship between EEG abnormalities and neuropsychiatric disorders, and their possible clinical significance are reviewed by an investigator at Wayne State University, Detroit, MI, with special attention to the EEG and autism. Approximately one third of children with autistic spectrum disorder (ASD) develop epilepsy. Of 46 consecutive children with autism (34 boys and 12 girls, mean age 7.8 ± 2.7 years), 35% had epilepsy (Canitano
Dietary Diversity on the Swahili Coast: The Fauna from Two Zanzibar Trading Locales
Abstract Occupants of coastal and island eastern Africa—now known as the ‘Swahili coast’—were involved in long‐distance trade with the Indian Ocean world during the later first millennium CE. Such exchanges may be traced via the appearance of non‐native animals in the archaeofaunal record; additionally, this record reveals daily culinary practises of the members of trading communities and can thus shed light on subsistence technologies and social organisation. Yet despite the potential contributions of faunal data to Swahili coast archaeology, few detailed zooarchaeological studies have been conducted. Here, we present an analysis of faunal remains from new excavations at two coastal Zanzibar trading locales: the small settlement of Fukuchani in the north‐west and the larger town of Unguja Ukuu in the south‐west. The occurrences of non‐native fauna at these sites—Asian black rat (Rattus rattus) and domestic chicken (Gallus gallus), as well as domestic cat (Felis catus)—are among the earliest in eastern Africa. The sites contrast with one another in their emphases on wild and domestic fauna: Fukuchani's inhabitants were economically and socially engaged with the wild terrestrial realm, evidenced not only through diet but also through the burial of a cache of wild bovid metatarsals. In contrast, the town of Unguja Ukuu had a domestic economy reliant on caprine herding, alongside more limited chicken keeping, although hunting or trapping of wild fauna also played an important role. Occupants of both sites were focused on a diversity of near‐shore marine resources, with little or no evidence for the kind of venturing into deeper waters that would have required investment in new technologies. Comparisons with contemporaneous sites suggest that some of the patterns at Fukuchani and Unguja Ukuu are not replicated elsewhere. This diversity in early Swahili coast foodways is essential to discussions of the agents engaged in long‐distance maritime trade. © 2017 The Authors International Journal of Osteoarchaeology Published by John Wiley & Sons Ltd.
Introduction
In the mid-first millennium CE, the Swahili coast (an area stretching from southern Somalia to Mozambique, and including offshore islands, the Comoros and north-western Madagascar) became increasingly populated. In coastal and island Kenya and Tanzania (Figure 1), villages and towns emerged out of earlier Iron Age farming societies and developed ties with the broader Indian Ocean interaction sphere through trading links that stretched as far as the Arabian Peninsula, the Indian subcontinent and ultimately south-eastern Asia. By the second millennium CE, a cosmopolitan maritime society had emerged, engaged with the sea and the world beyond eastern Africa, but with origins on the mainland (Fleisher et al., 2015). While second millennium CE 'stone towns' are better known, settlements dating to the Middle Iron Age (MIA; c. 6th-10th centuries CE) are key to the development of maritime trade and Swahili society.
Zooarchaeological data are particularly critical to this endeavour for two reasons. First, faunal remains can help pinpoint the timing and nature of early trade, by tracing Asian taxa that were eventually introduced to eastern Africa, such as zebu cattle (Bos indicus), domestic chicken (Gallus gallus), black rat (Rattus rattus) and house mouse (Mus musculus) (Boivin et al., 2013). Second, a deeper understanding of subsistence strategies helps us envision more accurately the agents of this trade. While elsewhere in the Indian Ocean interaction sphere, trading locales were often large, permanent settlements reliant on food production (Seland, 2014), there is more diversity in early Swahili coast subsistence strategies, as we demonstrate here.
Although some in-depth zooarchaeological analyses have been conducted at Swahili sites, notably in the Lamu and Zanzibar archipelagos (Horton, 1996; Horton, in press) and as part of more recent studies of fishing practises on the eastern African coast (Quintana Morales, 2013), the more common pattern is a taxonomic list appended to a site report. Fish, central to coastal economies, frequently go understudied. Collection practises vary widely, with earlier projects rarely reporting smaller fish or micromammals. In this paper, we present zooarchaeological data from two trading locales on the main island of the Zanzibar archipelago, where recent re-excavation campaigns prioritised microfaunal recovery and chronological resolution. We discuss assemblage taphonomy, taxonomic diversity and inferences that may be made about fishing and foraging strategies. We contextualise these findings within a broader discussion of early Swahili coast terrestrial and marine foodways.
Background to the study area
The Zanzibar archipelago has a tropical climate shaped by the intertropical convergence zone and the Indian Ocean Dipole (Marchant et al., 2007). These forces result in a long rainy season (masika) from March to May and a short rainy season (vuli) from October to December; mean annual rainfall is 1500-1800 mm per year (Punwong et al., 2013). Vegetation is a mix of dry and moist forests, shrubland and grassland (Burgess & Clarke, 2000). Today on the main island (Unguja; here referred to as Zanzibar Island), the landscape has been transformed by tourism and the cultivation of cash crops and other recent imports such as maize. The premodern economy, however, rested on African crops like sorghum (Sorghum bicolor), pearl millet (Pennisetum glaucum) and cowpea (Vigna unguiculata) (Crowther et al., 2016b), with introduced Asian crops such as rice (Oryza sativa) becoming increasingly important in the second millennium CE (Walshaw, 2010). While the island's soils, overlying coral rag, can be thin and poor, there are deep, fertile soils on the western side of the island, where Iron Age settlement took place. Planting of sorghum and millet typically occurs during both short and (to a lesser extent) long rains, while rice, a riskier crop, grows only during the long rains and is highly vulnerable to the timing and nature of these rains (Tanzania, 2016; Walshaw, 2010). Animal husbandry in Zanzibar today relies on three domesticates: cattle (Bos sp.), goat (Capra hircus) and chickens (Tanzania, 2016). Taurine cattle (Bos taurus) and goat had long been present on the African mainland prior to Iron Age occupation of Zanzibar, while chicken and later zebu cattle or indicine-taurine crossbreeds were introduced via maritime exchanges (Crowther et al., in press). The earliest evidence for zebu in Zanzibar is found at Kizimkazi in the mid-second millennium CE (Van Neer, 2001). The history of sheep (Ovis aries) on the island is little known, but it remains a minor component of the economy today, and its consumption is reserved for special occasions.
Zanzibar's richest resources, however, lie under water, in mangrove creeks, coral reefs and seagrass beds. On the western side of the island, the reefs are particularly extensive and are easily accessible in the shallow, calm sea. Fishing takes place year-round, although fishing activities are reduced during the strong winds of the south-east monsoon (kusi) from April to September. The primarily small-scale artisanal fishery consists of various traps, nets and hook and line techniques used in nearshore, shallow waters; technology remains largely traditional, involving nonmotorised dhows, small boats and canoes (Jiddawi & Öhman, 2002). Most families combine fishing and farming for household subsistence.
Study sites
In 2011 and 2012, the Sealinks Project undertook excavations at two Zanzibar Island sites with evidence for early trade, as part of a broader investigation of the development of maritime societies in eastern Africa and their connections with the Indian Ocean world (e.g., Crowther et al., 2016a; Crowther et al., 2016b; Crowther et al., in press). Fukuchani (S5°49′18″, E39°17′27″) lies on the north-western coast on a long beach, protected by Tumbatu Island. The site is comprised of at least 10 mounded middens running parallel to the coastline. Since test excavations in 1989 and 1991 (Horton, in press), much of the site has been disturbed. New excavations targeted intact deposits, aiming to improve chronological resolution and recover high-quality botanical and faunal samples. The 2011 campaign revealed remains of a daub structure, local Tana tradition/triangular incised ware (TT/TIW) ceramics, trade wares originating in the Near East, glass and shell beads and abundant bone and shell. Palaeobotanical analyses indicate the presence of sorghum, pearl millet and baobab (Adansonia digitata) (Crowther et al., 2016b). Three Accelerator Mass Spectrometry (AMS) radiocarbon dates on charcoal and bone place this occupation in the seventh to eighth centuries CE.
The larger (c. 17 ha) site of Unguja Ukuu (S6°18′0″, E39°29′0″) sits on the south-western coast, on a coral rag peninsula stretching between the mangrove-lined Uzi Channel and the resource-rich Menai Bay. This natural harbour location contributed to Unguja Ukuu's growth as a trading port. Excavations in 1984 (Horton, in press) and 1989-1993 (Juma, 2004) revealed a wattle-and-daub settlement, with deep midden deposits containing TT/TIW pottery as well as Near Eastern and Chinese trade wares, iron slag, glass and shell beads and bead grinders and rich faunal and shell assemblages. The 2011-2012 excavations at Unguja Ukuu sought to improve chronology and faunal and botanical recovery, within the few areas remaining undisturbed and accessible. Artefacts were broadly similar to those of earlier campaigns and included trade goods such as local incense and imported glass beads (Crowther et al., 2015; Wood et al., 2016). Sorghum, pearl millet and baobab were identified among the botanical remains, in addition to small quantities of rice (Crowther et al., 2016b). A Bayesian analysis of 31 AMS radiocarbon-dated samples (25 crop seeds, 3 black rat bones and 3 samples of mangrove charcoal) places the main occupation in the 7th-10th centuries CE (Crowther et al., 2016b).
Materials and methods
Three trenches (FK10-FK12; 7 m²) were excavated at Fukuchani and six (UU10-UU15; 25 m²) at Unguja Ukuu, using the single-context method. All sediments were sieved, either via dry sieving (3-mm mesh) during excavation or via flotation (0.3-mm mesh) and wet sieving (1-mm mesh) of sediment samples from each context. Faunal remains were collected at every stage of this process: by hand during excavation and from both dry-sieved and wet-sieved sediments. At both sites, nearly all faunal remains came from midden contexts. At Fukuchani, a house structure was identified, but no fauna was associated with its floor. At Unguja Ukuu, two intact hearths and three disintegrated hearths were excavated; faunal remains were absent except in two of the disintegrated hearths. All faunal remains from both sites (except minor amounts found in the paleobotanical flots) were fully analysed, with the exception of one trench at Unguja Ukuu (UU14), in which all tetrapods were analysed, but fish remains were analysed in 25% of bone-bearing contexts.
Faunal analyses took place using the collections of the National Museums of Kenya (Nairobi) and Muséum national d'Histoire naturelle (Paris). Guides were occasionally used (Fischer & Bianchi, 1984;Froese & Pauly, 2012;Smith et al., 1986;Walker, 1985), including for distinguishing between caprines (Zeder & Lapham, 2010) and among several gallinaceous birds (MacDonald, 1992). Given the difficulties of distinguishing indicine, taurine and crossbred cattle remains (Magnavita, 2006), here we identify them as Bos sp. Elsewhere, we discuss the methods and findings of ancient DNA (aDNA) analysis and zooarchaeology by mass spectrometry (ZooMS) collagen fingerprinting, used to confirm or negate attributions to domestic chicken or black rat (Prendergast et al., unpublished data).
For tetrapod remains, the number of identified specimens (NISP) includes specimens that could be identified minimally to element or element group (e.g. limb bone) and to taxon, taxonomic group or size class (Table S1); unidentified fauna (NID) were weighed. Each identified specimen was examined with a 20× lens to identify cut, percussion, abrasion, carnivore and rodent marks and biochemical pitting; burning was also noted. Weathering was scored following Behrensmeyer (1978), and cortical preservation was given an overall rating (good, moderate, poor). Estimates of minimum number of individuals (MNI) were calculated using laterality, size and, where relevant, age. MNI was calculated for all contexts jointly at Fukuchani, while at Unguja Ukuu, MNI was calculated first for all contexts and again separately for two occupational phases distinguishable only in trenches UU11 and UU14.
Fish nomenclature follows Eschmeyer (2014). All elements were considered for identification, and taxonomic attributions were based on the morphological features of each fragment and not through association with other identified specimens. MNI was calculated per context taking into account laterality and size. For Chondrichthyes, MNI was calculated from the presence of at least one vertebra type in each context and not by size, as for bony fishes, because the size of cartilaginous fish vertebrae varies within the individual. The total length (TL) of live individuals was estimated by comparing archaeological bones with similarly sized reference specimens of known lengths. Burning of fish remains was noted based on black, grey and white discolouration, and other modifications such as gnawing and cut marks were recorded if visible to the naked eye.
Assemblage preservation and taphonomy
Preservation at both sites is excellent, with most cortical surfaces intact and visible. This led to high identification rates (Table 1): At Fukuchani, 56% of fish remains and 73% of tetrapod remains (by weight) were identified to taxon, while at Unguja Ukuu, 53% of fish remains and 66% of tetrapod remains were identified. At Unguja Ukuu, there was substantial intrasite variation in terms of fragmentation and thus identifiability: some contexts had abundant fragments of cancellous bone from large carcasses, likely marine tetrapods.
Bone surface modifications are few on tetrapod remains (Table 2, Figure 2), aside from cut marks, which appear on 8% of limb bone specimens. This low rate is unsurprising given that both sites' dominant taxa weigh under 10 kg; at Unguja Ukuu, larger carcasses, of cattle or marine turtle, are cutmarked at higher rates. Tooth marks and percussion marks are rare to absent. Traces of cutting and gnawing were visible on <1% of identified fish specimens at Unguja Ukuu and on none at Fukuchani. Burning affects 1-8% of fish NISP and 2-8% of tetrapod NISP at Unguja Ukuu; one burnt fish specimen was identified at Fukuchani.
The marine economy
Both sites' subsistence strategies focus on the marine environment, with abundant mollusc shells (Faulkner et al., unpublished data), a predominance of fish and fewer marine mammal and turtle remains. At Fukuchani, fish comprise 63% of NISP and 92% of MNI. Although 20 families were identified, 68% of total NISP is made up of four generally associated with coral reefs: emperor fish (Lethrinidae), parrotfish (Scaridae), groupers (Serranidae) and surgeonfish (Acanthuridae) (Tables 3 and S2). Emperor fish and groupers are bottom-feeding, carnivorous fish, while parrotfish and surgeonfish are often found grazing on algae growing over coral substrates (Carpenter & Allen, 1989; Nelson, 1994). Other common families are also found in shallow coastal areas close to the sea bottom around coral reefs and estuaries, such as snappers (Lutjanidae) and grunts (Haemulidae), and along sandy-muddy bottoms, for example, bonefish (Albulidae) (Fischer & Bianchi, 1984). This pattern generally overlaps with findings from Horton's (in press) test excavations (Table S3), but that sample had a narrower taxonomic range and proportionally fewer fish (13% total NISP), possibly due to sampling. Other marine fauna at Fukuchani (Table 4) include dolphin (cf. Tursiops truncatus), possible dugong (Dugong dugon) and sea turtle (Cheloniidae). Cut marks on remains of the latter two taxa indicate exploitation.
At Unguja Ukuu, in the five fully analysed trenches, 87% of NISP are fish (Table 1); by contrast, Horton's (in press) excavations reported 60% of NISP as fish, and Juma (2004) reports the presence of four principal fish families and no quantitative data. The general pattern of coral and estuary habitat exploitation in Horton's report mirrors that of Fukuchani; however, this substantially larger assemblage includes 16 additional families (Tables 3 and S4).
Five families commonly found around coral reefs make up 66% of fish NISP: emperor fish, groupers, jacks (Carangidae), rabbitfish (Siganidae) and parrotfish. The latter two are primarily herbivorous fish found around shallow reefs (Smith et al., 1986). The jacks in the assemblage, dominated by Caranx spp., are typically fast-swimming predators often found over reefs (Nelson, 1994). Other fast-swimming fish are represented by four bones attributed to the tuna family (Scombridae), of which one caudal vertebra belongs to little tuna (Euthynnus affinis) and one left maxilla is likely from little tuna or tuna (cf. Thunnus sp.) (Figure S1). Little tuna and three species of Thunnus are found along the Tanzanian coastline; little tuna is generally less than 100 cm in total length, whereas Thunnus spp. can reach over 200 cm (Collette & Nauen, 1983). Although scombrids are generally considered oceanic migratory fish, the specimens at Unguja Ukuu are individuals of less than 90 cm in length, more likely to be found closer to the coastline. Cartilaginous fish (Chondrichthyes) were identifiable by a small number of vertebrae (1% fish NISP); it was not possible with available reference specimens to determine the species. The stratigraphy of UU11 and UU14 enables a breakdown of this subassemblage into earlier (c. 650-800 CE) and later (c. 800-1050 CE) phases of occupation, pointing to shifts in marine exploitation (Figure 3, Table S5). After 800 CE, emperor fish become relatively more abundant and slightly smaller. A comparison of reconstructed fish lengths of Lethrinus spp. from earlier and later periods at Unguja Ukuu shows a higher concentration of smaller sized individuals and a slightly shorter mean length in the later period (Figure S2). The implications of these patterns for fishing strategies are discussed below. Sea turtle specimens are abundant at Unguja Ukuu (Table 4), and 21% of them are cutmarked (Table 2), three times the assemblage average; a single cutmarked dugong rib was also identified. The true abundance of sea turtle may be underestimated: While confidently identified and probable chelonian remains jointly form 7% of NISP, a large fraction of unidentified specimens are cancellous fragments similar to those of marine fauna. Recording protocols may therefore explain distinct sea turtle frequencies among campaigns (Table S6).
The hunting and trapping economy
At Fukuchani, hunter-trappers focused on the island's small bovids, namely suni (Neotragus moschatus), blue duiker (Philantomba monticola) and Ader's duiker (Cephalophus adersi), which form more than one third of the total assemblage and 68% of wild terrestrial NISP after excluding indeterminate specimens (Figure 4, Tables 4 and S7). Tree hyrax (Dendrohyrax validus) and less frequently giant pouched rat (Cricetomys gambianus) were also prey taxa, as attested by cut marks. Other taxa present in low quantities include bushpig (Potamochoerus larvatus) and monkeys (Procolobus and Cercopithecinae), as well as two or more small carnivores; it is not certain that all of these entered the assemblage as food. Finally, a remarkable find at Fukuchani is a cache of 13 dwarf bovid (likely suni) metatarsals, from at least seven individuals (Figure S3). Nearly all were complete, and only one bore a cut mark. This cache was recovered ca. 20-30 cm above a juvenile human burial, clearly placed within the grave fill.
At Unguja Ukuu, wild taxa are relatively less abundant than at Fukuchani, and the spectrum of hunted or trapped fauna is similar, with small bovids dominant (Figure 4, Tables 4 and S8). Additional prey taxa-confirmed as such by cut marks-indicate a forested environment: These include bushpig, giant pouched rat and tree hyrax. Other fauna present, which may not be related to human occupation, included small carnivores, leopard (Panthera pardus), dwarf galago or bushbaby (Galagoides cf. zanzibaricus), bats, shrews and native rodents.
Domestic and commensal animals
There are clear differences between the two sites in terms of the importance of domestic and commensal animals (Table 4). At Fukuchani, just two caprine teeth and one cow (Bos sp.) tooth were identified in the assemblage, in addition to a few nondiagnostic postcranial specimens that could be caprines, based on size; similar trends were observed in Mudida & Horton's (in press) assemblage (Table S6). We identified one possible chicken specimen, but the genetic match was imperfect (p = 0.54); two black rat bones, however, were confirmed via aDNA and ZooMS, attesting to the presence of Asian fauna (Prendergast et al., unpublished data). At Unguja Ukuu, by contrast, domestic animals are much more abundant, with caprines the most common domesticate (Table S6). Chickens are also potentially common; however, this is stated cautiously as many remains were attributed to indeterminate Galliformes or Galliformes-sized birds. Of six tested specimens, just two produced readable aDNA sequences: one matched chicken and the other a different phasianid (Prendergast et al., unpublished data). Six specimens (possibly seven) of domestic dog were identified, as well as more abundant remains of domestic cats, with 120 (possibly 135) specimens belonging to at least four individuals. Cats at Unguja Ukuu were likely attracted to murid rodents (NISP = 57), including Asian black rat and local gerbils and mice.
Shifts in domestic and commensal species are evident in a comparison of the early and late phases in UU11 and UU14 ( Figure 5, Table S9). While the ratio of wild to domestic animals does not change substantially over time, the relative abundance of commensal animals increases. Rodent remains become much more abundant (by both NISP and MNI) in the post-800 CE phase, although cats are similarly abundant in both phases when using MNI. The increased rodent population might indicate an increase in human population density at Unguja Ukuu towards the end of the first millennium CE, as previously suggested by Juma (2004). This is further supported by evidence in the recent excavations for an increase in bone and mollusc shell density in the later phase of occupation (Crowther et al., unpublished data). Among domestic food species, shifts are generally minor: Cattle and especially caprines become relatively less abundant in the later phase, while chickens become slightly more abundant. Pursuit of wild fauna continues to be an important part of the economy, although suni and duiker become slightly less important relative to other wild animals, such as marine turtles.
Dietary diversity in Middle Iron Age farming communities
The faunal remains from Fukuchani and Unguja Ukuu demonstrate the occupants' ties to the Indian Ocean world but also speak to their diverse array of fishing, trapping and animal husbandry strategies. The occupants were farmers of mainly African crops (Crowther et al., 2016b) and likely had their origins in mainland Bantu-speaking communities (Horton, in press).
However, these farmers did not rely exclusively on domestic animals but rather preyed on Zanzibar's wide array of marine and terrestrial resources. Notably, there is nothing in the faunal record at Fukuchani or Unguja Ukuu to suggest adherence to Islamic dietary guidelines, nor are there major differences in terms of the chosen wild resources between these sites and two Zanzibar cave sites thought to be occupied by hunter-gatherers (Chami, 2009; Prendergast et al., 2016). This is interesting because there is material culture evidence at Unguja Ukuu, in the form of an incense burner, to suggest that this may have been an early site for the spread of Islam on Zanzibar (Crowther et al., 2015). Evidence from other Zanzibar sites and from elsewhere along the Swahili coast suggests that Islam was well established in the region by the late first millennium CE (Horton, 1996; Horton, in press). Yet exploitation of clearly haram animals, among them bushpig and sea turtle, is evident at multiple Swahili coast sites, well into the second millennium CE (Quintana Morales & Prendergast, in press), and continues today in some areas (Walsh, 2007). The wild terrestrial fauna represented at both Fukuchani and Unguja Ukuu are largely forest-dwelling, although suni and blue duiker can also thrive on more open coral rag thickets. Both sites would likely have been within easy reach of woodlands, even though today there is comparatively little near Fukuchani. While bushpig may have been obtained on spear hunts (possibly with the assistance of dogs), most other fauna in both assemblages could be caught in traps, nets or snares made from perishable materials, possibly similar to those recorded ethnographically on Zanzibar (Walsh, 2007). Elsewhere, ethnographic accounts show that burrowers such as giant pouched rats can be hand-caught, an individual pursuit; net hunts for small bovids, by contrast, often involve family groups or whole communities (Lupo & Schmitt, 2002). The setting of traps or snares, left out for days at a time, requires communal trust and ownership of both technologies and territories. Thus, we suggest that resource procurement at Fukuchani and Unguja Ukuu may not have been the work of a small number of hunters but rather engaged numerous community members, even while some were devoted to farming, fishing, shellfish collection and, particularly at Unguja Ukuu, livestock herding and chicken keeping, the latter a village activity that, like trapping, is widely associated with women and children today. Archaeologically, it has been noted that metapodials are associated with juvenile and some adult burials dating to the 9th-13th centuries CE in south-eastern Congo. While we cannot know the Fukuchani cache's exact meaning, the finding demonstrates that wild bovids, which dominate the diet, also held social significance for the site's occupants.
Comparison with other Swahili coast sites (Figure 6) suggests that, despite the contrasts between Unguja Ukuu and Fukuchani, the two sites resemble one another in many respects when set against a broader and highly diverse group of early Swahili coast sites. A principal component analysis was applied to Early Iron Age (EIA; n = 1) or MIA (n = 12) sites with published, quantitative faunal data. Four variables were examined (Table S10): the percentage of fish (using NISP) in the total vertebrate assemblage; the percentage of domestic taxa (using NISP) in the total tetrapod assemblage, excluding marine animals, microfauna and categories for which wild or domestic status could not be ascertained (e.g. Galliformes; Mammal Size 2); richness of the wild terrestrial faunal assemblage, expressed as the number of discrete taxa (NTAXA); and evenness of the wild terrestrial faunal assemblage, using the reciprocal of Simpson's dominance index (1/D).
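As a rough illustration of this approach, the sketch below computes the four variables for hypothetical sites and runs a PCA on them. The site names, taxon counts and percentages are invented placeholders, and standardising the variables before the PCA is a common default rather than a step stated by the authors.

```python
# Minimal sketch, assuming placeholder data, of the site-level variables and
# PCA described in the text; this is not the authors' code or data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def simpson_evenness(counts):
    """Reciprocal of Simpson's dominance index, 1/D, where D = sum(p_i^2)."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    return 1.0 / np.sum(p ** 2)

# Hypothetical per-site data: % fish of vertebrate NISP, % domesticates of
# tetrapod NISP, and wild terrestrial NISP counts per taxon (all placeholders).
sites = {
    "SiteA": {"pct_fish": 70.0, "pct_dom": 20.0, "wild": [120, 45, 10, 5, 3]},
    "SiteB": {"pct_fish": 55.0, "pct_dom": 35.0, "wild": [60, 55, 40, 30, 20, 10]},
    "SiteC": {"pct_fish": 30.0, "pct_dom": 60.0, "wild": [15, 12, 9, 8, 6, 5, 4]},
}

# Assemble the four variables used in the text: % fish, % domestic, NTAXA, 1/D.
names = list(sites)
X = np.array([[s["pct_fish"], s["pct_dom"], len(s["wild"]),
               simpson_evenness(s["wild"])] for s in sites.values()])

# Standardise, then project onto the first two principal components.
pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(X))
for name, (pc1, pc2) in zip(names, scores):
    print(f"{name}: PC1 = {pc1:.2f}, PC2 = {pc2:.2f}")
print("explained variance ratios:", pca.explained_variance_ratio_)
```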
Through this comparison, we observe that the patterns of faunal exploitation at Fukuchani and Unguja Ukuu-high frequencies of fish, low and moderate numbers of domesticates and a relatively narrow wild assemblage dominated by a handful of taxa (dwarf bovids and hyrax)-are similar to patterns observed at the EIA site of Juani School in the Mafia archipelago (Crowther et al., 2016a) and the MIA mainland coastal site of Mpiji (Chami, 1994). A more extreme version-with strong dominance of fish and few and species-poor wild terrestrial fauna-is found at Ukunju Cave, also in Mafia, although this sample is very small (Crowther et al., 2014). Within Zanzibar, Fukuchani and Unguja Ukuu provide an interesting contrast with the island's known cave sites: the MIA and arguably 'Neolithic' Machaga Cave (Chami, 2001) and the MIA-LIA components at Kuumbi Cave (Prendergast et al., 2016). At these sites, the paucity of marine resources and domesticates aligns them more closely with two sites of the southern Kenyan hinterland, Chombo and Mteza (Helm, 2000); these sites are also notable for their higher species richness, indicating that occupants had access to a wider
variety of terrestrial fauna than those on the islands. A neighbouring Kenyan hinterland site, Mgombani, differs from the other hinterland sites mainly in its abundance of domesticates, a trend also seen at Shanga in the Lamu archipelago (Horton, 1996), Mtwambe Mkuu on Pemba (Mudida & Horton, in press) and Chibuene in Mozambique (Badenhorst et al., 2011). In summary, there is no single picture of early Swahili coast foodways: Coastal sites, logically, demonstrate a clear reliance on marine resources, but beyond this, variation in livestock keeping and hunting strategies may have had much to do with the sites' immediate environments and the choices made by the occupants.
Marine exploitation strategies
Fish bone assemblages from both Fukuchani and Unguja Ukuu represent a diverse set of fishing strategies in near-shore habitats, particularly around coral reefs but also in estuarine areas. The taxa identified at both sites-such as emperor fish, groupers, parrotfish and jacks-continue to be caught today with a variety of methods, including basket traps, hand lines and nets (Samoilys et al., 2011). High numbers of reef-associated fish species attest to the importance of the island's fringing reefs as a source of food. A regional comparison of aquatic habitat exploitation demonstrated that coral reefs supplied the majority of fish consumed in past settlements on the offshore islands of Pemba and Zanzibar, whereas estuary fish were more abundant at Shanga and other near-shore or mainland settlements, reflecting the accessibility of aquatic resources around these sites (Quintana Morales, 2013; Quintana Morales & Horton, 2014). The data presented here are consistent with this pattern of adaptability to the local environment.
Although Unguja Ukuu and Fukuchani share similar spectra of fish taxa, there are some key differences between them. Fish are proportionally more important at Unguja Ukuu, and there is a significant component of large, fast-swimming fish at Unguja Ukuu not visible at Fukuchani. The Unguja Ukuu assemblage has some of the earliest examples on the Swahili coast both of tuna specimens and of cartilaginous fish (Chondrichthyes), which may belong to shark, dating to prior to c. 800 CE. Their presence could indicate the use of technologies capable of catching larger, fast-moving predators; long lines, trolling lines and gill nets are commonly used offshore today for this purpose (Samoilys et al., 2011). However, these are minor components of the assemblage (<2% NISP), and it is more likely that these were opportunistic catches associated with fishing practices closer to the reef, such as those used for catching jacks, another group of fast-swimming predators that is more prominent at Unguja Ukuu. Data from coastal sites with long chronologies, like Shanga (Horton, 1996) and Chibuene (Badenhorst et al., 2011), as well as diachronic comparisons at a regional scale (Fleisher et al., 2015;Quintana Morales, 2013), point to a substantial increase in outer reef/offshore fishing in the early second millennium CE. This new fishing strategy occurs in the context of increasing social stratification and intensifying overseas trade. The need for more expensive equipment and larger crews to target offshore fish could be related to a shifting social dynamic among fishers, in which a wealthier sponsor owns the boat and equipment and shares a portion of the catch with the fishing crew.
As noted earlier, abundance of emperor fish increases over time at Unguja Ukuu. Mudida & Horton (in press) describe a similar pattern of smaller quantities of emperor fish at earlier period sites in Zanzibar and Pemba, as well as in earlier levels at Shanga. They posit that increasing numbers of emperor fish could signify an increase in net fishing. It is difficult to prove this, given that emperor fish can be caught using a variety of methods, including nets, traps and lines. In our assemblage, we note the later appearance of certain species that are primarily caught with nets: halfbeaks (Hemiramphidae), moonies (Monodactylidae) and boxfish (Ostraciidae) (Samoilys et al., 2011). A slight decrease in mean size of Lethrinus spp. individuals could be linked to an increase in fishing pressure. These patterns lend support to Mudida & Horton's (in press) hypothesis, but the wide range of emperor sizes and the overall diverse array of families continue to represent a mixed set of fishing strategies. Our data add to an emerging regional picture in which first millennium CE fishing strategies varied more across space than over time.
At both sites, fishing focused on near-shore aquatic environments around coral reefs, with no substantial evidence for the exploitation of fast-swimming predatory fish beyond the reefs, a strategy that is more apparent in the region's second millennium CE settlements (Fleisher et al., 2015). The forests and coral rag thickets of Zanzibar provided wild game for the occupants of Fukuchani and, to a lesser extent, Unguja Ukuu, where a caprine-based domestic economy also flourished. Procurement of wild resources through trapping or netting likely involved a degree of community cooperation, and a metapodial bone bundle at Fukuchani speaks to the social significance of the wild animal realm. A remaining question concerns the relationships that existed among the people occupying these two sites, as well as the broadly contemporaneous occupations at Machaga and Kuumbi caves (Chami, 2001;Prendergast et al., 2016), where marine and domestic foods are rare to absent. Although all sites share some material culture, particularly TT/TIW ceramics, their occupants clearly had differing subsistence priorities.
Despite abundant material evidence for trade, particularly at Unguja Ukuu, Asian taxa are relatively uncommon at this site (especially in the earliest phase), and even more so at Fukuchani, despite concerted efforts to recover microfauna and identify non-native taxa. This fits with a pattern seen throughout the Swahili coast: the introduction of Asian taxa was both later than previously suggested and quite gradual (Prendergast et al., unpublished data). The successful exploitation of rich marine resources, and of local wild and domestic animals, may have left little need or desire for the incorporation of new fauna such as domestic chicken. Furthermore, the rarity of large, densely settled communities-Unguja Ukuu and Shanga being exceptional-may have thwarted the spread of shipborne commensals such as black rat or house mouse. The overall emerging picture of MIA Swahili coast lifeways is one not of numerous 'urban' coastal trading ports, but rather of small-scale societies engaging in fishing, hunting or trapping and foraging as pursuits possibly equal to or greater in importance than farming and herding. This finding forces us to consider more deeply the agents implicated in long-distance maritime trade and their economic and social identities.
Study of the mechanical properties and propagation mechanisms of non-coplanar and discontinuous joints via numerical simulation experiments
Non-coplanar and discontinuously jointed rock masses are more complex than coplanar and discontinuously jointed rock masses. The mechanical properties and propagation mechanisms of non-coplanar and discontinuous joints were studied via direct shear tests with microscopic numerical simulation experiments. The numerical simulation tests were performed under different normal stresses, joint inclination angles, and shear rates. The numerical experimental results show that the microscale failure of non-coplanar and discontinuously jointed rock masses is mainly caused by tensile cracks. Additionally, when the peak shear stress is reached, the growth rate of cracks increases rapidly, and the number of cracks increases with increasing normal stress. The shear strength of non-coplanar and discontinuously jointed rock masses increases with increasing normal stress. Under the same normal stress, the variation curves of the shear strength of non-coplanar and discontinuously jointed rock masses with respect to the dip angle exhibit an “S”-shaped nonlinear pattern. Rock masses with joint inclination angles of approximately 15° and 65° have minimum and maximum shear strengths, respectively. The joint dip angle has a significant impact on the final failure mode of rock bridges in the rock mass. As the joint dip angle increases, the final failure modes of rock bridges change from “end-to-end” connection to a combination of “head-to-head” and “tail-to-tail” connections. The shear rate has a certain impact on the peak shear stress, but the impact is not significant. The spatial distribution of the tensile force chains changes as shearing progresses, and stress concentration occurs at the tips of the original joints, which is the reason for the development of long tensile cracks in the deeper parts.
important impacts on the penetration failure mode of the discontinuously jointed rock-like materials through direct shear tests. Liu et al. 7 conducted a large number of direct shear tests on coplanar, closed, and discontinuously jointed rock masses to verify that the tangential deformation curve has segmental characteristics and can be divided into four stages: the prefracture stage, stable extension stage, unstable extension stage, and residual stage. Tang et al. 8,9 investigated the peak shear strength of rock joints under various contact conditions and proposed a new shear strength criterion. They also explored the effects of cyclic freeze-thaw cycles and wet-dry cycles on the shear strength of jointed rock masses and found that both environmental factors have a diminishing effect on the peak shear stress of the rock mass.
At present, there are more studies on jointed rock masses in terms of physical experiments or numerical simulation experiments, and there are fewer studies on the final failure mechanism of non-coplanar discontinuously jointed rock masses 1,10,11 . The reason for this difference is that the preparation of non-coplanar discontinuously jointed specimens is more difficult than that of coplanar discontinuously jointed specimens; moreover, physical experiments can only provide an indication of the damage pattern, and it is difficult to analyze the cause of the damage pattern, especially the development of cracks, at small scales.
With the continuous development of computer technology, numerical simulation has become an important auxiliary tool in the field of geotechnical engineering. Numerical simulation can fully reproduce the stresses in a specimen and is economical, efficient and reproducible. In terms of coplanar jointed rock masses, Liu et al. 12 designed direct shear tests of coplanar discontinuous joint models with different connectivity rates and normal stresses, performed numerical simulations using granular flow software, and found that the shear strength of coplanar discontinuously jointed rock masses was mainly provided by rock bridges. Chen et al. 13 studied the effects of different joint undulation angles on the strength and deformation characteristics of jointed rock masses under the same normal stress conditions and found that the joint undulation angle had an important effect on the shear strength and tensile fracture development of rock masses. Yu et al. 14 introduced strength weakening factors in CPM media for simulating weathering leading to rock strength weakening effects, and with the help of the particle flow software PFC2D, they found that the weakening effect could lead to rapid crack development within the specimen. Zhou et al. [15][16][17][18][19][20] analyzed the mechanical change patterns and damage mechanisms during the direct shear of jointed rock masses at scales up to the macroscale. In terms of discontinuously jointed rock masses, Zhao et al. 21 conducted direct shear tests on fabricated model materials and found that the shear stress curves also exhibited obvious segmental characteristics. Jiang et al. 22 performed DEM numerical simulations of non-coplanar and discontinuously jointed rock masses and explained the changes in the mechanical properties of the rock masses through the distribution of interparticle pressure chains. Ghazvinian et al. 23 studied the propagation of tensile cracks in rock-like materials and found that an increase in joint length decreases the tensile strength, while a reduction in ligament length increases the tensile strength. Moreover, the Brazilian test was found to overestimate the tensile strength. Haeri et al. 24 used PFC2D simulations to reveal the significant impact of rock joint opening on the formation of shear bands and the failure behavior of rock bridges. Through both physical and PFC2D simulation experiments, Sarfarazi et al. 25,26 conducted research on the impact of joint number, angle, and separation on the shear failure of rock bridges. It was found that joint characteristics significantly influence the failure mode and strength of rock bridges and that the formation of shear bands is closely related to the arrangement of joints. Fu et al. 27 explored the impact of joint overlap on the shear behavior of rock bridges through experiments and numerical simulations and discovered that joint overlap reduces shear strength and leads to a transition in failure mode from progressive failure to brittle failure with increasing normal stress. Chen et al. 28 carried out numerical simulations of nonpenetrating joints under different confining pressure conditions and reported that the development of tensile fractures decreased with increasing confining pressure. Discrete elements can easily handle discontinuous medium mechanics and effectively reflect discontinuous processes such as cracking and separation of the simulated medium 29 .
In this paper, the discrete element software PFC2D is used to conduct numerical simulations of non-coplanar and discontinuously jointed rock masses under different normal stresses, joint dip angles, and shear rates in order to study their microscale propagation mechanisms.
Determination of the interparticle contact model
When using PFC modeling, the particles are rigidly connected to each other by default. The different contact modes and mechanical properties between the particles are the most critical part of the modeling and determine the basic mechanical properties of the whole model. The linear contact bond model provides a very small linear elastic mechanical contact that does not resist relative rotation among the contact surfaces of the particles and cannot carry frictional forces. The contact interface of the linear parallel bond model is a bonding interface with a finite cross-section that can transfer both forces and moments. The interparticle forces are reflected through the contact force chain. When the local stress exceeds the linear parallel bonding strength, the contact will undergo bond rupture, forming microcracks. Therefore, the linear parallel bond model can effectively simulate the mechanical properties of rock materials 30,31 .
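The bond-rupture bookkeeping described here can be illustrated with a small sketch. This is not PFC2D code and does not use its API; the stresses, strengths and data structures below are placeholders chosen only to show how a broken parallel bond might be recorded as a tensile or a shear microcrack.

```python
# Conceptual sketch (not PFC2D code) of how a broken parallel bond could be
# recorded as a tensile or shear microcrack. All values are placeholders.
from dataclasses import dataclass

@dataclass
class ParallelBond:
    normal_stress: float      # tensile stress acting across the bond, MPa
    shear_stress: float       # shear stress acting along the bond, MPa
    tensile_strength: float   # bond normal (tensile) strength, MPa
    shear_strength: float     # bond shear strength, MPa

def check_bond(bond: ParallelBond):
    """Return 'tensile crack', 'shear crack', or None if the bond survives."""
    if bond.normal_stress >= bond.tensile_strength:
        return "tensile crack"
    if bond.shear_stress >= bond.shear_strength:
        return "shear crack"
    return None

bonds = [
    ParallelBond(4.1, 1.0, 3.5, 8.0),   # exceeds tensile strength -> tensile crack
    ParallelBond(1.2, 9.3, 3.5, 8.0),   # exceeds shear strength -> shear crack
    ParallelBond(0.5, 2.0, 3.5, 8.0),   # survives
]
cracks = [c for b in bonds if (c := check_bond(b)) is not None]
print(cracks)   # ['tensile crack', 'shear crack']
```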
For the generation of joints, the SRM (synthetic rock mass) method in PFC 5.0 software can be used 32 ; the generated "joint" is just a line in the rock mass and does not have the physical properties of a real joint. In this case, the program will automatically recognize this line and replace the contact model of the surrounding particles with a smooth joint model and give it mechanical parameters so that it has the shape and properties of a real joint 33,34 .
Consequently, this study utilizes the linear contact bond model to simulate the interactions between particles and the wall, denoted by red lines; the linear parallel bond model to represent the contacts between particles, denoted by orange lines; and the smooth joint model to emulate the joints within the rock mass. The lines generated by the smooth joint model at the interface with the surrounding particles are marked with green lines. The details are shown in Fig. 1.
Determination of mesoscale parameters
This paper employs cement mortar with a mixing ratio of cement:sand:water = 2:3:1 to simulate rock material. The physical and mechanical parameters of the rock-like material are as follows: density, 2.65 g/cm³; compressive strength, 46.86 MPa; tensile strength, 3.51 MPa; elastic modulus, 8.75 GPa; Poisson's ratio, 0.25; cohesive force, 5.25 MPa; and friction angle, 46°. To obtain mechanical properties similar to those of real jointed rock masses, it is necessary to change the mesoscale parameters (particle stiffness ratio, contact modulus, bond strength, friction coefficient, etc.) and use PFC2D software to carry out a series of experiments similar to laboratory tests: uniaxial compression tests, Brazilian splitting tests, direct tensile tests and other tests.
Uniaxial compression tests are used to calibrate the uniaxial compressive strength and elastic modulus of a model. Figure 2 presents the uniaxial compressive stress-strain curves and failure diagrams from the physical experiment and numerical simulation. Figure 2 shows that the uniaxial compressive strengths of the specimens obtained from the physical experiment and numerical simulation test are 50.09 MPa and 46.86 MPa, respectively, which is a difference of 6.45%. The elastic moduli are 8.13 GPa and 8.75 GPa, respectively, with a discrepancy of 7.09%. The Brazilian splitting test is used to check the tensile strength of the specimen. Figure 3 shows the Brazilian tensile stress-strain curves and failure diagrams of the specimens. Figure 3 shows that the tensile strengths of the specimens in the two types of tests are 3.62 MPa and 3.51 MPa, respectively, with a difference of 3.04%.
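A quick arithmetic check of these discrepancies is sketched below; it simply recomputes the quoted percentages from the listed laboratory and simulated values, expressing each difference relative to the larger of the two numbers, which reproduces the figures given in the text.

```python
# Re-computing the quoted discrepancies between laboratory and simulated values.
# Each difference is expressed relative to the larger of the two numbers.
pairs = {
    "uniaxial compressive strength (MPa)": (50.09, 46.86),
    "elastic modulus (GPa)":               (8.13, 8.75),
    "tensile strength (MPa)":              (3.62, 3.51),
}
for name, (lab, sim) in pairs.items():
    diff = abs(lab - sim) / max(lab, sim) * 100
    print(f"{name}: lab = {lab}, simulation = {sim}, difference = {diff:.2f}%")
# Prints differences of about 6.45%, 7.09% and 3.04%, matching the text.
```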
The parameters obtained from physical experiments and numerical simulations are shown in Table 1. Therefore, the mechanical property results obtained by using PFC2D to establish model specimens are in good agreement with the results of laboratory tests conducted by Liu et al. 35,36 , which ensures that the established numerical model has application value for the study of the mechanical properties of discontinuously jointed rock masses. The final mesoscale mechanical parameters of the PFC simulations are shown in Table 2.
Numerical simulation test program
First, 6 walls are defined by the starting point coordinates, and the range surrounded by the walls is 200 mm × 200 mm. By setting the porosity to 0.02, the particles in the model specimen are uniformly distributed, with particle radii of Rmin = 0.87 mm and Rmax = 1.44 mm in the given range, generating a total of 9143 particles. The #1, #3, and #4 walls form the upper half of the shear box, and the #2, #5, and #6 walls form the lower half of the shear box. The lower half of the shear box is held stationary during the servo process so that the upper half moves to the right at a constant shear rate. For the internal arrangement of the four non-coplanar joints along a shear surface, the center points of the joints on the shear surface are shown in Fig. 4. Figure 4 is a schematic diagram of the PFC specimen, where the length of a joint l is 30 mm, and the joint dip angle θ is 45°. To study the effects of different normal stresses σ, different joint dips θ and different shear rates v on the strength and joint growth pattern of the discontinuously jointed rock mass, the specimen simulation scheme is shown in Table 3.
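As a small illustration of this joint layout, the sketch below converts a joint centre, the 30 mm length and the 45° dip angle into endpoint coordinates. The centre coordinates used here are placeholders; the actual positions appear only in Fig. 4 of the paper.

```python
# Converting a joint centre point, the 30 mm joint length and the 45 deg dip
# angle into endpoint coordinates. Centre coordinates are placeholders.
import math

def joint_endpoints(cx, cy, length_mm, dip_deg):
    """Return the two endpoints of a joint centred at (cx, cy)."""
    dx = 0.5 * length_mm * math.cos(math.radians(dip_deg))
    dy = 0.5 * length_mm * math.sin(math.radians(dip_deg))
    return (cx - dx, cy - dy), (cx + dx, cy + dy)

length, dip = 30.0, 45.0                                          # l = 30 mm, theta = 45 deg
centres = [(-60.0, 0.0), (-20.0, 0.0), (20.0, 0.0), (60.0, 0.0)]  # hypothetical, mm
for i, (cx, cy) in enumerate(centres, start=1):
    end1, end2 = joint_endpoints(cx, cy, length, dip)
    print(f"joint {i}: end1 = {end1}, end2 = {end2}")
```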
Effect of the normal stress
Figure 5 shows the shear stress-shear displacement curves and the specimen failure diagrams obtained under the conditions of five levels of normal stress, a joint dip angle of 45°, a joint length of 30 mm, and a shear rate of 0.06 mm/s in a direct shear test.
When the normal stress is 1.0 MPa, 2.0 MPa, 3.0 MPa, 4.0 MPa, and 5.0 MPa, the corresponding peak shear stresses are 4.258 MPa, 5.049 MPa, 5.46 MPa, 6.406 MPa, and 6.95 MPa, respectively; the corresponding peak displacements (defined as the shear displacement corresponding to the peak shear stress) are 1.08 mm, 1.26 mm, 1.27 mm, 1.48 mm, and 1.57 mm, respectively. The slope of the peak shear stress curve is consistent with that of the peak displacement curve, and both show a nonlinear increasing trend with increasing normal stress (see Fig. 5b). By comparing the specimen failure diagrams under the conditions of five levels of normal stress, it is found that the joint development of the rock mass is different under different normal stresses. When the normal stress is less than 3 MPa, the joints exhibit a significant climbing phenomenon under direct shear, which is specifically manifested by the appearance of large cracks in the area along the line connecting the loading end and the fixed end in the shear direction. In contrast, when the normal stress is greater than or equal to 3 MPa, the climbing phenomenon is not pronounced, and the failure surface is more fragmented. This indicates that under the condition of a 45° joint dip angle, an increase in the normal stress suppresses the climbing effect triggered by the direct shear action on the specimen.

Figure 6 presents the crack distribution and particle force chain diagrams corresponding to the peak shear stress under the conditions of five levels of normal stress (from left to right, the first diagram is the crack distribution, the second is the compressive force chain, and the third is the tensile force chain). In the crack distribution diagram, both the original and new joints are uniformly represented by red lines. In the first diagram for each level of normal stress, the heads and tails of four joints are marked, where A1, B1, C1, and D1 represent the heads of the first to fourth joints from the left, respectively, while A2, B2, C2, and D2 represent the tails of the first to fourth joints from the left, respectively. In the force chain diagrams, the green lines indicate compressive force chains, the red lines represent tensile force chains, and the black lines denote the original joints. The naming convention is consistent with that of the crack distribution diagram and will not be repeated here.
Here, A2-B1 is defined as the connection between the tail of joint A and the head of joint B, and B2-CC represents the connection between the tail of joint B and the middle of joint C, with other cases being similar. The crack distribution diagrams corresponding to the peak shear stress of the specimens under five levels of normal stress are compared. The following observations were made: (1) When the normal stress is 1 MPa and 2 MPa, the rock bridge between joints exhibits a "head-to-tail" connection pattern of cracks A2-B1, B2-C1, and C2-D1, and at this time, no tensile fissures appear at the lower end of the left joint of the rock mass. (2) When the normal stress is between 3 and 5 MPa, the rock bridge between joints A and B still maintains a "head-to-tail" failure pattern; in addition to the clear "head-to-tail" (B2-C1, C2-D1) failure paths appearing within the rock bridges between joints B and C and C and D, there are also accompanying failure paths from the end to the middle of the adjacent joint (B2-CC, CC-D1) within these two rock bridges. (3) Taking B2-C1 and C2-D1 as examples, with increasing normal stress, the former shows a trend of change from the "head-to-tail" connection of B2-C1 at 3 MPa to the "tail-to-tail" connection of B2-C2 at 5 MPa. In contrast, the latter exhibits a trend from the "head-to-tail" connection of C2-D1 at 3 MPa to the "head-to-head" connection of C1-D1 at 5 MPa. Concurrently, under the three levels of normal stress, tensile fissures extending deep into the specimen originate from the head of joint A.
Taking the center of the specimen as the origin of the coordinates, it can be observed from the tensile force chain and compressive force chain diagrams corresponding to the peak shear stress under five levels of normal stress that the green compressive stress chains are all distributed in a centrally symmetric manner. Under low normal stresses (1 MPa, 2 MPa), the tensile force chains each appear as two separate chains distributed within the rock bridges between joints A and B and C and D, exhibiting a centrally symmetric distribution pattern. However, when the normal stress is between 3 and 5 MPa, the tensile force chains within the rock bridges between joints A and B and C and D dissipate in all three rock masses, concentrating instead within the rock bridge between joints B and C. Additionally, dense regions of tensile force chains are located almost at the ends of joints B and C. In the green areas of B2-C2 and C1-D1 in the corresponding figure, sparse tensile force chains are observed. In conjunction with the crack distribution diagram, it can be inferred that the sparse areas are due to the development of cracks, which in turn indirectly corroborates the cause of the through-going failure of the rock bridges. This indicates that as the normal stress increases, the development of tensile force chains at the microscale decreases, while macroscopically, this results in the inhibition of tensile crack propagation.
Overall, at higher normal stresses the force chain lines in the corresponding figures are thicker, indicating a larger pressure between the particles, so a greater shear stress is required to cause shear displacement of the model specimen.
When the rock material is subjected to a certain degree of external load, mesoscale cracks will form inside the rock. The generation of such cracks in the numerical simulation of particle flow is characterized by the parameter setting of the contact model. Tensile or shear cracks occur in parallel bond models when the normal or tangential bond strength, respectively, is exceeded. To study the distribution and development of cracks under different normal stresses, the occurrence and number of cracks were tracked in the numerical simulation experiments. Figure 7 shows the stress-displacement curve for a normal stress of 3 MPa and the relationship between the number of cracks developed and the shear displacement under the action of five different normal stresses.
Combining the crack number curves from Fig. 7a,b, it can be observed that cracks begin to develop only after a certain amount of shear displacement has occurred in the specimen, and the number of cracks increases nonlinearly with increasing shear displacement. The fastest growth of the crack number curve occurs during the prepeak and postpeak stages. Additionally, both the total number of cracks within the specimen and the steepest slope of the curve increase with increasing normal stress. By comparing the shear stress-displacement curves in Fig. 5 and the crack number curves in Fig. 7 under the five levels of normal stress tested, it is evident that there is a distinct phased nature to the shear stress-displacement curves under different normal stress conditions, which can be specifically divided into the linear elastic stage, the prepeak stage, the postpeak stage, and the residual deformation stage.
Due to space limitations, this paper takes the shear stress-displacement curve and the crack number curve under the condition of 3 MPa of normal stress (as shown in Fig. 7b) as the subject of analysis. Below 0.71 mm, the shear stress is linearly elastic with respect to the shear displacement, indicating that the particles in the model specimen are compacted without the generation of new cracks (the linear elastic stage). As the shear displacement increases from 0.71 to 1.27 mm, the cracks begin to develop slowly, and the number of cracks remains steadily below 150, indicating that the specimen is subjected to the action of a low-level shear force, with the cracks developing stably (the prepeak stage). When the shear displacement reaches approximately 1.28 mm, the shear stress reaches its peak, after which there is a sharp decrease in the shear stress-displacement curve. At this point, the number of developing cracks increases rapidly (the postpeak stage). As the shear displacement continues to increase, the shear curve enters the residual deformation stage. When the shear displacement reaches 5.0 mm, the trend of crack development gradually plateaus, and the number of cracks approaches its maximum.
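These stages can be summarised with a small sketch that labels each shear displacement according to the breakpoints quoted above for the 3 MPa test; the displacement grid and the assumed start of the residual stage are placeholders, not the simulation output.

```python
# Labelling shear displacements with the four stages described for the 3 MPa
# test. The breakpoints at 0.71 mm and 1.28 mm follow the text; the start of
# the residual stage (2.0 mm) and the displacement grid are placeholders.
import numpy as np

elastic_end, peak_disp, residual_start = 0.71, 1.28, 2.0   # mm

def stage(d):
    if d <= elastic_end:
        return "linear elastic"
    if d <= peak_disp:
        return "pre-peak"
    if d <= residual_start:
        return "post-peak"
    return "residual"

disp = np.linspace(0.0, 5.0, 501)            # shear displacement grid, mm
labels = np.array([stage(d) for d in disp])
for name in ("linear elastic", "pre-peak", "post-peak", "residual"):
    d = disp[labels == name]
    print(f"{name}: {d.min():.2f}-{d.max():.2f} mm")
```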
When the shear displacement reaches 5.0 mm, the distribution of cracks within the model specimen under five different levels of normal stress is shown in Fig. 8. In the specimen block diagram, black lines represent the original joints, red lines represent new joints, and the rest is the particle model. Here, the built-in fragment language function of the PFC is used, where different colors indicate blocks generated at different times. Cracks are mainly distributed on both sides along the shear surface. It is evident that under the same shear displacement, the degree of cracking in the shear band area along the line connecting the loading end and the fixed end becomes denser with increasing normal stress. This is also the reason for the increase in the number of blocks along the shear band and the reduction in the volume of large blocks at the microscale. When the normal stress is 5 MPa and the shear displacement reaches 5.0 mm, cracks are more concentrated in the middle of adjacent joints, and a large number of cracks are also generated at the left loading end and the right fixed end.
The proportions of tensile and shear cracks, as percentages of the total number of cracks under different normal stresses, are shown in Fig. 9. The proportion of tensile cracks decreases with increasing normal stress, while the proportion of shear cracks increases. The trend of their relationship explains the specimen failure characteristics mentioned earlier; as the normal stress increases, the development of tensile cracks is suppressed, leading to the development of shear cracks along the shear band. The crack development of the specimen concentrates at the shear band along the line connecting the loading end and the fixed end (for details, see Fig. 8 in the previous text).
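As a purely illustrative piece of post-processing, not an analysis reported by the authors, a linear Mohr-Coulomb envelope, tau = c + sigma*tan(phi), can be fitted through the five peak shear stresses listed earlier in this section to obtain an apparent cohesion and friction angle for the jointed specimens.

```python
# Illustrative fit (not reported by the authors): tau = c + sigma * tan(phi)
# through the five peak shear stresses listed for the 45 deg joint dip angle.
import numpy as np

sigma = np.array([1.0, 2.0, 3.0, 4.0, 5.0])           # normal stress, MPa
tau = np.array([4.258, 5.049, 5.46, 6.406, 6.95])     # peak shear stress, MPa

slope, intercept = np.polyfit(sigma, tau, 1)          # least-squares straight line
print(f"apparent cohesion  c   ~ {intercept:.2f} MPa")
print(f"apparent friction  phi ~ {np.degrees(np.arctan(slope)):.1f} deg")
```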
Effect of the inclination angle
Under the conditions of five levels of normal stress (1-5 MPa), the variation in the peak shear stress in the model specimen with respect to the dip angle (0°-90°) is depicted in Fig. 10. All five peak shear stress curves exhibit an "S"-shaped variation with respect to the joint dip angle. The trend of shear strength variation with dip angle (0°-90°) simulated in this paper is similar to the experimental results obtained by Gehle 37 .
Regarding the minimum shear strength, when the normal stress increases from 1 to 5 MPa, the shear strength of the rock mass is at its minimum at a joint dip angle of 15°, and the joint dip angle corresponding to the minimum shear strength increases with increasing normal stress. When the normal stress is between 1 and 4 MPa, the shear strength of the rock mass is at its maximum at a joint dip angle of 65°. When the normal stress is between 4 and 5 MPa, the curve exhibits two extreme values, and at a normal stress of 5.0 MPa, the peak shear stress occurs in the special case of a joint dip angle of 45°.
Notably, when the normal stress is greater than or equal to 2 MPa, the curve exhibits a concave downward phenomenon within the 45°-65° range. To avoid randomness, the portion of the curve between 3 and 5 MPa (the gray area in the figure) is selected, and normal stresses of 3.5 MPa, 4.5 MPa, and 5.5 MPa are interpolated to study the variation in shear strength of specimens with joint dip angles from 45° to 70° under five levels of normal stress (see the inset in Fig. 10). The results indicate that as the normal stress increases from 3 to 5.5 MPa, the concave downward phenomenon of the curve within the 45°-65° range becomes pronounced. Moreover, when the normal stress exceeds 5 MPa, the shear strength of the specimen with a joint dip angle of 60° is less than that of the specimen with a joint dip angle of 45°.
This study investigated the failure modes of rock bridges in specimens with different joint dip angles (15°, 30°, 45°, 60°, 75°, and 90°). The variation in the joint dip angle changes the length of the rock bridges. In Fig. 11, the first diagram of each condition is the final failure diagram of the specimen, with different colors representing blocks generated at different times; the second diagram is the final crack distribution diagram, where different colors represent cracks at various stages; and the third diagram is the tensile crack-shear crack distribution diagram. In Fig. 11, the notation for the failure modes and connection patterns of the original joints is consistent with the previous text.
(1) When the joint dip angle is between 15° and 45°, the rock bridges between joints A and B, B and C, and C and D all exhibit a clear failure mode characterized by an "end-to-end" connection (A2-B1, B2-C1, C2-D1) starting from the ends of the original joints and extending toward the adjacent ends of the nearby original joints. According to the third diagram of each condition, the final failure of the rock bridge is predominantly caused by tensile cracks. (2) When the joint dip angle is between 30° and 60°, the crack initiation points shift from the ends of the original joints toward the middle of the model. The final failure mode within the three rock bridges evolves from an "end-to-end" connection pattern (A2-B1, B2-C1 at 30° and 45°) to a connection from joint ends to the middle of adjacent joints (A2-BB, B2-CC, C2-DD at 45° and 60°). (3) When the joint dip angle is 75°, a composite final failure mode is observed, consisting of a distinct connection from joint ends to the middle of adjacent joints (D1-CC, C1-BB, and B1-AA) and a more pronounced "tail-to-tail" connection (A2-B2 and B2-C2). (4) When the joint dip angle is 90°, the failure connection pattern includes both "head-to-head" and "tail-to-tail" connections.
Overall, the variation in the joint dip angle leads to a trend in which the final failure mode of the rock bridge evolves from an "end-to-end" connection to a combination of a "head-to-head" connection and a "tail-to-tail" connection; the crack initiation points develop from the ends of the original joints toward the middle of the joints and then back toward the joint ends. The final failure mode transitions from being predominantly tensile failure to a combination of tensile and shear failure (for specifics, see the changing density of shear cracks along the final failure paths in Fig. 11). With the normal stress remaining constant, different inclination angles result in varying lengths of joint connection in the rock bridges, and the longer a rock bridge is, the greater the shear stress required for final failure.
Figure 12 presents the shear stress-displacement curve and crack number curve for the specimen with a joint dip angle of 60° (see Fig. 12-1), as well as the crack development and particle force chain diagrams extracted during the direct shear test (see Fig. 12-2, where the first diagram is the crack distribution, the second is the compressive force chain diagram, and the third is the tensile force chain diagram). The shear displacements are 0.0 mm (initial loading stage), 0.82 mm (the starting point of the prepeak stage), 1.93 mm (the point of peak shear stress), 2.86 mm (the starting point of the residual deformation stage), and 5.0 mm (the end point of direct shear).
Combining Figs. 12-1 and 12-2, at the initial loading stage (see Fig. 12-2a), the top and bottom of the specimen are subjected to a pressure of 1 MPa, and the particles are filled with compressive force chains. At this point, the compaction stage of the particles occurs, and the distribution of the compressive force chains is uniform. At the beginning of the prepeak stage (Fig. 12-2b), the shear stress-displacement curve transitions from linear to nonlinear, and the number of cracks begins to increase. The compressive force chains start to concentrate at the loading end and the fixed end, connecting through the intermediate rock bridge to form a rectangular compressive force chain (from the upper left to the lower right). At this time, the tensile force chains are perpendicular to the compressive force chains (from the lower left to the upper right). The number of developing cracks is minimal, and most are distributed around the original joints. At the peak shear stress point (Fig. 12-2c), the shear stress-displacement curve decreases rapidly, while the number of developing cracks increases sharply from 300 to 800. A small macroscopic fracture zone is formed at the upper part of the shear plane, and the compressive force chains in the vicinity gradually disappear. The particle force chain diagram shrinks toward the shear surface, the pressure chains at the joint tips near the loading and fixed ends become larger, and their direction is parallel to the normal direction of the prefabricated joints. A comparison of the particle force chain diagram with the crack development diagram shows that under shear pressure, cracks are first produced at the right joint tip, and as the shear displacement continues to increase, the cracks gradually extend along the edges of the pressure chains to the neighboring joint sections. It should be noted that the cracks at the joint tips are formed due to damage under tension. Entering the residual stage (see Fig. 12-2d), the shear stress-displacement curve and the crack number curve gradually tend to stabilize. The particle force chain diagram indicates that the compressive force chains become thinner and pass through the rock bridge, with the overall tensile force chains gradually disappearing. The concentration of local tensile force chains at three points is the cause of the rock mass developing three cracks (see Fig. 12-2d and (e) A, B, C) that extend deep into the rock mass. At the end of direct shear (see Fig. 12-2e), the shear stress-displacement curve and the crack number curve show minimal changes. The crack distribution diagram and the tensile force chain diagram remain largely unchanged. Since the specimen is still being loaded during the residual deformation stage, the position of the blocks along the shear band changes, and the compressive force chains concentrated on these blocks exhibit variations during this phase. The crack development diagram indicates that the damage to the model specimen includes both tensile cracks and shear cracks. Therefore, the damage to the rock body with non-coplanar discontinuous joints develops from the initial tensile damage due to tensile stress concentrating at the tips of the joints to the final tensile-shear composite damage.
Effect of the shear displacement rate
In this subsection, the normal stress for the five experimental groups is set to 2 MPa, with a joint dip angle of 60°. Figure 13 presents the shear stress-displacement curves and crack number curves for shear rates ranging from 0.02 to 0.10 mm/s.
Combining Fig. 13a,b, it can be observed that in the linear elastic stage, variations in the shear rate have almost no effect on the shear stress-displacement curve or the crack number curve, with all five curves overlapping, indicating that no new cracks are generated within the specimen. In the prepeak stage, the trends of the curves change in accordance with the variation in shear rate, with all curves exhibiting a double-peak pattern with the peak shear stress occurring at the second peak. Correspondingly, the crack number curve shows a rapid increase before each peak. In the postpeak stage, all five curves rapidly decrease to the residual stage, with minimal differences in the shear displacement experienced during the postpeak phase, and the total number of cracks generated during this stage also does not vary significantly. In the residual stage, the shear stress generally remains stable, and the cracks generated due to frictional wear vary under different shear rates.
Based on Fig. 13, we further deduce the relationship between the peak shear strength of the model specimen and the different shear rates. Figure 14 presents curves showing how the peak shear strength and the particle contact force vary with the shear rate. The figure shows that the peak shear strength increases nonlinearly with increasing shear rate. When the shear rate increases from 0.02 to 0.04 mm/s, the shear strength increases by 0.67%, which is the smallest increase at this stage; when the shear rate increases from 0.02 to 0.1 mm/s, the shear strength increases by 6.95%. Overall, even though the increase in shear rate is relatively large, the corresponding increase in shear strength is modest.
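A quick check of the quoted overall increase, using the peak shear stresses given in the conclusions (about 5.05 MPa at 0.02 mm/s and 5.40 MPa at 0.10 mm/s), is sketched below; with these rounded values the relative increase comes out at roughly 6.9%, consistent with the ~6.95% stated in the text.

```python
# Checking the quoted overall effect of shear rate on peak shear stress,
# using the rounded values given in the conclusions.
tau_low_rate = 5.05    # MPa at a shear rate of 0.02 mm/s
tau_high_rate = 5.40   # MPa at a shear rate of 0.10 mm/s
increase = (tau_high_rate - tau_low_rate) / tau_low_rate * 100
print(f"relative increase in peak shear stress: {increase:.2f}%")
# ~6.93% with these rounded values, consistent with the ~6.95% in the text.
```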
To investigate the intrinsic mechanism of the effect of the shear rate on the peak shear stress, the maximum contact pressure between particles was monitored by the built-in Contact Force Mag module of PFC. As supported by Fig. 14, the particle contact pressure increases with increasing shear rate, and a greater contact pressure increases the degree of interparticle interlocking, which requires a greater shear force for shear damage.
The difference in the results when the shear rate is v = 0.02 versus 0.1 mm/s is most obvious in the elastic phase, and the shear stress curve shows that a high shear rate causes the elastic-phase curve to undulate. Entering the residual friction stage, the high-shear-rate curve has no stepwise stress drop but undulates greatly.

2. Under different joint dip angles and normal stresses, during the direct shear process, the variation curves of the peak shear stress with increasing dip angle all exhibit an "S"-shaped nonlinear pattern. The minimum value occurs at a joint dip angle of 15°, while the maximum value occurs at 65°. Moreover, when the normal stress is between 2 and 5.5 MPa, the curve exhibits a concave downward phenomenon within the 45°-65° dip angle range, and at a normal stress of 5.0 MPa, the peak shear stress occurs in the special case of a joint dip angle of 45°.

3. The joint dip angle has a significant impact on the final failure mode of rock bridges in the rock mass. Specifically, as the joint dip angle increases, the final failure mode of the rock bridges transitions from an "end-to-end" connection to a combination of a "head-to-head" connection and a "tail-to-tail" connection.

4. Under different shear rates, the trends of the shear stress-displacement curves and the crack number curves are the same. The peak shear stress increases from 5.05 to 5.40 MPa as the shear rate varies from 0.02 to 0.10 mm/s, which is an increase of 6.95%. Overall, the shear rate has a certain impact on the peak shear stress, but the impact is not significant.

5. The shear stress-displacement curve exhibits distinct stages. The spatial distribution of the tensile force chains changes as shearing progresses, and stress concentration occurs at the tips of the original joints, which is the reason for the development of long tensile cracks in the deeper parts.
Figure 2. Uniaxial compression test stress-strain curves and failure diagrams of the specimens from the physical experiment and numerical simulation.
Figure 5. Mechanical behavior during direct shear tests under different normal stresses.
Figure 6. Interparticle force chain diagrams under different normal stresses at the peak stress.
Figure 7. Crack number curves under different normal stresses and the shear stress-displacement curve at 3 MPa.
Figure 8. Distribution of cracks under sequential normal stresses.
Figure 9. The relationship between the crack development proportion and different normal stresses.
Figure 10. Relationship between the joint inclination angle and peak shear strength under five normal stresses.
Figure 11. The penetrating failure modes of joints with different inclination angles.
Figure 12. Test results for the 60° joint dip angle.
Figure 13. Shear stress-displacement curves and crack number curves under a range of shear rates.
Table 1. The results of the physical tests and numerical simulations.
Table 3. Numerical simulation scheme for a non-coplanar discontinuously jointed rock mass.
Figure 12-2. Particle force chain and crack development diagrams at different shear displacements.
Reduced Light Response of Neuronal Firing Activity in the Suprachiasmatic Nucleus and Optic Nerve of Cryptochrome-Deficient Mice
To examine roles of the Cryptochromes (Cry1 and Cry2) in mammalian circadian photoreception, we recorded single-unit neuronal firing activity in the suprachiasmatic nucleus (SCN), a primary circadian oscillator, and optic nerve fibers in vivo after retinal illumination in anesthetized Cry1 and Cry2 double-knockout (Cry-deficient) mice. In wild-type mice, most SCN neurons increased their firing frequency in response to retinal illumination at night, whereas only 17% of SCN neurons responded during the daytime. However, 40% of SCN neurons responded to light during the daytime, and 31% of SCN neurons responded at night in Cry-deficient mice. The magnitude of the photic response in SCN neurons at night was significantly lower (1.3-fold of spontaneous firing) in Cry-deficient mice than in wild-type mice (4.0-fold of spontaneous firing). In the optic nerve near the SCN, no difference in the proportion of light-responsive fibers was observed between daytime and nighttime in both genotypes. However, the response magnitude in the light-activated fibers (ON fibers) was high during the nighttime and low during the daytime in wild-type mice, whereas this day–night difference was not observed in Cry-deficient mice. In addition, we observed day–night differences in the spontaneous firing rates in the SCN in both genotypes and in the fibers of wild-type, but not Cry-deficient mice. We conclude that the low photo response in the SCN of Cry-deficient mice is caused by a circadian gating defect in the retina, suggesting that Cryptochromes are required for appropriate temporal photoreception in mammals.
Introduction
Circadian rhythms are oscillations with daily periodicities in physiological and behavioral functions of organisms. In mammals, the central circadian oscillator is located in the suprachiasmatic nucleus (SCN) of the ventral hypothalamus [1]. The rhythms are generated by a cell-autonomous circadian oscillator that is synchronized with the environment by light through the retinohypothalamic tract; hence, light synchronizes the behavior of the organism with the daily 24-hr light-dark (LD) cycle [2].
Cryptochromes are folate- and flavin-based members of the photolyase family of photopigments that are necessary for normal circadian phase shifting in Arabidopsis and Drosophila [17]. Mammals have two Cryptochrome family members, which are expressed in the inner retina [17,18] as well as many other tissues. Mice lacking Cryptochromes (mCry1−/− mCry2−/−) display severe defects in gene induction in the SCN [19,20], but retain normal pupillary light reflex [21] and masking [22]. Photoresponsiveness is markedly depressed in Rd/rd mCry1−/− mCry2−/− mice, as measured by masking, pupillary light reflex, and light-induced immediate-early gene expression in the SCN [19,21,23]. These studies indicate that both melanopsin and Cryptochromes contribute to nonvisual photoresponses and play important roles in these processes.
Because Cryptochromes also play a central role in the molecular clock mechanism [20,24], mCry1−/− mCry2−/− mice show arrhythmic behavior in constant darkness (DD) [19,24] and cannot be assayed for circadian phase shifting. Therefore, in the present study, we performed extracellular single-unit recordings of the neuronal firing activity in the SCN and the optic nerve of anesthetized mCry1−/− mCry2−/− mice in response to retinal illumination. Several rodent studies have shown that the photic responses of the electrical activities in the SCN are closely correlated with the photic entrainment properties in the locomotor activity rhythms [25,26,27]. The magnitude of the photic responses of electrical activities in the SCN depends on both the circadian phase [27] and the light intensity [25,26,27] in rats and hamsters. Recently, we successfully recorded the photic response in the firing of the mouse SCN in vivo, which corresponds to the properties of photic entrainment in the locomotor activity of mice [28]. In the present study, to assay circadian photoreception in mCry1−/− mCry2−/− mice, we compared the light response of neuronal firing activity in the SCN and the optic nerve during the daytime and the nighttime.
Results
For recordings in the SCN and optic fibers, we used 46 wild-type and 69 mCry1−/− mCry2−/− mice. We carried out a single recording per animal, in which 16 neurons in wild-type mice and 26 neurons in mCry1−/− mCry2−/− mice were recorded in the SCN. The rest of the animals were used for optic nerve fiber recordings.
Temporal differences in spontaneous neuronal firing activity in the SCN and optic nerve fibers
Representative photographs and oscilloscope traces for the single-unit recordings of the neuronal firing activity in the SCN and the optic nerve fibers of mice are shown in Figure 1A and B. We clearly distinguished spikes between the SCN and the optic nerve by means of the spike form and additionally confirmed the recording site using the histological method after the recording. We first measured the baseline of spontaneous firing activity in the SCN and optic nerve fibers of mice during the daytime (Zeitgeber time [ZT] 4-8) and the nighttime (ZT 14-16). We collected the baseline firing frequency (Hz) for 60 sec without retinal illumination. In the SCN, both wild-type and mCry1−/− mCry2−/− mice displayed a day-night variation in spontaneous firing activity (2.03±0.37 during the daytime vs. 1.09±0.21 during the nighttime in wild-type mice and 2.83±0.67 during the daytime vs. 1.54±0.17 during the nighttime in mCry1−/− mCry2−/−; P<0.05 for both genotypes, Student's t-test; Fig. 1C). In the optic nerve, wild-type mice showed a distinct day-night change (10.99±1.53 during the daytime vs. 4.13±0.72 during the nighttime; P<0.001, Student's t-test; Fig. 1D), whereas mCry1−/− mCry2−/− mice did not exhibit a day-night difference in spontaneous firing rate (8.72±0.12 during the daytime vs. 6.22±1.19 during the nighttime; Fig. 1D).
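As an illustration of this kind of day-versus-night comparison, a minimal sketch of a two-sample Student's t-test on per-neuron firing rates follows; the rate arrays are synthetic placeholders (only group means and SEMs are published), and only the statistical test itself mirrors the analysis used in the paper.

```python
# Two-sample Student's t-test on hypothetical day vs. night SCN firing rates.
# The arrays are synthetic; only group means/SEMs are reported in the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
day_rates = rng.normal(2.0, 0.9, size=10)     # Hz, hypothetical daytime neurons
night_rates = rng.normal(1.1, 0.6, size=10)   # Hz, hypothetical nighttime neurons

t, p = stats.ttest_ind(day_rates, night_rates)   # equal-variance Student's t-test
print(f"t = {t:.2f}, p = {p:.4f}")
```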
SCN light responsiveness
The spontaneously firing SCN neurons of mice showed three different types of responses to light [28]. The recordings consisted of periods before, during, and after retinal stimulation, and each stage lasted 60 sec. The first type of response was light-activated, in which the firing frequency increased during light exposure (Fig. 2A); the second type of response was light-suppressed, in which the firing rate decreased during light exposure (Fig. 2B); and the third type of response was unresponsive, in which changes in the firing rate during light exposure were less than 10% of the basal firing rate (Fig. 2C).
We compared the differences in the populations of the three types of responses in the SCN during the daytime and nighttime in wild-type and mCry1−/− mCry2−/− mice (Table 1). In wild-type mice, the spontaneous activities of six SCN neurons were recorded in the daytime, and one (17%) of the SCN neurons was light-activated whereas the remaining neurons (83%) were unresponsive. During the nighttime, 10 SCN neurons were recorded in wild-type mice; nine (90%) of them were light-activated and one neuron (10%) was light-suppressed. Statistical analysis revealed a day-night difference in the populations of response types in SCN neurons of wild-type mice (P<0.001, Dunn's test). In contrast, among 26 SCN neurons that were recorded in mCry1−/− mCry2−/− mice, 16 were recorded during the nighttime. In the nighttime recording, five neurons (31%) were light-activated and 11 neurons (69%) were unresponsive. Among the 10 neurons recorded during the daytime, one neuron (10%) was light-activated, three neurons (30%) were light-suppressed, and the remaining neurons (60%) were unresponsive. A day-night variation was not observed in the populations of response types in the SCN neurons of mCry1−/− mCry2−/− mice.
We next examined the magnitude of the change in neuronal firing activity induced by retinal illumination recorded in the SCN of wild-type and mCry1−/− mCry2−/− mice (Fig. 2D). Because few light-responsive neurons were observed during the daytime, the recording was carried out only in light-activated neurons during the nighttime. We collected the mean firing rate at each stage of light stimulation: before (60 sec), during (60 sec), and after (60 sec) light stimulation. The magnitude of the neuronal light response was calculated by using the following equation:

Magnitude (fold) = (mean firing rate during light stimulation) / [(mean firing rate before light stimulation + mean firing rate after light stimulation) / 2]

In the SCN, wild-type mice showed a magnitude of 4.01±0.79-fold whereas mCry1−/− mCry2−/− mice showed 1.35±0.15-fold. The magnitude of the neuronal light response in the SCN of mCry1−/− mCry2−/− mice during the nighttime was significantly lower than in the wild-type mice (P<0.05, Student's t-test).
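A minimal sketch of this fold-change calculation is shown below; the firing rates in the example are placeholders, not recorded data.

```python
# Fold-change helper implementing the formula above:
# fold change = rate during light / mean(rate before, rate after).
def response_magnitude(rate_before, rate_during, rate_after):
    baseline = (rate_before + rate_after) / 2.0
    return rate_during / baseline

# A neuron firing 1.0 Hz before, 4.2 Hz during and 1.1 Hz after illumination:
print(f"{response_magnitude(1.0, 4.2, 1.1):.2f}-fold")   # -> 4.00-fold
```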
Optic nerve fiber light responsiveness
To determine whether the reduced light response in the SCN of mCry1−/− mCry2−/− mice was caused by retinal defects, we examined the response of firing activity in the optic nerve to retinal illumination, the optic nerve being the neural pathway conveying light information to the SCN. The recordings and light stimulations were performed using the same procedure as for the SCN recording. We recorded three classes of light responses in optic nerve fibers, which is consistent with Hartline et al. [29] and other reports [30,31]. The first type was the ON fiber, which discharges vigorously when the retina is illuminated (Fig. 3A); the second was the OFF fiber, which discharges vigorously when the light is turned off (Fig. 3B); and the third was the ON/OFF fiber, which responds to both the onset and the termination of light (Fig. 3C).
We compared the differences in the populations of the three fiber classes in the optic nerve during the daytime and the nighttime in wild-type and mCry1−/− mCry2−/− mice (Table 2). In wild-type mice, the spontaneous activities of 14 optic nerve fibers were recorded in the daytime; eight (57%) were ON fibers, four (29%) were OFF fibers, and two (14%) were ON/OFF fibers. During the nighttime, 16 fibers were recorded in wild-type mice; nine (56%) were ON fibers and seven (44%) were OFF fibers. In mCry1−/− mCry2−/− mice, 20 fibers were recorded in the optic nerve during the daytime; twelve (60%) were ON fibers, six (30%) were OFF fibers, and the remaining fibers (10%) were ON/OFF fibers. During the nighttime, 23 fibers were recorded; 14 (61%) were ON fibers, seven (30%) were OFF fibers, and the remaining fibers (9%) were ON/OFF fibers. No day-night variation was observed in the populations of the three classes of fibers in the optic nerve of wild-type or mCry1−/− mCry2−/− mice. We also examined the magnitude of the neuronal light response recorded in the optic nerve fibers of wild-type and mCry1−/− mCry2−/− mice (Fig. 3D). These recordings were carried out only in ON fibers during the daytime and the nighttime, because OFF and ON/OFF fibers exhibit a specific response to retinal illumination and thus could not be assayed in the present study. The magnitudes of the neuronal light response during the daytime and nighttime recordings were 2.64 ± 0.60-fold and 13.36 ± 4.41-fold, respectively, in the ON fibers of wild-type mice. In contrast, the magnitudes of the neuronal light response during the daytime and nighttime recordings were 3.23 ± 0.57-fold and 3.02 ± 0.82-fold, respectively, in mCry1−/− mCry2−/− mice. The magnitude of the neuronal light response in the ON fibers of wild-type mice during the nighttime was significantly higher than both the daytime value in wild-type mice and the day and night values in mCry1−/− mCry2−/− mice (P < 0.01; Tukey's test). Thus, a day-night difference was observed in the magnitude of the neuronal light response in the ON fibers of wild-type mice but not in mCry1−/− mCry2−/− mice.
Discussion
In the present study, we investigated the effect of the loss of Cryptochromes on the light response of neuronal firing activity in the SCN. We found that mCry1−/− mCry2−/− mice showed a reduced firing-activity response to retinal illumination in nighttime recordings and did not exhibit a day-night variation in the frequency of the response types in the SCN. We also determined that mCry1−/− mCry2−/− mice have decreased photosensitivity of optic nerve fibers during the night. These results suggest that Cryptochromes play a key role in the sensitivity of circadian photoreception in mammals.
Our previous study demonstrated that the photo response of firing activity in the mouse SCN occurred in a phase-dependent manner and depended on light intensity [28]. These data indicate that the electrophysiological properties of the mouse SCN are similar to those of rats and hamsters [25,26,27] and may correspond to the properties of light-induced phase shifting in the locomotor activity of mice. In the present results, the SCN of wild-type mice showed a day-night difference in the populations of light-responsive neurons. On the other hand, a day-night variation was not observed in the proportion of response types in the SCN of mCry1−/− mCry2−/− mice. The magnitudes of the neuronal light responses in the SCN of mCry1−/− mCry2−/− mice were significantly lower than those of wild-type mice during the night. mCry1−/− mCry2−/− mice displayed a severe defect in c-fos induction in the SCN [19,20] and no day-night difference in c-fos induction in the SCN [32]. These data are consistent with the present results, suggesting that circadian photoreception sensitivity is reduced in the SCN of mCry1−/− mCry2−/− mice.
Because Cryptochromes have a transcriptional regulatory function in the molecular clock mechanism, mCry1−/− mCry2−/− mice do not exhibit circadian rhythms of locomotor activity in DD [19,24], suggesting that the SCN lacks central clock function in mCry1−/− mCry2−/− mice. Although the loss of the day-night difference in the photo response in the SCN of these animals could be attributed to this lack of SCN clock function, we consider this model unlikely because some evidence for partial clock function in the absence of Cryptochromes has been reported. For example, anticipatory wheel-running activity, in which mice exhibit increased locomotor activity in the hours just prior to lights off, has been reported in mCry1−/− mCry2−/− mice [20,22]. Under normal LD conditions, a single peak in circadian multiunit electrical activity was detected in in vitro SCN slices from these animals [33]. Our present results also show a temporal difference in the spontaneous neuronal activity of the SCN in mCry1−/− mCry2−/− mice. These data suggest that mCry1−/− mCry2−/− mice maintained under LD conditions have normal clock function of the SCN through the duration of one circadian cycle. Because we recorded the neuronal firing activity of these animals under LD conditions, the SCN retained normal clock function in Cry-deficient mice in the present study.
Our data demonstrate that the light response of ON fibers also exhibited a day-night difference in wild-type mice, although no day-night variation was observed in the populations of optic nerve fiber classes in wild-type or mCry1−/− mCry2−/− mice. Circadian rhythms have been detected in numerous invertebrate visual systems but are not unique to them [34]. A circadian clock in the brain of Limulus transmits efferent optic nerve activity to the lateral eyes and increases their sensitivity at night [35], enabling them to see nearly as well at night as they do during the day. Circadian rhythms have also been detected in functions such as photoreceptor disk shedding [36] and the light sensitivity of the retina in rats [37] and humans [38]. Therefore, it is possible that the day-night difference in the sensitivity of ON fibers is relevant to circadian photoreception. The observation that the ON fibers of mCry1−/− mCry2−/− mice lack these day-night variations suggests that Cryptochromes play a role in the retina in mammalian circadian photoreception. The discovery of intrinsically photoresponsive retinal ganglion cells (ipRGCs) has given non-visual phototransduction an anatomical basis [39]. Berson and colleagues used retrograde dye tracing from the circadian pacemaking cells in the rat SCN to define retinal ganglion cells projecting directly to the hypothalamus, and patch-clamp recordings showed that these cells are intrinsically photosensitive whereas retinal ganglion cells without retinohypothalamic projections had no intrinsic photosensitivity [40]. Although we could not determine whether all recorded optic nerve fibers were ipRGCs in the present study, Cryptochrome mRNAs are expressed in the retinal ganglion cells of mice [8]. Melanopsin is also expressed nearly exclusively in the ~1000 ipRGCs of the rodent retina [41]. The mammalian retina contains an intrinsic circadian clock that controls melatonin synthesis and many other retinal functions [42]. In addition, retinal ganglion cells express Period (Per) 1 and 2, Clock, and Bmal1, as well as Cry1 and 2, which are core molecular components of circadian clocks, and their expression is necessary for circadian rhythmicity [43]. Furthermore, studies using real-time reporting of the PER2::LUC fusion protein revealed that clock gene rhythms persist for >25 days in cultured mouse retinas [43,44]. These data suggest that the ipRGCs of the mammalian retina contain functionally autonomous circadian clocks. In the present study, mCry1−/− mCry2−/− mice did not exhibit a day-night difference in the spontaneous firing rate of optic fibers or in the magnitude of the neuronal light response in ON fibers. These results indicate that the low photosensitivity of the optic fibers is caused by the loss of Cryptochromes in the retina.
In summary, this study provides several findings regarding the neuronal light response in the SCN and optic fibers of Cry-deficient mice: (1) A day-night variation in the populations of response types in SCN neurons was observed in wild-type mice but not in mCry1−/− mCry2−/− mice. (2) The magnitude of the neuronal light response in the SCN of mCry1−/− mCry2−/− mice during the nighttime was significantly lower than in wild-type mice. (3) A day-night difference was observed in the magnitude of the neuronal light response in the ON fibers of wild-type mice but not in mCry1−/− mCry2−/− mice. These findings indicate that CRY deletion leads to a low photo response of the SCN and optic nerve fibers during the nighttime. In addition, we observed a day-night difference in the spontaneous firing rates of optic fibers in wild-type mice but not in mCry1−/− mCry2−/− mice, suggesting that CRY deletion also disrupts the circadian rhythms of the neural system in the retina. We conclude that the low photo response in the SCN of Cry-deficient mice is caused by a circadian gating defect in the retina, which suggests that Cryptochromes are required for appropriate temporal photoreception in mammals.
Animals
The mCry1−/− mCry2−/− mice (originally from the colony of Dr. T. Todo [Kyoto University, Kyoto, Japan]) and wild-type mice of a similar mixed background were generated as described previously [19,20]. Genotyping was carried out by PCR using two sets of primers that amplified the wild-type or the disrupted gene for each of the Cryptochrome genes. Animals were maintained under controlled conditions (room temperature, 24 ± 1 °C; humidity, 50% ± 5%) with food and water available ad libitum. Animals were housed under an LD cycle of 12 hr of light and 12 hr of darkness with a light intensity of 200-300 lux until the beginning of the experiment. All animal housing and experimental procedures were carried out in accordance with the guidelines of the Japanese Physiological Society and approved by the Institutional Animal Care and Use Committee of the Graduate School of Biomedical Sciences, Nagasaki University (approval ID#: 0206090168).
Preparation
Male mCry1−/− mCry2−/− mice and wild-type mice ranging from 3 to 5 months of age were used in the experiments. The experiments were carried out during the daytime, ZT 4-8 (ZT 12 is defined as the time of lights-off), and the nighttime, ZT 14-16. The mice studied during the daytime were transferred to constant darkness from ZT 12 on the previous day in order to prevent light adaptation and to create light conditions similar to those of the nighttime recordings. On the day of the experiment, mice were transferred to the experimental room after their eyes were covered with blindfolds. The surgery described below, which preceded the electrical recording, was performed under dim red light (<10 lux). The mice were anesthetized with 20% urethane solution (initial dose 2 g/kg, i.p.). Thereafter, the mice were placed in a stereotaxic instrument (Narishige, Tokyo, Japan) with the incisor bar set 2 mm below the ear bar, and cranial surgery was performed. The coordinates for the SCN in mice were 0.4 mm anterior to bregma, 0.1 mm lateral to the midline, and 5.0-5.5 mm below the dural surface. The pupils of the animals were dilated 30 min before the recordings by application of 1% atropine sulfate to the cornea.
Electrophysiological recordings
Electrophysiological experiments were performed as previously described [28]. Extracellular single-unit recordings were performed with a glass micropipette electrode (10-20 MΩ) filled with 2% Chicago Sky Blue (SIGMA, St. Louis, MO) in 0.5 M NaCl. The potentials were amplified, processed through a band-pass analog filter (100 Hz-3 kHz), and fed into a personal computer with an AD converter (PCI-6024E; National Instruments, Austin, TX). The frequency of neuronal single-unit firing was counted with customized software programmed in LabVIEW (National Instruments).
When spontaneous firing was recorded, we continued recording for 10 min without illumination to ensure stability of the firing activity. Stimulation was carried out only after the mean spontaneous firing rate over 60 sec had settled to within 10% of the value of the previous minutes. An increase or decrease in firing activity in response to the 60-sec light stimulation was defined as a change in mean frequency of more than 10% relative to the mean firing rate over the 60 sec preceding the stimulation.
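A minimal sketch of that 10% classification rule (illustrative only; the function and variable names are assumptions, not the authors' analysis software):

```python
import numpy as np

def classify_light_response(rates_before, rates_during, threshold=0.10):
    """Classify a unit as light-activated, light-suppressed, or unresponsive.

    rates_before / rates_during: firing rates (spikes/s) over the 60 s
    preceding and during the light pulse. A unit is responsive when its
    mean rate during light differs from baseline by more than `threshold`
    (10% by default), following the criterion described above.
    """
    baseline = np.mean(rates_before)
    change = (np.mean(rates_during) - baseline) / baseline
    if change > threshold:
        return "light-activated"
    if change < -threshold:
        return "light-suppressed"
    return "unresponsive"

# Hypothetical unit that roughly doubles its rate during the stimulus
print(classify_light_response([2.0, 2.2, 1.9], [4.1, 3.8, 4.4]))
```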
Light stimulation
A photic stimulation pulse was applied with an assembled light source of six blue-green high-intensity light-emitting diodes (λmax: 500 nm; E1L51-KC0A2-02; Toyota Gosei, Kasugai, Japan) to the eye of the animal with pupils dilated. The intensity of the light stimulation at the plane of the eye was 1.0 × 10^15 photons·cm−2·s−1. Light intensity was measured using a United Detector Technologies photometer (model S371; Hawthorne, CA).
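For reference, the photon flux can be converted to a radiometric irradiance using E = N·h·c/λ; the sketch below applies this standard conversion to the 500-nm, 1.0 × 10^15 photons·cm−2·s−1 stimulus (the resulting irradiance is a derived value, not one reported by the authors).

```python
# Convert photon flux to irradiance: E = N * h * c / wavelength
h = 6.626e-34          # Planck constant, J*s
c = 2.998e8            # speed of light, m/s
wavelength = 500e-9    # stimulus wavelength, m
photon_flux = 1.0e15   # photons per cm^2 per s (value from the text)

energy_per_photon = h * c / wavelength          # ~3.97e-19 J per photon
irradiance = photon_flux * energy_per_photon    # W per cm^2
print(f"{irradiance * 1e6:.0f} uW/cm^2")        # roughly 400 uW/cm^2
```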
Identification of the electrode position
At the end of the recording, a small negative current (3-5 µA; 3-5 min) was passed through the microelectrode to mark the recording site. The brain was removed and fixed overnight with 4% paraformaldehyde in phosphate-buffered saline. The brain was sliced (100 µm thick) with a micro slicer (Dosaka EM, Kyoto, Japan) and slices were counterstained with cresyl violet to verify the location of the SCN or optic nerve.
Data analysis and statistics
Differences between the proportions of responsive neurons or fibers during the daytime and the nighttime in wild-type and mCry1−/− mCry2−/− mice were analyzed using the non-parametric Kruskal-Wallis test followed by Dunn's post-hoc analyses. In other cases, Student's t-tests were used to examine the difference between two groups and one-way ANOVA with post-hoc Tukey's test was used to compare multiple groups. All results are presented as the mean ± SEM and were considered significant at P < 0.05.
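A hedged sketch of this statistical workflow in Python using SciPy and statsmodels (the toy firing-rate arrays and the choice of libraries are assumptions; the authors do not state which software they used, and Dunn's post-hoc test would require an additional package such as scikit-posthocs):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Toy data (spikes/s); real values come from the recordings described above.
wt_day, wt_night = np.array([2.1, 3.0, 2.4]), np.array([5.8, 6.4, 7.1])
ko_day, ko_night = np.array([2.2, 2.9, 2.6]), np.array([2.5, 3.1, 2.8])

# Kruskal-Wallis across groups (Dunn's post hoc would follow for the
# pairwise day/night comparisons reported in the paper).
H, p_kw = stats.kruskal(wt_day, wt_night, ko_day, ko_night)

# Student's t-test for a two-group comparison (e.g. WT vs KO magnitudes).
t, p_t = stats.ttest_ind(wt_night, ko_night)

# One-way ANOVA followed by Tukey's HSD for multi-group comparisons.
F, p_anova = stats.f_oneway(wt_day, wt_night, ko_day, ko_night)
values = np.concatenate([wt_day, wt_night, ko_day, ko_night])
labels = (["wt_day"] * 3 + ["wt_night"] * 3 + ["ko_day"] * 3 + ["ko_night"] * 3)
print(f"Kruskal-Wallis p={p_kw:.3g}, t-test p={p_t:.3g}, ANOVA p={p_anova:.3g}")
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```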
FAK Inhibition Induces Glioblastoma Cell Senescence-Like State through p62 and p27
Focal adhesion kinase (FAK) is a central component of focal adhesions that regulate cancer cell proliferation and migration. Here, we studied the effects of FAK inhibition in glioblastoma (GBM), a fast growing brain tumor that has a poor prognosis. Treating GBM cells with the FAK inhibitor PF-573228 induced a proliferative arrest and increased cell size. PF-573228 also reduced the growth of GBM neurospheres. These effects were associated with increased p27/CDKN1B levels and β-galactosidase activity, compatible with acquisition of senescence. Interestingly, FAK inhibition repressed the expression of the autophagy cargo receptor p62/SQSTM-1. Moreover, depleting p62 in GBM cells also induced a senescent-like phenotype through transcriptional upregulation of p27. Our results indicate that FAK inhibition arrests GBM cell proliferation, resulting in cell senescence, and pinpoint p62 as being key to this process. These findings highlight the possible therapeutic value of targeting FAK in GBM.
Introduction
Glioblastoma (GBM) is the most common malignant primary brain tumor. At present, it remains an incurable disease. This is due to its fast growth and infiltration into the brain parenchyma, which results in tumor recurrence. Consequently, novel therapies are needed to improve upon the current standard treatment, which consists of surgery, radiotherapy, and chemotherapy with temozolomide.
Focal adhesion kinase (FAK/PTK2) is a cytoplasmic tyrosine kinase that regulates the signaling cascades emanating from integrins and growth factor receptors. FAK modulates fundamental processes in cancer cells, such as cell proliferation, migration, and survival, and in the tumor microenvironment, for example, inducing angiogenesis [1,2]. The crucial involvement of FAK in cancer cell migration, invasion, and metastasis is well established [3]. By linking integrin signaling to the actin cytoskeleton, FAK controls the formation and disassembly of focal adhesions and cell migration dynamics [2,4-6].
Active FAK phosphorylated at Y397, a residue targeted and dephosphorylated by Phosphatase and Tensin homolog (PTEN) [7], is observed in GBM cell lines [8]. FAK activation promotes cell proliferation by accelerating G1 to S phase transition through increased expression of cyclin-D1/CCND1 and reducing p21/CDKN1A cyclin-dependent kinase (cdk) inhibitor levels [1,9,10]. Moreover, PTEN negatively regulates G1/S transition. This occurs by inhibiting the expression of S-phase kinase-associated protein-2 (SKP2), a component of the ubiquitin ligase Skp1, Cul1 and F-box (SCF) complex, and thus preventing the degradation of the cdk inhibitor p27/CDKN1B [11]. Thus, loss of PTEN in GBM supports FAK activation and resistance to apoptosis induced by the lack of cell-matrix contact.
Targeting of FAK has been considered in preclinical and clinical oncological trials [2,12]. Here, we used PF-573228, an inhibitor of the catalytic activity of FAK [2,13], to investigate its effects on GBM cell proliferation. FAK inhibition reduced GBM cell proliferation of adherent and GBM neurosphere cultures. Interestingly, PF-573228 increased p27/CDKN1B levels and β-galactosidase activity and decreased p62/SQSTM-1 expression. We also found that p62-depleted cells transcriptionally upregulate p27. Therefore, p62 appeared negatively regulated in senescence-like cell cycle arrest.
FAK Inhibition Reshapes GBM Cell Morphology, Increasing Cellular and Nuclear Size
We firstly analyzed the levels of total and active phosphorylated Y397 (PY397) FAK in cell lysates of different GBM cell lines. Active FAK was detected under basal conditions in the four GBM cell lines tested (Figure 1A). Cell lysates of three GBMs and two low-grade astrocytomas were also studied and FAK levels compared to those of mouse embryonic fibroblast (MEF) cell lysates. Total and active FAK levels were lower in GBMs compared with grade II gliomas or MEF (used as a control) (Figure S1A). This is consistent with in silico analysis of FAK/PTK2 mRNA levels that confirmed a lower expression in GBM compared with astrocytoma biopsies (Figure S1B).
Figure 1 (partial legend). (A) GBM cell lines (A172, U251-MG, U87-MG, and T98G) were analyzed for PY397 FAK and total FAK; β-actin was used as a loading control. GBM cell lines display active PY397 FAK, with U251-MG and U87-MG showing the highest levels. (B) U251-MG and U87-MG cell lysates (control or treated with PF-573228, 10 µM) were analyzed for active and total FAK; β-actin was used as a loading control. The FAK inhibitor effectively reduced PY397 FAK levels. (C) Glial Fibrillary Acidic Protein (GFAP), βIII-tubulin, and Lamin B1 immunostainings performed in U251-MG cells after 4-5 days of treatment with PF-573228 (10 µM). Cytoskeleton remodeling accompanied by cell body enlargement and lobulated/enlarged nuclei is revealed by Lamin B1 immunostaining. Bars = 28 µm.
For the rest of the study, we used the GBM cell lines U251-MG and U87-MG, which displayed the highest levels of active FAK. Treatment of cells with PF-573228 (10 µM) for 24 hours resulted in the reduction of FAK activity, evidenced by decreased levels of PY397 FAK (Figure 1B), and severely altered their morphology (Figure S1D). Similar results were obtained with another FAK inhibitor, Defactinib (VS-6063/PF-04554878), at 5 µM (Figure S1C,D). We confirmed a striking remodeling of the cytoskeleton (revealed by Glial Fibrillary Acidic Protein (GFAP) and βIII-tubulin immunostainings; Figure 1C) and increased cell size following treatment with PF-573228. Furthermore, Lamin B1 immunostaining highlighted larger lobulated nuclei following FAK inhibition (Figure 1C).
FAK Inhibition Reduces GBM Cell Proliferation
Next, we studied whether FAK inhibition affected GBM cell proliferation. We firstly performed WST-1 viability assays in GBM cells treated with different concentrations of PF-573228 (from 5 to 40 µM) for 24 hours. The results showed a significant decrease in cell viability from 10 µM in U87-MG cells and at 40 µM in U251-MG (Figure 2A). We also performed clonogenic assays to evaluate the capacity of cells to proliferate into clones. Cells grown in the presence of PF-573228 formed about 70% fewer cell colonies than untreated cells (Figure 2B,C). Again, cells treated with PF-573228 appeared strikingly flatter and larger than control cells (Figure 2D). WST-1 and clonogenic assays can reflect changes in both cell proliferation and survival. We did not observe significant cell death in GBM cells treated with FAK inhibitors. These results, therefore, suggest that FAK inhibition reduced cell proliferation.
To specifically address the question of FAK inhibition affecting cell proliferation, we performed immunostaining against Ki67, a marker expressed by proliferative cells. Ki67 protein levels vary along the cell cycle, being higher in the G2/M phase and lower in the G0/G1 phase [14]. We counted the number of cells showing high (Ki67++), medium (Ki67+), or low (Ki67−) immunoreactivity for Ki67 after four days of treatment with PF-573228. We found a decrease of ~25% in the number of Ki67+ cells and an increase of ~30% in Ki67− cells in both the U87-MG and U251-MG cell lines (Figure 3A,B). At the same time, we observed a dramatic decrease in the mean number of cells/field after the four days of treatment (92% and 72% decrease compared with the control in U87-MG and U251-MG cell lines, respectively; Figure 3C). Finally, we investigated the effect of FAK inhibition on the growth of neurospheres (NSs), a model culture reflecting the proliferation of stem-cell-like cells. U87-MG-derived NSs grown for seven days in the presence of 10 µM PF-573228 showed a median diameter 23% shorter than that of the control NSs (Figure 4). We concluded that PF-573228 stops cell proliferation of different GBM cultures and increases Ki67− cells, possibly reflecting G0/G1 phase cells [14].
PF-573228 Increases p27 Protein Levels and β-Galactosidase Activity
Having observed the induction of cell cycle arrest and increased cell size, we decided to study the possible acquisition of a senescent phenotype following FAK inhibition. Firstly, we performed senescence-associated β-galactosidase (SA-β-gal) staining on GBM cells, either control or treated with PF-573228, which reveals the increased activity of lysosomes at acidic pH typical of senescent cells [15]. Cultures treated with PF-573228 for four days showed a marked increase in the percentage of SA-β-gal-positive cells versus control cultures (58% and 44% increase in U87-MG and U251-MG cells, respectively; Figure 5A).
Analysis of Cdk inhibitors helps to identify senescent cells [16,17]. We, therefore, measured the mRNA levels of p21/CDKN1A/CIP1 and p27/CDKN1B/KIP1 in GBM cells after two or four days of treatment with PF-573228. The results showed no significant differences between control and treated cells (Figure 5B). However, p27 protein levels increased following treatment with PF-573228 in U87-MG and U251-MG cells (Figure 5C). These results suggest that PF-573228 stops GBM cell proliferation by stabilizing p27 protein.
SKP2 is a ubiquitin ligase that targets p27 [18] and is regulated by FAK [19,20]. We investigated whether SKP2 was modulated by PF-573228. We confirmed reduced SKP2 levels in parallel with increased p27 levels after PF-573228 treatment (Figure 5D). Therefore, treatment with PF-573228 increases both SA-β-gal staining and p27 protein in GBM cells.
Figure 5 (partial legend). (B) p27/CDKN1B and p21/CDKN1A mRNA levels did not change significantly between control cells or cells treated with PF-573228 (10 µM, two or four days; n = 3). (C) p27 protein levels were analyzed in control cells and cells treated with PF-573228 for two or four days. β-actin was used as loading control. Quantification of p27 normalized to β-actin indicates that p27 significantly increases after two days of treatment with PF-573228, and after two and four days in U87-MG and U251-MG cells (* p < 0.05; n ≥ 4). (D) SKP2 protein levels were analyzed in control cells or cells treated with PF-573228 for two or four days. β-actin was used as loading control. The plot represents SKP2 levels normalized vs. control, which decrease in PF-573228-treated cells (*** p < 0.001; n = 4).
p62/SQSTM-1 Expression is Reduced upon FAK Inhibition
P62/SQSTM-1 links autophagy and activation of different signaling pathways during tumorigenesis [21]. Its best-known role in autophagy is that of a cargo receptor of autophagosomes, although it can also be a substrate. Moreover, p62 is phosphorylated by Cdk1 to achieve optimal transition through mitosis [22]. PF-573228 significantly reduced p62 protein levels compared with untreated cells (Figure 6A). This result could be explained by its degradation through autophagy or by transcriptional repression. To clarify these possibilities, the mRNA levels of p62 were measured by real-time qPCR. Interestingly, we observed that p62 expression decreases upon PF-573228 treatment (Figure 6B). We confirmed this finding using Defactinib, which increased the percentage of SA-β-gal-positive GBM cells in parallel to decreasing p62 expression (Figure S2), similar to PF-573228. These findings suggest that the decay of p62 is associated with the proliferative arrest promoted by FAK inhibition, consistent with a pro-neoplastic role of p62 [22-24].
Figure 6 (partial legend). p62 mRNA levels decrease after FAK inhibition (** p < 0.01; *** p < 0.001; n = 4).
Discussion
We investigated the effects of FAK inhibition on GBM cell proliferation. Our results from cell viability, clonogenic, and Ki67 immunostaining experiments indicate that the cell cycle is arrested upon treatment with PF-573228. The proliferative arrest occurs through increased p27 protein levels (p27 or p21 mRNA levels remain unaltered) and phenotypically correlates with a flattened cell body and SA-β-gal positivity, suggesting senescence entry. Interestingly, p62 is repressed by the FAK inhibitors PF-573228 and Defactinib. This finding prompted us to analyze p62-depleted cells, which transcriptionally upregulate p27 and increase SA-β-gal activity. Our results, therefore, indicate that p62 downregulation is associated with senescent phenotypes. In fact, p62 has recently been associated with longevity, and its absence with aging, in C. elegans [25]. We propose that FAK inhibition may be a valid strategy to counteract GBM progression through senescence deregulation.
Senescence is a cell cycle arrest program controlled by different Cdk inhibitors depending on the senescence trigger. It has been linked to Lamin B1 loss [26] (occurring through autophagy in oncogenic senescence [27]) and to a secretory phenotype [16,28]. It is related to an enlarged flat morphology and hypertrophy [29] supported by extensive cytoskeletal changes [30]. Importantly, senescent cells appear to contain smaller focal adhesion contacts with hypophosphorylated FAK, which could account for their impaired proliferation and migration capacities [30]. The GBM cell proliferative arrest observed upon pharmacological inhibition of FAK appears critically regulated by p27, a Cdk inhibitor involved in therapy-induced senescence (TIS) [16,31]. However, p27 has also been associated with cell quiescence [28]. Analysis of additional senescence markers, like those involved in a secretory phenotype or in TIS, awaits future research and should help clarify the observed cell phenotype. Senescence-like induction through the p27 pathway may be a consequence of defects in the p53 and p16 pro-senescence pathways in GBM and could be exploited to overcome the PTEN loss in this tumor [11,32].
FAK inhibitors reduce cell proliferation, induce apoptosis, and slow GBM growth in vitro [8] and in vivo [33]. Specifically, PF-573228 promotes proliferative arrest through decreased CyclinB1 and Lamin A/C, and induces cancer cell senescence [34]. The effects of PF-573228 in GBM cell proliferation were linked to increasing numbers of Ki67-negative cells and to the stabilization of p27, as a result of SKP2 downregulation [11,19,32]. These results are consistent with the finding that the inactive Y397F FAK mutant reduces proliferation by reducing the levels of cyclins (D1 and E) and increasing those of p27 and p21 [9]. In contrast, the effects of PF-573228 seem independent of the p53-p21 axis [16] in GBM. We did not observe apparent cell death upon PF-573228 treatment. Yet, effects on apoptosis in U87-MG cells cannot be ruled out as they were described for other FAK inhibitors [8,33]. Finally, the pluripotency gene Nanog that regulates the proliferation of glioma stem cells [35,36] is activated through phosphorylation by FAK [37], and its inhibition could explain the effects of PF-573228 on neurosphere growth.
Autophagy is a catabolic process allowing the degradation of proteins and damaged organelles. The relationship between autophagy and senescence is complex. While autophagy induction supports cell quiescence [38], impaired autophagy is considered a senescence driver [28,29,38]. Thus, decreased selective autophagy inhibits the degradation of proteins required in senescence such as GATA4 [39]. Previously, FAK depletion was linked to autophagy through the targeting of active Src to autophagosomes [40]. Here, we studied the adaptor protein p62, a central autophagy player and signaling modulator [21], upon FAK inhibition. P62 levels decrease following FAK inhibition, both by PF-573228 and Defactinib, resulting in a cell cycle arrest compatible with cell senescence. P62 is overexpressed in cancer, including GBM [41]. Indeed, several studies presented tumorigenic roles for p62 [23,24,42,43], while p62 knockdown reduced Ki67 immunostaining and esophageal carcinoma growth [44]. In addition, p62 phosphorylated by Cdk1 is involved in the control of mitosis [22]. P62 upregulates SKP2, at both the mRNA and protein levels, through PKCiota and the proteasome system [44,45]. Furthermore, the p62/SKP2 axis promotes p21 and p27 degradation [45]. Importantly, we found that p62 gene silencing upregulates p27 expression, triggering a senescent phenotype. Salazar et al. also observed senescence of vascular smooth muscle cells upon silencing p62 [46]. Decreased p62 would lead to reduced amounts of SKP2, resulting in the stabilization of p27 in addition to the regulation of p27 transcripts demonstrated here. P62 nuclear functions are ill-defined in spite of its nuclear localization signals [21] and nuclear shuttling [47]. Thus, the mechanisms by which p62 can modulate p27 expression remain unidentified. Our findings concerning the pharmacological inhibition of FAK with PF-573228 or the silencing of p62 highlight the importance of p62 in cell senescence through p27 (Figure 7E). How FAK inhibitors can regulate p62 expression remains unclear. Identified binding sites on the p62 promoter, including those for AP-1 or NRF2 [21], could potentially be involved. Further studies are needed to clarify the integration of p62 in FAK signaling. Collectively, we observed a proliferative arrest indicative of senescence linked to p62 repression after FAK inhibition. Our findings could be exploited by targeting FAK alone or in combination with temozolomide [33]. In addition, FAK inhibitors could be combined with senolytic agents in order to eliminate senescent-like cancer cells, which have been linked to inflammation and recurrence, to reduce GBM progression. Nevertheless, the implementation of preclinical models is the necessary next step to validate FAK as a valuable chemotherapeutic target in GBM. While in vitro data show that FAK inhibitors have an interesting profile against GBM, an important caveat is their bioavailability in the brain. Blood-brain barrier (BBB) permeability and brain efflux index are unknown for these compounds, so whether they can achieve clinically relevant concentrations in GBM tumors is a conundrum. Although the BBB is disrupted in GBMs, in some tumor regions it can be intact and effectively preclude drug delivery [48]. The BBB permeability to chemotherapeutics in GBM is an active area of research, and different strategies are being investigated in order to enhance drug delivery [49].
Thus, the translational relevance of the findings reported here needs to be tested in vivo in GBM models, using patient-derived GBM cells and carefully monitoring the effective penetration of the FAK inhibitor into the brain.
Cell Culture
GBM cell lines were obtained from American Tissue Culture Collection (ATCC) and maintained in minimal essential medium (Thermo Fisher Scientific, Waltham, MA, USA; 21090022) containing 10% heat-inactivated fetal bovine serum (FBS; Thermo Fisher Scientific 10270098), penicillin/streptomycin (Thermo Fisher Scientific 15140-122), L-glutamine (Thermo Fisher Scientific, 25030-081), 1% non-essential aminoacids (Thermo Fisher Scientific 11140-035). U87-MG (ATCC), and U251-MG cell lines were authenticated by short tandem repeat profiling (Stab Vida, Portugal) following purification of genomic DNA using the Maxwell16 Tissue DNA kit (Promega, Madison, Wisconsin; AS1030). Cell lines were grown in mycoplasma-free rooms and mycoplasma testing was performed by PCR (primers used were forward: GGCGAATGGGTGAGTAACACG and reverse: CGGATAACGCTTGCGACCTATG). Cells that tested positive were either discarded or treated with Plasmocin (Thermo Fisher Scientific ant-mpt-1). Cell lines were passaged for 20-25 passages. Primary GBM cell cultures were isolated as previously described [41] from surgical biopsies obtained from Hospital Arnau de Vilanova of Lleida (Spain), following approval by the review board of the IRBLleida Biobank and of the ethical committee of the University of Lleida (code 235/CEIC/2019).
Immunoblot Analysis
Cells were washed with PBS and lysed in Tris 62.5 mM, pH 6.8, and 2% sodium dodecyl sulfate (SDS). Cell lysates were separated by SDS-polyacrylamide gel electrophoresis and gels were transferred to a polyvinylidene difluoride (PVDF) membrane (Merck Millipore IPVH00010). Membranes were cut to probe different antibodies on the same membrane. Membranes were blocked with 5% milk and incubated overnight with primary antibodies. Blots were developed using enhanced chemiluminescence (ECL Western Blotting Substrate, Thermo Fisher Scientific 32106) or Immobilon Forte Western Horse Radish Peroxidase substrate (Merck-Millipore WBLUF0100). Band intensity was measured using ImageJ software and normalized against β-actin.
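As an illustration of the normalization step only (the band intensities below are invented ImageJ measurements, not data from the paper):

```python
import numpy as np

# Hypothetical ImageJ band intensities (arbitrary units).
p27_bands   = np.array([1200.0, 2100.0, 2600.0])    # control, 2 d, 4 d PF-573228
actin_bands = np.array([5000.0, 4900.0, 5100.0])     # matching beta-actin bands

normalized = p27_bands / actin_bands                  # p27 relative to loading control
fold_vs_control = normalized / normalized[0]          # fold change vs. untreated
print(np.round(fold_vs_control, 2))                   # e.g. [1.   1.79 2.12]
```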
shRNA-Induced Gene Silencing by Lentiviral Infection
Lentiviral-based pLKO.1-puro vectors were used for RNA interference-mediated gene silencing, containing short hairpin RNAs (shRNAs) that were either scrambled (against the sequence 5'-CAACAAGATGAAGAGCACCAA-3') or directed against human p62/SQSTM1 (Mission RNA, Merck Sigma-Aldrich, TRCN0000007237). Lentiviral particles were produced in HEK293T (human embryonic kidney) cells for 72 h upon transfection with shRNA vectors, together with psPAX2 and pMD2G plasmids, using polyethylenimine. Medium was then centrifuged at 2,500 rpm and filtered through a 0.45 µm membrane. Cells were incubated with medium containing lentiviral particles together with polybrene (1.75 µg/mL) for 24 h. Medium was replaced after 24 h and cells were cultured for seven days to allow the knockdown. Puromycin (2 µg/mL) was added to media to select for resistant cells and refreshed after three days.
Immunohistochemistry
Cells were plated on Poly-D-Lysine (PDL)-coated (25 µg/mL) coverslips and treated with PF-573228 (10 µM). Treatments were performed for 2 or 4-5 days (refreshing treatments at day 2), as indicated. Cells were fixed using 4% paraformaldehyde (20 min, room temperature (RT)); washed with phosphate buffered saline (PBS); permeabilized with Triton X-100 0.2% for 4 min; and blocked in 5% FBS, 5% horse serum, and 0.2% glycine in PBS. Cells were incubated with primary antibodies (overnight, 4 °C) and subsequently washed and incubated with Alexa Fluor 488 or 594 secondary antibodies (Thermo Fisher Scientific) and Hoechst. Coverslips were mounted on Mowiol and images were obtained using an inverted Olympus IX70 microscope (10×, 0.3 numerical aperture (NA); 20×, 0.4 NA; 32×, 0.4 NA) equipped with epifluorescence optics and a camera (Olympus OM-4 Ti). DPM Manager Software was used to manage the pictures. Ki67 immunoreactivity was quantified using ImageJ by counting the cells according to the intensity of the Ki67 immunostaining and compared with the total number of nuclei stained by Hoechst.
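A minimal sketch of how the intensity-based Ki67 classes could be tallied from per-cell ImageJ measurements (the thresholds and values are hypothetical; the paper does not report numeric cut-offs):

```python
import numpy as np

def ki67_categories(intensities, low_cut, high_cut):
    """Split per-cell Ki67 mean intensities into Ki67-, Ki67+ and Ki67++ fractions.

    low_cut / high_cut are user-chosen intensity thresholds; the original
    study scored cells by immunostaining intensity, so these values are
    illustrative only.
    """
    intensities = np.asarray(intensities)
    neg  = np.sum(intensities < low_cut)
    mid  = np.sum((intensities >= low_cut) & (intensities < high_cut))
    high = np.sum(intensities >= high_cut)
    total = len(intensities)
    return {"Ki67-": neg / total, "Ki67+": mid / total, "Ki67++": high / total}

# Hypothetical measurements from one field of treated cells
print(ki67_categories([12, 35, 80, 15, 22, 95, 40, 8], low_cut=20, high_cut=60))
```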
Senescence-Associated β-Galactosidase
Cells were washed in PBS (pH 7.4), fixed for 4 min with 0.5% glutaraldehyde in PBS, and washed with 2 mM MgCl2 in PBS. Cells were then incubated with fresh senescence-associated stain solution (20 mg/mL X-Gal, 5 mM K3Fe(CN)6, 5 mM K4Fe(CN)6, and 2 mM MgCl2 in PBS, pH 6.0) for 6-8 h at 37 °C. Cell nuclei were counterstained with Hoechst, pictured, and counted. Plots represent the % of SA-β-gal-positive cells compared with the total number of cells stained by Hoechst (>100 control cells/field and 20-60 treated cells/field from five fields were counted from at least three independent experiments).
Statistical Analyses and Bioinformatics
Statistical significance was assessed by performing one-way analysis of variance (ANOVA) test (as indicated) or Student's t-test. Asterisks represent different significance levels (* p < 0.05; ** p < 0.01; and *** p < 0.001). Experiments are represented as mean ± SEM (n ≥ 3). Expression analysis of FAK mRNA levels in non-tumoral, astrocytoma, and GBM samples was performed using Gliovis [50].
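For completeness, a small sketch of a two-group comparison and the asterisk convention used in the figures (the SA-β-gal percentages are invented; this is not the authors' analysis script):

```python
from scipy import stats

def significance_stars(p):
    """Map a p-value to the asterisk convention used in the figures."""
    if p < 0.001:
        return "***"
    if p < 0.01:
        return "**"
    if p < 0.05:
        return "*"
    return "ns"

# Hypothetical % SA-beta-gal-positive cells per independent experiment
control = [5.2, 7.1, 6.4]
treated = [52.3, 60.1, 55.8]
t, p = stats.ttest_ind(treated, control)
print(f"p = {p:.4g} -> {significance_stars(p)}")
```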
Antiviral activity of ouabain against a Brazilian Zika virus strain
Zika virus (ZIKV) is an emerging arbovirus associated with neurological disorders. Currently, no specific vaccines or antivirals are available to treat ZIKV infection. Ouabain, a cardiotonic steroid known as a Na+/K+-ATPase inhibitor, has been previously described as an immunomodulatory substance by our group. Here, we evaluated for the first time the antiviral activity of this promising substance against a Brazilian ZIKV strain. Vero cells were treated with different concentrations of ouabain before and after infection with ZIKV. The antiviral effect was evaluated by the TCID50 method and RT-qPCR. Ouabain presented a dose-dependent inhibitory effect against ZIKV, mainly when added post infection. The reduction of infectious virus was accompanied by a decrease in ZIKV RNA levels, suggesting that the mechanism of ZIKV inhibition by ouabain occurred at the replication step. In addition, our in silico data demonstrated a conformational stability and favorable binding free energy of ouabain in the binding sites of the NS5-RdRp and NS3-helicase proteins, which could be related to its mechanism of action. Taken together, these data demonstrate the antiviral activity of ouabain against a Brazilian ZIKV strain and evidence the potential of cardiotonic steroids as promising antiviral agents.
Zika virus (ZIKV) was first isolated in 1947 in the Zika forest in Uganda 1. Since its identification, several cases of ZIKV infection in humans had been reported, but the virus only drew worldwide attention during the 2015-2016 epidemic in Brazil, when the association between ZIKV and microcephaly in neonates was first reported. This emerging virus is responsible for rash disease and also for severe neurological manifestations in adults and neonates, such as Guillain-Barre syndrome in adults and congenital Zika syndrome (CZS) 2. CZS is characterized by microcephaly, ocular anomalies, congenital contractures and other neurological lesions 3. Since its emergence in the Americas, ZIKV has spread rapidly, with its presence reported in 87 countries 2.
ZIKV is an arthropod-borne virus (arbovirus) in the genus Flavivirus and the family Flaviviridae, which includes several other arboviruses of clinical importance (e.g., dengue virus [DENV], yellow fever virus [YFV] and West Nile virus [WNV]). Like the other flaviviruses, ZIKV is a positive-sense single-stranded RNA (+ssRNA) virus with a genome size of approximately 11 kilobases. The RNA is translated into a single polyprotein encoding three structural proteins (capsid [C], precursor membrane [prM]/membrane [M] and envelope [E]) and seven nonstructural proteins (NS1, NS2A, NS2B, NS3, NS4A, NS4B and NS5). The structural proteins form the virus particle and mediate the initial steps of virus-host interaction, whereas the non-structural proteins assist in replication and packaging of the genome as well as evasion of immune defense mechanisms 4. Several ZIKV proteins, such as the envelope (E) protein, NS2B-NS3 protease, NS3 helicase, NS5 methyltransferase and NS5 RNA-dependent RNA-polymerase (RdRp), have been implicated as potential targets for antiviral drugs. Molecular docking, a method that predicts the interaction between a ligand and its target protein, has been widely used in the field of drug screening for in silico identification of antiviral candidates and also for postulating the mechanism of action of discovered antivirals 5.
To date, no specific vaccines or drugs are available to treat ZIKV infection 2. Moreover, the virus is still circulating in several regions of the world and could potentially cause new outbreaks, including in Brazil, where the African lineage was recently reported in addition to the Asian lineage responsible for the 2015-2016 epidemic.
Results
Cytotoxicity and antiviral activity of ouabain in Vero cells. To assess the cytotoxicity and antiviral activity of ouabain against ZIKV, we determined its IC50, CC50, CC20 and SI for Vero cells upon treatment with various drug concentrations. As shown in Fig. 1 and Table 1, we found that its CC20 and CC50 values were at nanomolar concentrations, 20 and 68.9 nM, respectively. Thus, 20 nM (CC20) was defined as the maximal nontoxic concentration for antiviral screening 21. The IC50 value of ouabain was also found at a nanomolar concentration in both treatments (pre-treatment IC50 = 2.33 nM; post-treatment IC50 = 1.92 nM). In addition, we calculated the SI of ouabain based on the effective drug concentration that results in 50% virus inhibition (IC50) and the drug concentration that leads to 50% cytotoxicity (CC50). The SI (CC50/IC50) values found were 29.5 and 35.8 for pre-treatment and post-treatment, respectively, demonstrating that the antiviral effects of ouabain are not related to cytotoxicity.
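A hedged sketch of how an IC50 and the selectivity index can be derived by fitting a four-parameter logistic curve to dose-response data (the inhibition values below are invented for illustration; only the CC50 of 68.9 nM is taken from the text, and the authors do not describe their exact fitting procedure):

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve (increasing with dose)."""
    return bottom + (top - bottom) / (1.0 + (ic50 / conc) ** hill)

# Hypothetical % inhibition of ZIKV at increasing ouabain concentrations (nM)
conc = np.array([0.5, 1, 2.5, 5, 10, 20])
inhibition = np.array([10, 30, 55, 75, 90, 98])

params, _ = curve_fit(four_pl, conc, inhibition, p0=[0, 100, 2, 1], maxfev=10000)
ic50 = params[2]
cc50 = 68.9                      # cytotoxicity value reported in the text (nM)
print(f"IC50 ~ {ic50:.2f} nM, SI = {cc50 / ic50:.1f}")
```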
Ouabain treatment reduces ZIKV replication. As shown in Fig. 2, ouabain was capable of reducing the infectious virus titer when compared to the control in both assays (Fig. 2A,B). This reduction was observed at all concentrations tested, with percentage decreases of 61.2%, 81.8%, 89.7% and 98% in the pre-treatment assay and 93.7%, 96.3%, 99% and 99.3% in the post-treatment assay at concentrations of 2.5, 5, 10 and 20 nM, respectively (Fig. 2C,D). The highest concentration in the post-infection assay (Fig. 2B) decreased the virus titer by 2.2 log10 units in comparison to the control, which is similar to the reduction achieved with the positive control, 6MMPr 21 (2.5 log10, 99.7% reduction). In addition, this concentration also prevented the morphological changes associated with the progression of infection (shrinkage and clumping) that characterize the ZIKV cytopathic effect, as can be seen in Fig. 2E.
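To make the percentage and log10 conversions explicit, the following sketch computes both from hypothetical TCID50 titers (the numbers are chosen only to reproduce a ~2.2 log10 drop and are not the study's raw data):

```python
import numpy as np

def titer_reduction(control_titer, treated_titer):
    """Percent and log10 reduction of infectious titer (e.g. TCID50/mL)."""
    percent = 100.0 * (1.0 - treated_titer / control_titer)
    log10_drop = np.log10(control_titer / treated_titer)
    return percent, log10_drop

# Hypothetical TCID50/mL values for untreated vs. 20 nM ouabain
pct, logs = titer_reduction(1.0e6, 6.3e3)
print(f"{pct:.1f}% reduction, {logs:.1f} log10 units")   # ~99.4%, ~2.2 log10
```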
Ouabain treatment reduces ZIKV RNA copy numbers. As shown in Fig. 3A, ouabain pre-treatment did not interfere with the RNA copy numbers in comparison to the control group. On the other hand, post-treatment with ouabain was effective in reducing these RNA levels (Fig. 3B). This reduction was observed at all concentrations tested, with percentage decreases of 65.6%, 71.9%, 71.3% and 78.1% at concentrations of 2.5, 5, 10 and 20 nM, respectively.
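A minimal sketch of the standard-curve conversion from Ct values to RNA copy numbers (the slope, intercept, and Ct values are placeholders; the real curve comes from serial dilutions of the RNA standard):

```python
import numpy as np

def copies_from_ct(ct_values, slope, intercept):
    """Convert Ct values to RNA copy numbers using a standard-curve fit.

    The standard curve is Ct = slope * log10(copies) + intercept, so
    copies = 10 ** ((Ct - intercept) / slope). Slope and intercept here
    are invented; real values come from the serial-dilution standards.
    """
    ct_values = np.asarray(ct_values, dtype=float)
    return 10.0 ** ((ct_values - intercept) / slope)

# Hypothetical standard curve: slope -3.3 (~100% efficiency), intercept 38
print(copies_from_ct([24.8, 30.1], slope=-3.3, intercept=38.0))
```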
Ouabain did not impair the initial step of the virus life cycle. To determine whether ouabain interferes with the entry of the virus into the cell, a time-of-addition experiment was performed. As shown in Fig. 4A, ouabain did not present direct virucidal activity on ZIKV particles. Incubation of the virus with ouabain at different concentrations (2.5, 5, 10 and 20 nM) prior to addition to the cells had no effect on viral titers (Fig. 4A), demonstrating that the antiviral activity of this cardiotonic steroid is not attributable to inactivation of the virus.
Next, we investigated the effect of ouabain against virus attachment to the host cell. Our data showed that ouabain did not interfere with adsorption or internalization of the virus. Different ouabain concentrations (2.5, 5, 10 and 20 nM) had no effect on viral titer when compared to the control group in both assays (Fig. 4B,C).
Binding pose analysis and relative binding affinity between ouabain and the main drug targets of ZIKV. In order to assess the ability of ouabain to interact with non-structural ZIKV proteins, we performed a molecular docking procedure followed by molecular dynamics simulations. The following NSPs of ZIKV were used: NS3-Helicase, NS5-Mtase, and NS5-RdRp. As a first step, we looked for druggable binding cavities using the chemical information of small molecules that were co-crystallized with these receptors, and we found a total of 10, 2 and 2 binding sites in NS3-Helicase, NS5-Mtase, and NS5-RdRp, respectively. A representation of these cavities is depicted in Fig. 5.
Upon identification of the druggable binding cavities, it was possible to test whether ouabain can interact with the same regions of these proteins. Therefore, we used this biological information to guide our molecular docking protocol to predict the ouabain binding pose. Table 2 presents the docking scores (Edocking) for ouabain at the different binding sites. Molecular docking indicated that ouabain interacts with high affinity at binding site 1 of NS5-Mtase, followed by NS5-Mtase binding site 0 and NS3-Helicase binding site 7. The GA of the GOLD software was not able to produce binding poses of ouabain in the NS5-RdRp binding site 0.
Figure 1. Cytotoxicity of ouabain. Vero cells were treated with different ouabain concentrations (n = 5). After 120 h of incubation at 37 °C, the CPE score was evaluated using an inverted microscope (A) and the CPE score was obtained (B). After that, the MTT solution (1 mg/mL) was added to each well and the microplate was incubated for 4 h. The optical density was determined by spectrophotometry at 540 nm (C). MTT values are presented as mean ± standard deviation of three independent experiments. CPE values correspond to the mean CPE scores of replicates.
The complexes obtained from the docking step were subjected to molecular dynamics simulations to assess the stability of ouabain, as well as to rescore the relative binding affinity using the MM/PBSA method and semi-empirical quantum calculations. Figure 6 shows the RMSD profiles of the NSP-ouabain complexes obtained from the MD simulation trajectories. We observed that the backbones of NS3-Helicase, NS5-Mtase, and NS5-RdRp did not undergo conformational changes over time when in complex with ouabain at the different binding sites. Further, considering the last 5 ns of the MD simulations, ouabain achieved conformational stability in binding sites 0, 1, 5, and 7 of NS3-Helicase. Regarding the NS5-Mtase/ouabain and NS5-RdRp/ouabain complexes, the starting geometry of ouabain at binding pocket 1 of the Mtase and RdRp domains (NS5) underwent conformational changes at the beginning of the MD simulation; however, after 5 ns of MD simulation, ouabain reached a stable conformation.
Figure legend (partial). Values are presented as mean ± standard deviation of three independent experiments and analyzed by one-way analysis of variance (ANOVA) followed by Tukey post-test; *P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001 significant in relation to the control.
The relative binding affinity predicted by the MM/PBSA method showed that ouabain has a high affinity for binding site 1 of NS5-RdRp (ΔGbind = −10.58 kcal mol−1), followed by NS3-Helicase binding site 5 (ΔGbind = −9.48 kcal mol−1) (Table 2). Similarly, the calculated enthalpy of binding (ΔHbind) showed that ouabain has a high affinity for binding site 1 of NS5-RdRp (ΔHbind = −121.76 kcal mol−1), followed by NS3-Helicase (ΔHbind = −117.30 kcal mol−1) (Table 2). The Pearson correlation coefficient (R2) between the relative affinities calculated by the MM/PBSA method and by semi-empirical quantum calculations was 0.77. Figure 7 shows the occurrence of hydrogen bonds between ouabain and the binding sites that achieved ΔGbind ≤ −4.0 kcal mol−1 (Table 2).
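To illustrate how such a correlation between scoring methods is computed (the ΔG arrays below are placeholders, not the per-site values from Table 2):

```python
import numpy as np
from scipy import stats

# Placeholder relative binding affinities (kcal/mol) for the same set of
# binding sites, scored by two different methods.
dg_mmpbsa = np.array([-10.6, -9.5, -6.2, -4.1, -2.0])
dg_semiempirical = np.array([-121.8, -117.3, -80.5, -55.0, -30.2])

r, p = stats.pearsonr(dg_mmpbsa, dg_semiempirical)
print(f"R^2 = {r**2:.2f} (p = {p:.3f})")
```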
At binding site 5 of the NS3-helicase protein, a hydrogen bond between the oxygen and nitrogen backbone atoms of Leu507 and ouabain seemed to play an important role in the stability of ouabain (Figs. 7A, 8A). Also, the oxygen backbone atom of Ala517 formed a hydrogen bond with ouabain. Furthermore, considering an occurrence percentage greater than 30% as a "hot spot" for hydrogen bonding with ouabain, the following residues of NS3-Helicase can be highlighted: Glu231 and Asn417 (Fig. 7A). Regarding the NS5 protein, the NS5-Mtase domain did not form strong hydrogen bonds with ouabain along the MD simulation (Figs. 7B, 8B). In contrast, the residues Glu460, Ser663, Asp665, and Ser712 from the NS5-RdRp domain formed strong hydrogen bonds with ouabain, with Glu186 being the most important one (Figs. 7B, 8C).
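A sketch of how hydrogen-bond occurrence percentages of this kind are typically tallied from an MD trajectory (the per-frame sets and residue names are invented; the authors used their own analysis tools):

```python
def hbond_occurrence(frames, residue):
    """Percent of trajectory frames in which `residue` hydrogen-bonds the ligand.

    `frames` is a list of per-frame sets of residues that form a hydrogen
    bond with ouabain, as produced by any H-bond detection tool.
    """
    hits = sum(1 for frame in frames if residue in frame)
    return 100.0 * hits / len(frames)

# Hypothetical per-frame H-bond partners over a short trajectory
frames = [{"Leu507", "Asn417"}, {"Leu507"}, {"Glu231"}, {"Leu507", "Asn417"}]
for res in ("Leu507", "Asn417", "Glu231"):
    print(res, f"{hbond_occurrence(frames, res):.0f}%")
```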
Discussion
The explosive epidemics of ZIKV in Brazil and other Latin American countries in 2015-2016 and the recent Indian outbreak highlight the potential of ZIKV for rapid spread in the population. Despite the severe consequences of ZIKV infection, no specific vaccines or antivirals are currently available to treat the associated diseases 2. Cardiotonic steroids, which are used in patients with congestive heart failure, have been investigated to treat other diseases, such as cancer 22 and viral infections 15. In this work, we demonstrate the potential of the cardiotonic steroid ouabain against a Brazilian ZIKV strain isolated during the 2015 epidemic in Brazil 23.
Ouabain has antiviral activity against several RNA and DNA viruses, such as cytomegalovirus 17 , herpes simplex virus 24 , influenza virus 19 , human immunodeficiency virus 16 , Japanese encephalitis virus 25 and coronaviruses 18 , including the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) 26 . By the time of writing this manuscript, the effect of ouabain against two ZIKV strains (ZIKV H/PF/2013 from French Polynesia and ZIKV MRS from Martinique) was described by Guo et al. 20 . Therefore, here we report our independent findings of ouabain inhibition of a Brazilian ZIKV strain and discuss similarities and differences between these studies.
Our data showed that ouabain was capable of reducing viral titer at nanomolar concentrations, concurring with several previous studies 16-19. This inhibitory effect was observed in both pre-treatment and post-treatment assays. However, treatment with ouabain after the infection was more effective in reducing the ZIKV titer, with a decrease of up to 2.2 log10 units (99.3%) at the highest concentration, similar to the positive control 6MMPr, a drug previously identified by our group as a potent ZIKV inhibitor 21. Additionally, we evaluated the RNA copy numbers and observed that ouabain was capable of reducing viral RNA levels. This effect on viral RNA was seen only in the post-treatment assay, corroborating the results obtained with virus titration. Notably, these data indicate that this substance shows effective antiviral activity against ZIKV mostly when administered post infection, in agreement with the data published by Guo et al. 20. Although the IC50 value in both studies was in the nanomolar range, we reported the ouabain effect using a concentration approximately 50-fold lower than that shown by them 20, demonstrating the potential of ouabain against ZIKV at lower concentrations. Although the Brazilian ZIKV strain and the strains used by Guo et al. 20 share a high level of genetic similarity, strains could present differences in the percentages of infected cells, RNA accumulation and/or viral progeny, as previously described 28,29. In this way, the differences between the ZIKV strains used in the two studies may have accounted for the observed variation in effective ouabain concentration.
Figure legend (partial). After 120 h, the supernatant was collected and the RNA copy numbers were quantified by reverse transcription quantitative PCR (RT-qPCR). The RNA copy numbers were calculated using a standard curve. Values are presented as mean ± standard deviation of three independent experiments and analyzed by one-way analysis of variance (ANOVA) followed by Tukey post-test; **P < 0.01, ***P < 0.001 significant in relation to the control.
Ouabain has been shown to interfere both in the entry (e.g., against coronaviruses) 18 and in the replication stage (e.g., against influenza virus) 19 of the virus life cycle. To get more insight into the antiviral effect against ZIKV, we analyzed whether the action of ouabain could be related to the entry stage. We found that this substance is not capable of blocking the virus adsorption or internalization steps, nor can it directly inactivate the virus (virucidal activity). Together, these data suggest that ouabain may not interfere in the entry but seems to act at a post-entry stage of the ZIKV life cycle. This hypothesis is supported by the previous results in which the substance showed strong inhibitory effects when added post infection. In addition, the replication step appears to be the main mechanism by which cardiotonic steroids demonstrate their antiviral activity, as reported by Amarelle and Lecuona 15. Here, we found that ouabain was capable of reducing ZIKV RNA copy numbers, suggesting an action of ouabain at the replication stage, and this was corroborated by Guo et al. 20.
The antiviral effect of cardiotonic steroids has been attributed to binding to their receptor, Na+/K+-ATPase 18-20. This binding can result in changes in the intracellular concentrations of sodium, potassium, and calcium and/or also trigger signaling transduction pathways 7. It has been reported that both of these mechanisms are involved in the activity of cardiotonic steroids on viral replication 15. However, Mastrangelo and colleagues demonstrated that ouabain could also have a direct action on viral proteins 30. To test this hypothesis, we performed a molecular docking procedure followed by molecular dynamics simulations to predict the interaction between ouabain and the main drug targets in the ZIKV proteome (NS3 Helicase, NS5 Mtase and NS5 RdRp). Our data indicated that ouabain possibly interacts at binding site 5 of NS3-Helicase (ΔGbind MM/PBSA = −9.48 kcal mol−1). Furthermore, ouabain also showed favorable binding energies (ΔGbind MM/PBSA < −4 kcal/mol) in binding sites close to important segments of the enzyme, such as the loop regions related to ATP hydrolysis (P-loop) and RNA binding (RNA binding domain) (Fig. 8A). Kumar et al. 31 evaluated the antiviral potential of the polyphenol EGCG (PubChem ID: 65064) against the NS3-helicase of ZIKV. Molecular docking and MD simulation showed that EGCG forms hydrogen bonds at the ATPase site, mainly with N417, as well as in the RNA binding site. In vitro assays showed an inhibition of NTPase activity with an IC50 of 295.7 nM. The hydrogen bond analysis performed in our study showed that ouabain interacts strongly with the same residue, N417, near the P-loop. Therefore, it is possible that ouabain inhibits ZIKV replication through an allosteric effect on the NS3-helicase enzyme. Additionally, our data corroborate the findings of Mastrangelo et al. 30, who found that ouabain presented a ΔG for NS3 helicase domains of West Nile Virus (WNV) ranging from −11.5 to −9.5 kcal mol−1.
In addition, the authors demonstrated that ouabain inhibited the dsRNA unwinding activity of the WNV helicase in vitro, evidencing ouabain as a potential flavivirus helicase-binding compound 30 .
Moreover, our in silico data also indicated that ouabain possibly interacts with the RdRp domain of the NS5 protein, with ∆G bind (MM/PBSA) = −10.58 kcal mol −1 and ∆H bind = −121.76 kcal mol −1 (Table 2). Pattnaik et al. 32 , using a molecular docking approach, showed that the TPB compound (PubChem ID: 1619825) attaches to the NS5-RdRp active site and forms hydrogen bonds with residues D535 and D665. Cell-based assays showed that TPB had an IC 50 of 94 nM. Our in silico results showed that ouabain also interacts close to the active site of RdRp (site 1), with residues E460, D665, and D666. These residues (D535, D665, and D666) are highly conserved in the active site of flavivirus RdRps and play an important role in the polymerization of the RNA strand. In this way, our results suggest that ouabain may prevent the replication of ZIKV through an allosteric effect on the RdRp enzyme. Therefore, the stability and longer residence time in the binding sites of the NS3-helicase and NS5 RdRp could be related to the inhibition of the activity of these viral proteins by ouabain. The NS3 helicase provides the chemical energy to unwind viral RNA replication intermediates, which facilitates the replication of the viral genome in concert with the NS5 RdRp 31,33 . Therefore, these proteins play an essential role in the viral life cycle, and their inhibition results in a deficient production of viral particles 5 . In-depth biochemical studies characterizing the interaction of ouabain with individual ZIKV proteins are warranted and will shed light on its mechanisms of action against viruses.
The in silico data reveal new insights into the targets of ouabain during ZIKV infection. Although the classical explanation for the multiple effects of cardiac steroids has been through the Na + /K + -ATPase, other targets cannot be excluded. The works of Valente et al. 34 and Alonso et al. 35 demonstrated that the fluorescent analogue OUABDP easily reaches the cytoplasm of Ma104 cells and HeLa cells, respectively. Valente's work raises the possibility that this substance may conceivably reach the cytoplasm and bind to steroid receptors or, alternatively, to its own intracellular (yet unknown) receptor. In addition, Alonso's work observed that ouabain binds to the Na + /K + -ATPase and the complex formed is internalized, later co-localizing with the mitochondria, indicating the existence of a mitochondrial binding site for the ouabain-Na + /K + -ATPase complex. Together, these works suggest that ouabain could have binding sites inside the cell; however, further studies are needed to better understand this possibility.
Drug repositioning, a strategy for identifying new uses for approved drugs, is increasingly becoming an attractive proposition due to the reduced drug development cost and time 36 . Cardiotonic steroids, used in patients with congestive heart failure, have been suggested to present other therapeutic effects. Ouabain has been described as a potential candidate to treat viral infections, due to its capacity to target host cell proteins, which helps to minimize resistance, and also due to its effectiveness against a broad spectrum of virus species 15 . Here, we demonstrated for the first time the antiviral activity of this cardiotonic steroid against a Brazilian ZIKV strain, making this compound an attractive candidate for further in vivo studies aimed at finding effective antivirals against ZIKV.
In addition to their action as drugs, cardiotonic steroids were identified in human fluids and tissues by the end of the past century 8 . Endogenous ouabain has been widely studied for its role in physiology. It is known that ouabain levels in the plasma vary from the picomolar to the nanomolar range; nevertheless, there are physiological situations, such as pregnancy and intensive exercise, that increase these levels 37 . Moreover, endogenous ouabain has been described as an immunomodulatory substance [10][11][12][13][14] and as a modulator of neuroinflammation 38 . Considering that ZIKV infection causes an intense inflammatory response in the brain and that ouabain presents anti-inflammatory and antiviral activity, a question arises: could endogenous ouabain inhibit or modulate ZIKV infection in humans? Further studies are needed to understand whether this hormone could act as a physiological antiviral agent.
Conclusions
In conclusion, our findings demonstrated the antiviral activity of ouabain against a Brazilian ZIKV strain for the first time, concurring with previous data obtained with other ZIKV lineages. Moreover, our in silico data revealed new insights into the targets of ouabain during ZIKV infection. Finally, this study helps to confirm the effectiveness of cardiotonic steroids as promising antiviral agents.
Materials and methods
In vitro experiments. Cells. […] study 23 . The PE243 strain was propagated and titrated on Vero cells by the fifty-percent tissue culture infectious dose (TCID 50 ) method 39 , using the cytopathic effect (CPE) as the readout.
Viral titration. Vero cells were cultivated in 96-well plates at a density of 1 × 10⁴ cells/well at 37 °C in a 5% CO 2 incubator one day prior to titration. Supernatants from the antiviral assays were tenfold serially diluted in DMEM. The diluted supernatant was then added to the cells, which were further incubated for 5 days at 37 °C and 5% CO 2 . The cytopathic effect was evaluated on an inverted optical microscope and the reduction of viral titer was expressed as log 10 TCID 50 /mL.
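As an illustration of the titration readout, the sketch below implements a Reed–Muench endpoint calculation in Python; the cited TCID 50 method 39 may differ in detail (e.g., Spearman–Kärber), and the dilution scheme and CPE counts shown are hypothetical.

```python
import numpy as np

def reed_muench_tcid50(log10_dilutions, infected, total):
    """Estimate log10 TCID50 per inoculum volume via the Reed-Muench method.

    log10_dilutions: e.g. [-1, -2, ..., -5] (most concentrated first)
    infected: number of wells showing CPE at each dilution
    total: number of wells inoculated at each dilution
    Assumes a clean endpoint, i.e. some dilutions above and below 50 % CPE."""
    infected = np.asarray(infected, dtype=float)
    uninfected = np.asarray(total, dtype=float) - infected
    # Accumulate infected wells toward the concentrated end and
    # uninfected wells toward the dilute end (Reed-Muench convention).
    cum_inf = np.cumsum(infected[::-1])[::-1]
    cum_uninf = np.cumsum(uninfected)
    pct = 100.0 * cum_inf / (cum_inf + cum_uninf)
    above = np.where(pct >= 50)[0][-1]   # last dilution with >= 50 % CPE
    below = above + 1                    # first dilution with < 50 % CPE
    pd = (pct[above] - 50.0) / (pct[above] - pct[below])      # proportionate distance
    step = log10_dilutions[above] - log10_dilutions[below]    # usually +1 log10
    log10_endpoint = log10_dilutions[above] - pd * step       # dilution giving 50 % CPE
    return -log10_endpoint   # log10 TCID50 per inoculum volume

# Hypothetical CPE scores for one supernatant (8 wells per dilution)
print(reed_muench_tcid50([-1, -2, -3, -4, -5], [8, 8, 6, 2, 0], [8] * 5))  # -> 3.5
```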
RT-qPCR for viral RNA quantification. […] (Thermo Scientific). Finally, the RNA copy number (molecules/μl) was calculated as described by Faye et al. 42 .
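The conversion from Ct values to RNA copy numbers via a standard curve can be sketched as follows; the standard dilutions and Ct values below are hypothetical and only illustrate the linear-regression step, not the exact procedure of Faye et al. 42 .

```python
import numpy as np

# Standard curve: known log10 copies/ul of an RNA standard vs. measured Ct values
log10_copies_std = np.array([7, 6, 5, 4, 3, 2])           # serial dilutions of the standard
ct_std = np.array([14.1, 17.5, 20.9, 24.3, 27.8, 31.2])   # hypothetical Ct values

# Fit Ct = slope * log10(copies) + intercept
slope, intercept = np.polyfit(log10_copies_std, ct_std, 1)
efficiency = 10 ** (-1.0 / slope) - 1   # amplification efficiency sanity check (~90-110 %)

def ct_to_copies(ct):
    """Convert a sample Ct value to RNA copies/ul using the fitted standard curve."""
    return 10 ** ((ct - intercept) / slope)

print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}")
print(f"sample at Ct 22.0 -> {ct_to_copies(22.0):.2e} copies/ul")
```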
Antiviral assays. Pre- and post-treatment. Vero cells were seeded in 48-well plates one day prior to infection at a density of 2.5 × 10⁴ cells/well. The cells were incubated with the maximum non-toxic concentration of ouabain (20 nM) and its decreasing dilutions (10, 5 and 2.5 nM) before (pre-treatment) or after (post-treatment) a 1 h incubation with the ZIKV PE243 strain at a multiplicity of infection (MOI) of 0.1. Then, cells were incubated at 37 °C in 5% CO 2 for 120 h. Controls included mock- and infected non-treated cells; as positive control, the thiopurine nucleoside analogue 6-methylmercaptopurine riboside (6MMPr, 60.5 µM), a drug previously identified by our group as a potent ZIKV inhibitor, was used 21 . At 120 h post infection (hpi), the cytopathic effect (CPE) was evaluated in both assays (pre- and post-treatment) using an inverted microscope (AE2000 binocular microscope, Motic, Hong Kong) and pictures were taken using a smartphone. The cell supernatant was harvested and stored at −80 °C until analysis by TCID 50 and RT-qPCR.
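A minimal sketch of the downstream statistics is given below: log 10 titer reduction relative to the untreated control, followed by one-way ANOVA and Tukey's post-test as stated for the figures. The titer values are invented for illustration only.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical log10 TCID50/mL titers from three independent experiments per condition
titers = {
    "virus control": [6.3, 6.5, 6.4],
    "ouabain 20 nM": [4.1, 4.3, 4.2],
    "6MMPr 60.5 uM": [4.0, 4.2, 4.1],
}

# Log10 reduction relative to the untreated virus control
ctrl_mean = np.mean(titers["virus control"])
for name, vals in titers.items():
    red = ctrl_mean - np.mean(vals)
    pct = 100 * (1 - 10 ** (-red))
    print(f"{name}: {np.mean(vals):.2f} log10 TCID50/mL (reduction {red:.2f} log10, {pct:.1f} %)")

# One-way ANOVA followed by Tukey's post-test
groups = list(titers.values())
f_stat, p_val = stats.f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4g}")

values = np.concatenate(groups)
labels = np.repeat(list(titers.keys()), [len(g) for g in groups])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```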
Virus inactivation assay. To analyze the inactivating activity of ouabain, a viral suspension containing the ZIKV PE243 strain at an MOI of 0.1 was incubated with an equal volume of the different concentrations of ouabain for 1 h at 37 °C, as previously described by Moghaddam et al. 43 . Then, viral titration was performed by the TCID 50 method.
Anti-adsorption activity. Briefly, Vero cells were cultivated in 48-well plates at a density of 2.5 × 10⁴ cells/well one day prior to the assay. Cells were infected with ZIKV (MOI: 0.1) in the presence or absence of different concentrations of ouabain and incubated at 4 °C (permitting virus binding but not entry) for 1 h for virus adsorption, as previously described 43 .

Theoretical methods. The relative binding affinity between ouabain and the non-structural proteins (NSP) of ZIKV was predicted using molecular docking and molecular dynamics simulations in three main steps: (i) the X-ray structures of three molecular targets of ZIKV that were co-crystallized with small molecules were downloaded from the Protein Data Bank (PDB); (ii) the center of mass of the small molecules in each binding site of the receptors was used to determine the location of the binding pockets in each target, and then the binding pose of ouabain was predicted using the genetic algorithm (GA) of the GOLD software 44 ; and finally (iii) a post-docking analysis was carried out by means of molecular dynamics in order to refine the binding poses, as well as to re-score the relative binding affinity.
Preparation of the receptors. According to the work of Nandi et al. 5 , the following NSPs of ZIKV are considered the main targets for drug development: (i) NS3 Helicase, (ii) NS5 Methyltransferase (Mtase) and (iii) NS5 RNA-dependent RNA polymerase (RdRp). There are many X-ray structures of these receptors in the PDB; however, for the aim of this study we downloaded only geometries that were co-crystallized with small molecules: NS3-Helicase (5RHX, 5K8T, 5RHV, 5RHP, 5RHM, 5RHR, 5RHN, 5RHG, 5RHK, 5RHQ, 5RHI, 5RHJ, 5RHL, 5RHU, 5RHW, 5RHO, 5RHS, 5RHT) 45 ; NS5-Mtase (5WZ2 46 ) and NS5-RdRp (5WZ3 46 ). Using the PyMOL software 47 , the structures corresponding to each target were aligned considering the backbone atoms, and the centers of mass (COM) of the co-crystallized ligands were computed. Also, for the NS3-Helicase structures, we calculated the RMSD between the backbones of the structures and verified that these structures present a low rate of conformational variation (RMSD < 1). In this case, we used the resolution of the structure as the criterion of choice. In summary, the receptors used for the molecular docking were: 5RHQ (NS3-Helicase), 5WZ2 (NS5-Mtase), and 5WZ3 (NS5-RdRp). Then, the selected structure for each molecular target was saved to an individual PDB file and inspected for gaps in its backbone, whereas ligands, waters and ions were removed. The MODELLER software was used to insert missing residues. Further, the remaining structure was submitted locally to the PDB2PQR software 48 in order to predict the residue protonation states at pH 7.4, and hydrogens were properly added.
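A rough sketch of the ligand center-of-mass step is shown below using Biopython; the file name and the handling of hetero residues are assumptions, since the text only states that PyMOL was used for the alignment and COM computation.

```python
import numpy as np
from Bio.PDB import PDBParser

def ligand_centers_of_mass(pdb_file, skip=("HOH",)):
    """Return {ligand residue name: center of mass} for hetero residues in a PDB file.

    The resulting centers can later be used as docking-site centers, as described
    in the 'Preparation of the receptors' step."""
    structure = PDBParser(QUIET=True).get_structure("receptor", pdb_file)
    centers = {}
    for residue in structure.get_residues():
        hetflag = residue.id[0]                       # e.g. "H_LIG" for hetero residues
        if hetflag.startswith("H_") and residue.get_resname() not in skip:
            coords = np.array([atom.coord for atom in residue])
            masses = np.array([atom.mass for atom in residue])
            centers[residue.get_resname()] = (coords * masses[:, None]).sum(0) / masses.sum()
    return centers

# Hypothetical usage on the NS3-helicase structure chosen in this work
print(ligand_centers_of_mass("5rhq.pdb"))
```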
Preparation of the ligand. The 3D structure of ouabain was extracted from PDB 3A3Y 49 to a separate file in PDB format. Then, using the OpenBabel package, the structure was converted to an individual MOL2 file and the partial charges of the atoms were determined by Gasteiger's scheme.
Molecular docking. The genetic algorithm of the GOLD software 44 (version 5.8.1) was used in standard mode to predict the ouabain binding pose in the binding pockets of the selected structures. The location of these pockets was determined considering the Euclidean distance between the centers of mass obtained in the "Preparation of the receptors" step. In general, Euclidean distances ≃ 0 correspond to small molecules that were co-crystallized in the same binding pocket. In this case, an average of the COM coordinates was considered instead of the individual coordinates. These parameters were passed to the "Point" function of the GOLD configuration file, considering as active residues only those within a cutoff radius of 12 Å.
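The grouping of near-coincident centers of mass into a single docking-site center can be sketched as follows; the 2 Å tolerance is an assumption, since the text only notes that distances ≃ 0 indicate the same pocket.

```python
import numpy as np

def merge_pocket_centers(coms, tol=2.0):
    """Group ligand centers of mass lying within `tol` angstroms of each other and
    return one averaged center per group (one docking site per pocket).
    `tol` is an assumed cutoff; greedy single-pass clustering for illustration."""
    coms = np.asarray(coms, dtype=float)
    groups = []
    for com in coms:
        for group in groups:
            if np.linalg.norm(com - np.mean(group, axis=0)) < tol:
                group.append(com)
                break
        else:
            groups.append([com])
    return np.array([np.mean(g, axis=0) for g in groups])

# Hypothetical centers of mass (angstroms) from aligned NS3-helicase structures
coms = [[10.1, 5.2, 3.3], [10.4, 5.0, 3.1], [25.7, 8.9, 12.0]]
print(merge_pocket_centers(coms))   # -> two docking-site centers
```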
Molecular dynamics simulations. The NSP-ouabain complexes obtained in the molecular docking step were subjected to molecular dynamics (MD) simulations. For this, the geometries of the NS3-Helicase, NS5-Mtase and NS5-RdRp were parameterized according to the FF14SB force field 50 by using the tLeap module available in the AMBERTOOLS package. The ouabain structure, in turn, was parameterized according to the generalized AMBER force field (GAFF) 51 ; the partial charges of the atoms were calculated by means of the ANTECHAMBER module (also included in the AMBERTOOLS package) using the AM1-BCC method. Further, the NSP-ouabain complexes were inserted in a cubic water box of 20 Å containing TIP3P waters and ions (0.15 M NaCl). MD simulation was performed by using the NAMD program 52 (version 2.13) with the following configuration parameters: (i) periodic boundary conditions; (ii) restriction of vibration for covalent bonds involving hydrogen atoms, HOH angles and the OH bond distance of TIP3P water molecules (SHAKE algorithm); (iii) time step equal to 2 fs; (iv) electrostatic interaction cutoff of 12 Å for all steps of the simulations; and (v) the Particle Mesh Ewald (PME) method for long-range electrostatic interactions. The starting geometry was submitted to minimization, heating (from 0 to 310 K) and pressurization steps. Then, the resulting geometry was submitted to the equilibration step (NPT ensemble) for 10 ns; temperature and pressure were kept constant along the simulation using the Langevin thermostat and barostat at 310 K and 1 atm, respectively; frames were captured every 5 ps. The MD trajectory analyses were carried out using the CPPTRAJ 53 software.
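The trajectory analyses were performed with CPPTRAJ; as an equivalent post-processing sketch, the Python/MDAnalysis snippet below computes a backbone RMSD and a ligand–N417 distance over an equilibration run. The file names, the ouabain residue name (OBN) and the residue numbering are assumptions, and a recent MDAnalysis version is assumed for the results API.

```python
import numpy as np
import MDAnalysis as mda
from MDAnalysis.analysis import rms

# Placeholder topology/trajectory names for the NS3-helicase/ouabain run
u = mda.Universe("ns3_ouabain.prmtop", "ns3_ouabain_equil.dcd")

# Backbone RMSD of the protein along the 10 ns equilibration run
rmsd = rms.RMSD(u, select="protein and backbone").run()
print("mean backbone RMSD (A):", rmsd.results.rmsd[:, 2].mean())

# Minimum heavy-atom distance between ouabain (resname OBN assumed) and residue 417
lig = u.select_atoms("resname OBN and not name H*")
n417 = u.select_atoms("protein and resid 417 and not name H*")
dmin = []
for ts in u.trajectory:
    diff = lig.positions[:, None, :] - n417.positions[None, :, :]
    dmin.append(np.sqrt((diff ** 2).sum(-1)).min())
print("mean minimum distance to residue 417 (A):", np.mean(dmin))
```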
|
2022-07-25T06:16:07.361Z
|
2022-07-23T00:00:00.000
|
{
"year": 2022,
"sha1": "913f001d63e74b8e6bdebc7a49868e45190645ee",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "b8e8331bcf483a57d5a4f87f329c57d1ec46ec07",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
235353022
|
pes2o/s2orc
|
v3-fos-license
|
The Signed Cumulative Distribution Transform for 1-D Signal Analysis and Classification
This paper presents a new mathematical signal transform that is especially suitable for decoding information related to non-rigid signal displacements. We provide a measure theoretic framework to extend the existing Cumulative Distribution Transform [ACHA 45 (2018), no. 3, 616-641] to arbitrary (signed) signals on $\overline{\mathbb{R}}$. We present both forward (analysis) and inverse (synthesis) formulas for the transform, and describe several of its properties including translation, scaling, convexity, linear separability and others. Finally, we describe a metric in transform space, and demonstrate the application of the transform in classifying (detecting) signals under random displacements.
Introduction
Mathematical transforms for representing signals and images are useful tools in data science, engineering, physics, and mathematics. They often render certain problems easier to solve in transform space. Fourier transforms [6], for example, render convolution operations into multiplications in Fourier transform space, thereby simplifying the solution of linear shift-invariant systems. They are also well-suited for the detection and analysis of signals that are linear combinations of pure frequencies. Wavelet transforms, on the other hand, are well-suited for detecting and analyzing signal transients at different resolutions. In the wavelet domain, they provide sparse representations of signals and images for compression and communication [2]. Though useful in many areas of mathematics, physics and engineering, most mathematical transformation methods (e.g. Fourier, Wavelet) are linear, and thus often fail to deal with the non-linearities present in modern data science applications related to signal parameter estimation and learning-based data classification. There are several exceptions to this shortcoming. For example, the scattering transform is non-linear and has been successfully applied to machine learning applications [10].
Inspired by earlier work on transport metrics [13], the Cumulative Distribution Transform (CDT) was introduced for a class of positive, piece-wise continuous, normalized functions [30]. The CDT can be described as follows: let $s_0$ and $s$ denote two 1-dimensional piece-wise continuous functions with domains $\Omega_0$ and $\Omega$, respectively, such that $\int_{\Omega_0} s_0(t)\,dt = \int_{\Omega} s(t)\,dt = 1$, and such that $s_0, s > 0$ in their respective domains. We can then relate $s$ and $s_0$ by computing a function $\widehat{s} : \Omega_0 \to \Omega$ that matches their cumulative integrals:
$$\int_{\inf \Omega}^{\widehat{s}(t)} s(u)\,du = \int_{\inf \Omega_0}^{t} s_0(u)\,du, \qquad t \in \Omega_0. \tag{1}$$
The continuous function $\widehat{s}(t)$ is called the Cumulative Distribution Transform (CDT) of $s$ with respect to (some fixed) reference function $s_0$. Equation (1) defines a mapping between positive, piece-wise continuous, normalized functions and the set of nondecreasing, one-to-one functions from $\Omega_0$ to $\Omega$. The mapping is invertible on its range, and the inverse can be written in differential form as $s(y) = s_0\!\left(\widehat{s}^{-1}(y)\right)\,\frac{d}{dy}\widehat{s}^{-1}(y)$, for a.e. $y$ in $\Omega$.
Like other transforms, it is the properties of the CDT that make it useful for signal and image data analysis. For example, for the CDT it can be shown that translation acts as $s(t - \tau) \to \widehat{s}(y) + \tau$, and scaling as $a\,s(at) \to \widehat{s}(y)/a$. In fact, an important and unique property of the CDT is that it can represent rigid and non-rigid displacements of the independent variable (in this case $t$) as modifications of the dependent variable in transform space.
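To make the definition concrete, the following Python sketch computes the CDT numerically by matching cumulative integrals on a uniform grid and checks the translation property on synthetic Gaussian signals; the grids, signals and the simple cumulative-sum discretization are illustrative choices, not the PyTransKit implementation.

```python
import numpy as np

def cdt(x, s, x0, s0):
    """CDT of a positive, normalized signal s (sampled on the uniform grid x)
    with respect to the reference s0 (on x0): returns shat with F_s(shat(t)) = F_s0(t)."""
    F = np.cumsum(s); F = F / F[-1]        # empirical CDF of s (uniform grid assumed)
    F0 = np.cumsum(s0); F0 = F0 / F0[-1]   # empirical CDF of the reference
    return np.interp(F0, F, x)             # shat sampled on the reference grid x0

# Check the translation property: s(t - tau) -> shat(y) + tau
x = np.linspace(-10, 10, 4001)
s0 = np.exp(-x**2 / 2); s0 /= np.trapz(s0, x)                 # Gaussian reference
s = np.exp(-(x - 1)**2 / 0.5); s /= np.trapz(s, x)            # template signal
tau = 2.0
s_shift = np.exp(-(x - tau - 1)**2 / 0.5); s_shift /= np.trapz(s_shift, x)

shat = cdt(x, s, x, s0)
shat_shift = cdt(x, s_shift, x, s0)
err = np.abs(shat_shift - (shat + tau))
print(err[1000:3001].max())   # small (discretization error) away from the tails
```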
In addition, if we consider a convex set $H$ of invertible mapping functions, then the set of signals $\{f'(t)\, s(f(t)) : f^{-1} \in H\}$ forms a convex set in CDT space [32]. This property allows one to solve nonlinear, non-convex signal estimation problems, in a straightforward way, using linear least squares regression [36]. This property also allows one to solve nonlinear classification problems using linear classifiers in signal transform space [37].
The CDT can be related to the optimal transport theory of Monge [30] and has been applied in data analysis, processing and classification problems. Its use is particularly well-suited to mine information present in signals or images when these are produced by physical or biological phenomena related to mass transport. For these cases, popular machine learning methods can be successfully used in transform space for modeling transport modes of variations in signals and images. The CDT and other similar transforms are collectively called transport transforms because of their connections to Wasserstein distances and optimal transport theory (see below) [13,30,22,24,23]. They have been used in numerous data science applications, ranging from classification of accelerometer recordings [30], cancer detection [15,19], drug discovery [14], knee osteoarthritis prognosis from MRIs [35], Brain image analysis [28], inverse problems [17,34], optical communications [29], particle physics [33], parametric signal estimation [36], and numerous other applications [26].
To illustrate some of the properties of the CDT, consider the task of building a hand signal interpretation system from image data. Sample hand signal images are shown in Figure 1, adapted from [30]. The goal is to build a data classification method that can automatically and accurately assign a label (sign) to a given image. Images are preprocessed so as to extract an edge map, and the X and Y projections of the edge maps are computed (middle of first row in Figure 1). The CDT (modified by subtracting the identity function) of the X, Y projections are computed and shown in the right panel of the figure. The test data (both in signal and CDT domain) can be projected onto the most discriminant 2D subspace, computed with the P-LDA technique [9] on training data (middle row). From this it can be seen that the test data become more clearly linearly separable when represented in CDT domain. This visual impression is confirmed by classification results, shown in the bottom table, demonstrating the test data performance using three different linear classification methods. 1 1.1. Related work. The signal transformation described in this paper is related to the optimal transport metric (Wasserstein distance), as will be detailed below in section 3.2. Data analysis techniques that are based on optimal transport have gained popularity in the data science community. Machine learning methods based on optimal transport optimization have been developed in [25,31]. Optimal transport methods have also been used in image processing for image alignment [5], image simulation [7], and domain adaptation [20]. Extensions of the optimal transport (Wasserstein) metric to unbalanced distributions have been proposed in [27,16]. Inspired by [13] authors in [38] proposed a linear optimal transport metric for arbitrary (signed, unbalanced) signals. We also note a generalization of the Wasserstein distance to signed signals, similar to the one described in section 3, has been previously described in [21,12].
1.2. Contributions. Although useful in many settings, the CDT framework described above has the limitation that the signals themselves must be positive for the entirety of their domain. While not prohibitive in certain settings [36], this limitation can hinder the application of the transform to signed functions that can be used to model more general signals and data.
Here we extend the CDT, originally designed for positive probability density functions, to general finite signed measures with no requirements on their total mass. For this reason, we name the new 1D signal transform introduced here as the signed cumulative distribution transform (SCDT). We define mathematical formulas for both the forward (analysis) and the inverse (synthesis) transformations. We describe some of the properties of the SCDT, including translation, scaling and composition. Through the use of an extended generative model, we also describe necessary and sufficient conditions whereby signal classes will be convex (and thus separable by a linear classifier). Finally, we define a distance based on this transform (a version of the distance appears earlier in [21,12]), and demonstrate its application in signal data analysis and classification tasks. Python source code implementing the new transform is available through the PyTransKit package [40].
Cumulative Distribution Transforms for Measures
In this section we extend the Cumulative Distribution Transform (CDT) to Radon measures on the extended real line R. Because we use concepts related to the theory of transport, we start with an extension of the CDT to probability measures. We then extend the CDT to include non-negative finite measures, and finally signed measures.

(Figure 1 caption: the CDT enables data representation that facilitates learning. Hand signal images are preprocessed for edge map extraction and their respective X and Y projections are computed. The X, Y projections are then transformed using the CDT. Linear classification methods are then applied to the data in CDT space, as well as in original (projection) space for comparison. The middle row displays the 2D linear discriminant embedding [9] of test data in original signal space and transform space. Test (held out from training) data in transform space is clearly more convex and linearly separable than data in original signal space. This is confirmed by test accuracy results of 3 different linear classifiers (bottom row).)
In the cumulative distribution transform for measures, a reference measure µ 0 is fixed, and the transform of a measure ν relative to this fixed reference measure µ 0 will be a non-decreasing function on R denoted by ν.
2.1. The cumulative distribution transform on probability measures. Given any probability measure η on R, its Cumulative Distribution Function (CDF) is the function F η : R → R given by F η (x) = η([−∞, x]). The function F η is non-decreasing and right continuous on R, and may not be invertible. However, a generalized inverse for such a function (in fact for any function) can be defined (see e.g., [11,41]). The monotone generalized inverse F † has some remarkable properties. In particular, for any function F , F † is a non-decreasing function on R, and if F is continuous and strictly increasing, then F † and its standard inverse coincide (see Appendix 6). Since the monotone generalized inverses are functions on R, our measures will be defined on R instead of R. Those measures are characterized by their CDFs (which are functions on R) according to Proposition 6.1.
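A numerical version of the monotone generalized inverse on a grid can be sketched as follows; the grid-based approximation of the infimum and the step-CDF example are illustrative assumptions.

```python
import numpy as np

def generalized_inverse(x, F, y):
    """Monotone generalized inverse F_dagger(y) = inf{x : F(x) > y} on a grid.

    x : increasing grid, F : values of a non-decreasing right-continuous F on x,
    y : query points. Queries with no x satisfying F(x) > y map to +inf."""
    x = np.asarray(x, float); F = np.asarray(F, float); y = np.atleast_1d(y).astype(float)
    idx = np.searchsorted(F, y, side="right")   # first grid index with F[idx] > y
    out = np.full(y.shape, np.inf)
    inside = idx < len(x)
    out[inside] = x[idx[inside]]
    return out

# A step CDF (mass 1/2 at 0 and 1/2 at 2): F_dagger behaves like a quantile function
x = np.linspace(-1, 3, 9)
F = np.where(x < 0, 0.0, np.where(x < 2, 0.5, 1.0))
print(generalized_inverse(x, F, [0.25, 0.5, 0.75, 1.0]))   # -> [0., 2., 2., inf]
```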
Given a reference measure µ 0 , the cumulative distribution transform ν of a measure ν with respect to µ 0 is defined as ν := F † ν • F µ 0 (4), where F µ 0 and F ν are the CDFs of the respective measures. Under the assumption that µ 0 does not give mass to atoms, the function (4) is the solution of the 1D optimal transport problem of Monge (see for example [41,4]). In fact, the measure ν can be recovered by the push-forward ν # µ 0 of the measure µ 0 by the function ν in (4) (see Theorem 2.2), i.e., for any Borel measurable set E ⊂ R, ν(E) = ν # µ 0 (E) = µ 0 ( ν −1 (E)). If the reference measure µ 0 and the target measure ν are continuous with respect to the Lebesgue measure with densities s 0 and s, respectively, then equation (1) can be rewritten as F µ 0 = F ν ( ν). Therefore, equation (4) extends the definition of the CDT for functions as described by (1) to the case of probability measures on R.
Writing (4) in operator notation as ν = T µ 0 (ν), we define the CDT operator by T µ 0 : P(R) → N µ 0 , where P(R) is the set of probability measures on R and N µ 0 is the set of non-decreasing functions a.e. with respect to µ 0 . The next theorem states that the operator T µ 0 above is a bijection and hence can be viewed as a transform operator.
Theorem 2.2. If µ 0 is a probability measure on R that does not give mass to atoms, then the operator T µ 0 : P(R) → N µ 0 is a bijection.

A measure η defined on a Borel measurable subset Ω ⊆ R can be considered as a measure on R by extending it by zero. Equivalently, this extension can be written as the push-forward ι # η of η by the inclusion map ι : Ω → R (ι # η ∈ P(R)). Thus, using this extension, we consider the set of probability measures P(Ω) defined on Ω as a subset of P(R), i.e., P(Ω) ⊆ P(R). Analogously, we will say that any measure η ∈ P(R) satisfying η(Ω) = 1 and η(Ω c ) = 0 belongs to P(Ω) by considering its restriction. From these considerations, we obtain the following corollary of Theorem 2.2.
2.2. Transform for positive finite measures with arbitrary total mass. The CDT for probability measures can be extended naturally to the case where the reference and the target are finite and positive Borel measures M(R). Specifically, let µ 0 , ν be two finite positive measures such that µ 0 is non-trivial, that is, µ 0 (R) > 0. If M α denotes the scaling function by the factor α (i.e., M α (x) = αx), then the transform of ν with respect to the reference µ 0 is defined to be T µ 0 (ν) = ν := (ν * , ‖ν‖) (9), where ‖ν‖ := ν(R). When ν ≠ 0, the non-decreasing function ν * is simply the CDT of the probability measure (1/‖ν‖) ν with respect to the reference (1/‖µ 0 ‖) µ 0 (see (4)), while the number ‖ν‖ keeps track of the total mass of ν. The next theorem shows that (9) gives rise to a bijection and hence can be thought of as a transform. We abuse language and still call this transform the CDT since it will be clear from the context which transform is being used. Abusing notation by using T µ 0 again to denote the operator defined by T µ 0 (ν) = ν, we get the following bijection between the space of finite Borel measures M(R) and the set T µ 0 := (N µ 0 × R + ) ∪ {(0, 0)}, where N µ 0 is as in (7), and R + = (0, ∞).
Theorem 2.5. If µ 0 is a non-trivial finite positive measure on R that does not give mass to atoms, then the operator T µ 0 is a bijection from M(R) to T µ 0 , whose inverse is given by T µ 0 −1 (f, r) = r f # ((1/‖µ 0 ‖) µ 0 ), and the inverse of (0, 0) is the zero measure.
Corollary 2.6. Let Ω 0 , Ω be two Borel sets in R and µ 0 be a non-trivial finite positive Borel measure on Ω 0 that does not give mass to atoms. Then the CDT with respect to µ 0 is a bijection from M(Ω) (the set of positive finite measures on Ω) to T µ 0 (Ω).
As in Remark 2.4, instead of N µ 0 (Ω) in the first component of the image of the transform, we could consider directly the set N µ 0 (Ω 0 , Ω) by considering restrictions: (ν ) | Ω 0 of ν in (9).
2.3. Transform for signed measures. To define the transform on signed measures, we use the Jordan decomposition of a signed measure [8]. Specifically, let ν be a signed, finite Borel measure on R, with Jordan decomposition ν = ν + − ν − ; then the image T µ 0 (ν) of ν with respect to a non-trivial positive measure µ 0 is defined as T µ 0 (ν) := (T µ 0 (ν + ), T µ 0 (ν − )) = ((ν + ) * , ‖ν + ‖, (ν − ) * , ‖ν − ‖) (12), where T µ 0 (ν ± ) are the CDTs of positive finite measures defined in (9) (in particular, (10)). Denoting the set of finite signed measures on R by SM(R), and defining the image set I µ 0 as the set of quadruples (f, r, g, s) of such pairs of transforms with f # µ 0 ⊥ g # µ 0 , where f # µ 0 ⊥ g # µ 0 denotes that f # µ 0 and g # µ 0 are mutually singular, we have the following theorem.
Theorem 2.7. If µ 0 is a non-trivial finite positive measure on R that does not give mass to atoms, then the operator T µ 0 : SM(R) → I µ 0 given in (12) is a bijection. Hence, T µ 0 is a transform. Moreover, the inverse transform is given by T µ 0 −1 (f, r, g, s) = r f # ((1/‖µ 0 ‖) µ 0 ) − s g # ((1/‖µ 0 ‖) µ 0 ), and the inverses of (f, r, 0, 0), (0, 0, g, s), and (0, 0, 0, 0) are the measures r f # ((1/‖µ 0 ‖) µ 0 ), −s g # ((1/‖µ 0 ‖) µ 0 ), and the zero measure, respectively.
We call the operator T µ 0 given in (12) the Signed Cumulative Distribution Transform (SCDT).
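A minimal numerical sketch of the SCDT of a sampled signed signal, using the Jordan decomposition into positive and negative parts, is given below; the uniform reference, the tie-breaking trick and the cumulative-sum discretization are illustrative choices rather than the definitions used in PyTransKit.

```python
import numpy as np

def _transform(x, s, x0, s0):
    """CDT of a nonnegative, nontrivial sampled signal s w.r.t. reference density s0."""
    F = np.cumsum(s)
    F = F + np.linspace(0, 1e-9, len(F))   # break ties so np.interp sees an increasing array
    F /= F[-1]
    F0 = np.cumsum(s0); F0 = F0 / F0[-1]
    return np.interp(F0, F, x)

def scdt(x, s, x0, s0):
    """SCDT sketch of a sampled signed signal s: the 4-tuple
    ((s+)^*, ||s+||, (s-)^*, ||s-||) from the Jordan decomposition s = s+ - s-."""
    dx = x[1] - x[0]
    s_pos, s_neg = np.maximum(s, 0), np.maximum(-s, 0)
    norm_pos, norm_neg = s_pos.sum() * dx, s_neg.sum() * dx
    t_pos = _transform(x, s_pos, x0, s0) if norm_pos > 0 else None
    t_neg = _transform(x, s_neg, x0, s0) if norm_neg > 0 else None
    return t_pos, norm_pos, t_neg, norm_neg

# Example: a signed, zero-mean wavelet-like signal against a uniform reference
x = np.linspace(-5, 5, 2001)
s = np.exp(-x**2) * np.sin(3 * x)
s0 = np.ones_like(x) / 10.0
t_pos, m_pos, t_neg, m_neg = scdt(x, s, x, s0)
print(m_pos, m_neg, t_pos[:3])
```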
As in previous subsections, restricting the measures to Borel sets Ω 0 and Ω for µ 0 and ν respectively, and defining we obtain the following bijection result.
Corollary 2.8. Let Ω 0 , Ω be two Borel sets in R and µ 0 be a non-trivial finite positive Borel measure on Ω 0 that does not give mass to atoms. Then the restriction of the transform to SM(Ω) (the set of signed finite measures on Ω) is a bijection onto I µ 0 (Ω).
Properties and Applications
There are several properties of the CDT for positive PDFs that make it a useful tool. The new transforms derived in this paper retain some of these useful properties. In particular, the computational example below shows how these transforms can be useful in data classification.
3.1. Properties of the SCDT. In this section we list two of these properties: 1) the composition property and 2) the convexification property.
The composition property relates the transform of a measure η to that of a measure ν when their cumulations are related by a composition of functions. This property is useful for applications in which a set of signals is generated by a signal template that is modified by a transport-like phenomenon. Translations and scalings are examples of such classes [30] (see Figures 2 and 3).
Proposition 3.1. (Composition property) Consider a finite positive reference measure µ 0 which does not give mass to atoms. Let ν be a signed measure. Assume that g : R → R is a strictly increasing surjection, and η a signed measure such that F η (x) = F ν (g(x)). Then ‖η ± ‖ = ‖ν ± ‖ and the SCDT of η with respect to µ 0 is given by T µ 0 (η) = (g −1 • (ν + ) * , ‖ν + ‖, g −1 • (ν − ) * , ‖ν − ‖).

Corollary 3.2. (Translation) If µ 0 and ν are as above and g is a translation by τ ∈ R, i.e. g(x) = x − τ , then for η ∈ SM(R) such that F η (x) = F ν (x − τ ), the SCDT of η with respect to µ 0 is given by T µ 0 (η) = ((ν + ) * + τ , ‖ν + ‖, (ν − ) * + τ , ‖ν − ‖).

Corollary 3.3. (Dilation) If µ 0 and ν are as above, g : R → R is a dilation by a ∈ (0, ∞), i.e. g(x) = x/a, then for η ∈ SM(R) such that F η (x) = F ν (x/a), the SCDT of η with respect to µ 0 is given by T µ 0 (η) = (a (ν + ) * , ‖ν + ‖, a (ν − ) * , ‖ν − ‖).

It is also worth noting that in certain applications data sets can be rendered convex in the transform domain. For example, sets generated by translations of a template signal can have a complex geometry in the signal domain; however, they have a very simple convex structure in the transform domain, as depicted in Figure 4. This convexification property is useful in classification problems since two disjoint convex data sets can be separated by a linear classifier. The following convexity property is a generalization of the convexity property that was proved in [32] for signals that consist of normalized, compactly supported, non-negative Lebesgue measurable functions.

Proposition 3.4. (Convexity property) Let µ 0 be as above, ν ∈ SM(R), and let H be a set of strictly increasing bijections h : R → R. Define S ν,H := {η ∈ SM(R) : F η = F ν • h for some h ∈ H} (15). If the set H −1 := {h −1 : h ∈ H} is convex, then S ν,H is convex in SCDT space; conversely, if S ν,H is convex in SCDT space for every choice of ν, then H −1 is convex.
We remark that the set S ν,H can be interpreted as an algebraic generative model for signal data. Here ν specifies a measure (signal) that can be considered as a template for its class, which is denoted by S ν,H . The elements of S ν,H are formed by the action of functions in H, as in Proposition 3.4. For example, H can be the set of all possible translations, or positive scalings. If H is such that H −1 is convex, then the set S ν,H is also convex. As explained in [32], H −1 is convex if H is a convex group (numerous examples are described in [32]).

3.2. Metric. The Wasserstein distance in the space P(Ω) is intimately related to the L 2 distance in the transport transform domain [30,41,4]. This relation is useful in some applications since it renders certain optimization problems involving the Wasserstein distance into standard least squares optimization problems. In this section, we will recall the definition of the Wasserstein distance for probability measures and its relation to the L 2 distance in the transform domain. We will extend this result using the generalized transport transforms of the previous section.
Definition 3.5 (Wasserstein distance).
Let Ω ⊆ R be a Borel set endowed with the Euclidean distance. Let P 2 (Ω) denote the subset of all probability measures on Ω with finite second moment. For ν, η ∈ P 2 (Ω), the Wasserstein distance is defined by
$$d_{W_2}(\nu, \eta) := \left(\inf_{\pi \in \Pi(\nu, \eta)} \int_{\Omega \times \Omega} |x - y|^2 \, d\pi(x, y)\right)^{1/2}, \tag{16}$$
where Π(ν, η) denotes the collection of all measures on Ω × Ω with marginals ν and η on the first and second factors respectively.
Proposition 3.6. Let Ω ⊆ R be a Borel set, and µ 0 ∈ P 2 (Ω) a reference measure that does not give mass to atoms. If ν, η ∈ P 2 (Ω), then
$$d_{W_2}(\nu, \eta) = \left\| T_{\mu_0}(\nu) - T_{\mu_0}(\eta) \right\|_{L^2(\mu_0)}.$$
In particular, the transport transform T µ 0 is an isometry between P 2 (Ω) with the Wasserstein metric and the image of the transport transform N µ 0 (Ω) endowed with the L 2 (µ 0 ) metric.
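Proposition 3.6 can be checked numerically: the sketch below computes the 1-D Wasserstein-2 distance between two densities via their quantile functions and compares it with the L 2 (µ 0 ) distance of their transforms for a uniform reference on [0, 1]; the grids and densities are illustrative choices.

```python
import numpy as np

def w2_1d(x, p, q, n=4000):
    """W2 distance between densities p, q on grid x, via quantile functions:
    W2^2 = int_0^1 |F_p^{-1}(u) - F_q^{-1}(u)|^2 du (midpoint rule)."""
    u = (np.arange(n) + 0.5) / n
    Fp = np.cumsum(p); Fp /= Fp[-1]
    Fq = np.cumsum(q); Fq /= Fq[-1]
    return np.sqrt(np.mean((np.interp(u, Fp, x) - np.interp(u, Fq, x)) ** 2))

x = np.linspace(-10, 10, 4001)
p = np.exp(-(x - 1) ** 2); p /= np.trapz(p, x)
q = np.exp(-(x + 2) ** 2 / 2); q /= np.trapz(q, x)
print(w2_1d(x, p, q))   # direct W2

# Same quantity via the transforms: for a uniform mu0 on [0, 1], F_mu0(t) = t, so the
# CDT of a density is its quantile function and the L2(mu0) distance equals W2.
u = np.linspace(1e-4, 1 - 1e-4, 4001)   # avoid the exact endpoints for stability
Fp = np.cumsum(p); Fp /= Fp[-1]
Fq = np.cumsum(q); Fq /= Fq[-1]
print(np.sqrt(np.trapz((np.interp(u, Fp, x) - np.interp(u, Fq, x)) ** 2, u)))  # ~ same value
```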
3.3. Metric for M 2 (Ω) and SM 2 (Ω). Let M 2 (Ω) and SM 2 (Ω) be the subsets of M(Ω) and SM(Ω), respectively, of measures having finite second moment. For µ 0 ∈ P 2 (Ω) that does not give mass to atoms, consider the Cartesian product L 2 (µ 0 ) × R endowed with the norm $\|(f, r)\| := \left(\|f\|^2_{L^2(\mu_0)} + r^2\right)^{1/2}$. For ν, η ∈ M 2 (Ω) we define the distance function $D_{W_2}(\nu, \eta) := \|T_{\mu_0}(\nu) - T_{\mu_0}(\eta)\|_{L^2(\mu_0)\times\mathbb{R}}$ (18). In particular, from Proposition 3.6 it follows that for non-trivial measures ν, η ∈ M 2 (Ω), (18) can be expressed in terms of d W 2 , the Wasserstein distance defined in Equation (16). This identity and the definition of the transform when ν or η are the zero measures imply that D W 2 does not depend on the choice of the reference µ 0 . In particular, the same holds if ν is non-trivial and η is the zero measure, by applying Lemma 5.1 in the same way as in the proof of Proposition 3.6. Using Definition (18), T µ 0 becomes an isometry from (M 2 (Ω), D W 2 ) to (T µ 0 (Ω), ‖·‖ L 2 (µ 0 )×R ).
Analogously, by considering the space (L 2 (µ 0 ) × R) 2 endowed with the corresponding norm, we define a distance (20) which endows SM 2 (Ω) with a metric and which is independent of the choice of the reference µ 0 ; indeed, this follows by using the Hahn-Jordan decomposition for ν, η ∈ SM 2 (Ω).

3.4. Applications. The CDT and SCDT have many applications in signal and image analysis, which can be broadly categorized into signal estimation and detection (classification) problems. With regards to signal estimation, in [36] the authors applied the CDT to estimating parameters (time delay, frequency, chirp) pertaining to a measured signal. In that work, the CDT was used to 'linearize' the problem so that a globally optimal estimate for the signal parameters can be obtained in CDT space using a simple linear least squares technique. Here we show how the SCDT can be utilized to similarly facilitate the machine learning of classifiers by 'linearizing' the problem in transform space, as illustrated in Figure 4. To that end, we utilize the following property of the SCDT.
Let H, S ν,H and S η,H be as defined in Proposition 3.4. The sets S ν,H and S η,H can be interpreted as algebraic generative models for signal data. The theorem stated below becomes an easy consequence of Proposition 3.4 and the Hahn-Banach separation theorem.
Theorem 3.7. Let µ 0 ∈ P 2 (R) be a reference measure that does not give mass to atoms, ν, η ∈ SM 2 (Ω) be two signed measures, and H −1 be a convex set of strictly increasing bijections on R. If D ν , D η are two non-empty, finite sets drawn from two disjoint generative models S ν,H and S η,H , respectively, then the corresponding sets D ν , D η in SCDT space are linearly separable.
The theorem above states that so long as data is generated according to the algebraic generative model defined in Proposition 3.4, a training procedure using a linear classifier, using data in SCDT space, is a well-posed problem in the sense that there is a guarantee that a solution exists, although it may not be unique. The theorem does not state how to compute such a linear function; rather, it states that there will exist a linear classifier that will separate D ν , D η so long as S ν,H and S η,H are disjoint.
To demonstrate the ability of the SCDT to render signal classes linearly separable we consider the problem of distinguishing signals of the kind demonstrated in Figure 5. Let signals σ 1 , σ 2 , σ 3 be associated with the measures dν 1 = σ 1 (t)dt, dν 2 = σ 2 (t)dt, dν 3 = σ 3 (t)dt, and let S ν 1 ,H , S ν 2 ,H , and S ν 3 ,H represent corresponding signal classes generated using the set of diffeomorphisms H = {h(t) = at + b : a, b ∈ R, a > 0}. In short, three prototype signals are defined as a Gabor wave, a Sawtooth wave, and a Square wave, all multiplied by a Gaussian window function, respectively. These prototypes are randomly translated and scaled. For the computer simulations shown below, t ∈ [−0.5, 5], and a and b are uniformly distributed in [0.75, 2] and [−0.25, 0.25] respectively. A total of N = 500 sample signals are generated with 250 used for training and 250 for testing. Randomly distributed Gaussian noise, with standard deviation of 0.02 was added to each signal. The Fisher Linear Discriminant Analysis [1] computed using the sklearn python package [39], shows that classification accuracy on the test set using the data in original signal space is 32%, while the test set accuracy of the same classification algorithm applied to signals in SCDT space is 99%. The projections of the test set for both native signal space and SCDT space are shown in Figure 6 below. From these figures, and from the test set classification accuracy, we can see the SCDT significantly enhances the ability of a linear classifier to operate correctly.
Summary and Conclusions
This paper extends the Cumulative Distribution Transform [30] to signed measures of arbitrary mass, permitting the application of the technique to arbitrary one-dimensional signals. This extension significantly broadens the number of potential applications of the transform. The idea is based on viewing 1-D signals (measured data) as measures, and matching the measure corresponding to the signal to be transformed to a chosen reference measure. This matching is obtained using a push-forward of the reference measure by a function derived from the cumulations of the reference and signal measures. The operator that produces the push-forward function is what we call the Signed Cumulative Distribution Transform (SCDT). Signed measures are handled using the Jordan decomposition, where the positive and negative portions are handled separately and independently. Theorem 2.7 shows that the mapping is bijective from the space of signed finite measures to the transform space. As such, the signed cumulative distribution transform described in this paper can be viewed as a mathematical signal representation method, with analytical forward and inverse operations, for arbitrary 1-D signals.
Following earlier work on the CDT [30], we also described several properties of the newly introduced SCDT. Proposition 3.1 states that for g(t) a strictly increasing surjective function, the SCDT of a signal (measure) η satisfying F η = F ν • g will be related to the SCDT of ν via (η ± ) , η = g † • (ν ± ) , ν ± . Simple corollaries of the proposition include the signal translation and scaling properties 3.2, 3.3, describing that transformations g(t) that shift the signal along the independent variable (t in this case) become transformations that shift the signal in the dependent variable in transform space. Proposition 3.1 (composition) and corollaries 3.2 (translation) and 3.3 (scaling/dilation) relate to the analysis of signals under rigid and non-rigid deformations (e.g., deformations in the independent variable time or space). Proposition 3.4 describes a generative model for classes of signals under the presence of deformations and describes the necessary and sufficient conditions for such classes to be convex in SCDT space. Section 3.3 describes a metric for 1D signals using the SCDT and Theorem 3.7 utilizes it, in combination with Proposition 3.4 and the Hahn-Banach separation theorem to establish sufficient conditions for linear separability of such signal classes in SCDT spaces. Finally, a computational example application of the technique to classifying signals under random translation and dilations using a simple linear classifier is shown.
The definition of the SCDT given in equations (12) and (13) is one of several possibilities. In the definition used here, we consider a positive, non trivial, reference measure µ 0 to which the positive and negative components of the Jordan decomposition of the signal measure are matched. Another possibility would be to use a reference measure that also admits a decomposition µ 0 = µ + 0 − µ − 0 , and then replace equation (12) with a similar version that matches the corresponding positive and negative parts: with the range of the transform being The properties of the transform (e.g. invertibility, composition, translation, dilation, convexity, etc.) under this alternative definition would remain the same, although the proofs would be slightly altered. Naturally, this alternative definition would require both µ + 0 and µ − 0 to be non-trivial. In summary, this paper presents a new tool for representing arbitrary signals by matching their corresponding measures to a reference measure. As such, it enables the extraction and analysis of information related to rigid and non-rigid deformations of the signal, which are difficult to decode especially in nonlinear estimation and classification problems. Future work will include exploring the application of SCDT to a variety of signal estimation and classification problems, extension of the transform presented here to higher dimensional signals, as well as sampling, reconstruction, compression, and approximation problems.
Proofs of results
This section contains the proofs of our results. Some of the proofs rely on certain properties of the monotone generalized inverse whose properties and proofs are relegated to the appendix.
Proof. Note that F µ is continuous, as µ does not give mass to atoms. So, for all y ∈ [0, 1] the set 1] as Borel measures, since they coincide on every interval [0, a], with a ∈ [0, 1]. As a consequence, for all l ∈ [0, 1], Lemma 5.2. Let µ, ν be two probability measures on R, and assume that µ does not give mass to atoms. Then, ψ = F † ν • F µ is non-decreasing and satisfies ψ # µ = ν. Proof. By Proposition 6.8(i), the function F † ν is non-decreasing. Thus, ψ : R → R is non-decreasing since it is a composition of two non-decreasing functions. To show 1] from Lemma 5.1, and properties of the push-forward operator, we obtain the desired result. Indeed, for y ∈ R, using Proposition 6.9 we get, Lemma 5.3. Let µ, ν be two probability measures on R, and assume that µ does not give mass to atoms. If φ : R → R is a non-decreasing function such that ν = φ # µ, Let φ be any non-decreasing function such that ν = φ # µ and assume, (possibly modifying φ on a countable set) that φ is right-continuous. Let T := {s ∈ supp µ : (s, s ) ∩ supp µ = ∅ for some s > s} and notice that T is at most countable (since we can index T with a family of pairwise disjoint open intervals). We claim φ ≥ F † ν •F µ on supp µ−T . Indeed, for s ∈ supp µ−T and s > s we have where last inequality follows from the fact that (s, s ) ∩ supp µ = ∅. Thus, we have that F ν (φ(s )) > F µ (s). Using the fact that F ν and F µ are non-decreasing and right continuous, we can apply Proposition 6.4(i) to get φ(s ) ≥ F † ν • F µ (s), a.e.-µ. Taking s → s we obtain φ(s) ≥ F † ν • F µ (s) for every s ∈ supp µ − T . In particular, using Lemma 5.2 we get For surjectivity, if φ : R → R is a non-decreasing µ 0 -a.e function, then the push-forward ν := φ # µ 0 of µ 0 by φ is a probability measure on R. By Lemma 5.3, for any probability measure ν, the transform ν = F † ν • F µ 0 is a unique non-decreasing µ 0 − a.e. function which satisfies ν # µ 0 = ν. Therefore, φ = ν µ 0 -a.e., i.e. φ lies in the image of the transform.
In order to prove surjectivity, consider the pair (f, r) where f : R → R is a function non-decreasing µ 0 -a.e. and r is a positive real number. Let ν := rf # µ 0 µ 0 , then ν is a finite positive Borel measure with ν = r since f # µ 0 µ 0 is a probability measure. In addition, and by Lemma 5.3 it is the unique non-decreasing µ 0 -a.e. function that satisfies Thus, f = ν * µ 0 -a.e., and the inverse transform is given by If (f, r) = (0, 0), then, from Definition 9, it is the transform of the zero measure.
Proof of Corollary 2.6. Corollary 2.6 follows from Theorem 2.5 as in the proof of Corollary 2.3.
Proofs of Theorem 2.7. Given a signed measure ν, T µ 0 (ν) is well-defined since there exists only one pair of mutually singular positive measures, ν + and ν − , such that where the operators T µ 0 should be understood from the context). Thus, injectivity follows by applying Theorem 2.2 separately on the positive and negative part.
Proof of Corollary 2.8. The proof of Corollary 2.8 follows from Theorem 2.7 as in Corollary 2.6.
5.2. Proofs of Section 3.1. The following Lemma is useful for the proofs of this section.
Proof. Throughout this proof, we will use the fact that if ν and η satisfy the hypothesis of the Lemma 5.4, then, F η (x) = F ν (g(x)) if and only if g # η = ν. In particular, under the hypothesis of the Lemma, η(R) = ν(R).
We adopt a proof similar to the one in [32].
Proof of Proposition 3.4. Given ν ∈ SM(R) we consider S ν,H as is (15). By the definition of S ν,H , Proposition 3.1 and Theorem 2.7 we have that Assume that H −1 is convex and fix ν ∈ SM(R). Let η h , η g be two arbitrary elements in S ν,H , that is, they are defined by F η h = F ν •h and F ηg = F ν •g for h, g ∈ H (here we are using the characterization of measures according Proposition 6.1 from the Appendix). For any α ∈ [0, 1], applying Proposition 3.1 we have In addition α ν ± + (1 − α) ν ± = ν ± . Thus, S ν,H is convex. For the converse statement, let h −1 , g −1 ∈ H −1 and α ∈ [0, 1]. For ν ∈ SM(R), assuming that S ν,H is convex we have that Thus, by the characterization of S ν,H given by (23) we obtain that αh −1 + (1 − α)g −1 coincides with a function in H −1 on the range of (ν ± ) * . Taking the family of target measures {ν T } T ∈R ∪ {δ ∞ , δ −∞ }, where δ ±∞ are Dirac measures centered at ±∞, and ν T is defined by ∀E ⊂ R Borel subset, and assuming S ν T ,H convex for every T ∈ R, by Corollary 3.3 we can conclude that Proposition 3.6 and its proof are well-known [41]. We include them in this paper for readability and completeness.
Proof of Proposition 3.6. It is well known that (cf. [18,4]) Then, by a change of variables and using Lemma 5.1, we obtain Proof of Theorem 3.7. Since D ν , D η are finite subsets of a normed space L 2 (µ 0 )×R 2 , their convex hulls conv D ν , and conv D η are compact. In addition, since D ν , D η are subsets of the convex sets S ν,H and S η,H , then conv D ν , conv D η are also subsets of S ν,H and S η,H . Finally, since S ν,H and S η,H are disjoint and T µ 0 is one to one, we get that conv D ν , and conv D η are disjoint non-empty convex and compact sets. By the Hahn-Banach Separation Theorem, they can be separated by a linear functional f . In particular, f separates D ν , D η .
6. Appendix This is an extension of the so called Lebesgue-Stieltjes Measure.
Proof. Direct part: • F µ is non-decreasing because given For the converse part, let F : R → R be a right-continuous, non-decreasing function satisfying (25). Denoting Then, T is non-decreasing, and therefore it is a measurable function. Notice that for and, for each x ∈ R, we obtain Notice that The uniqueness of the measure µ is a consequence of the Carathéodory Extension Theorem, which asserts that any finite measure on an algebra A extends in a unique way to a measure on the σ-algebra generated by A. Indeed, the equation implies that there is only one extension to the algebra of sets generated by {−∞} and half-open intervals (a, b] with a, b ∈ R, and so there is only one extension to the σ-algebra generated by these sets, which is B(R).
6.2. Monotone Generalised Inverse. In this section, we introduce the monotone generalized inverse for functions defined on the extended real line R, and we provide some of its relevant properties. In particular, the monotone generalized inverse of any function is always non-decreasing, and if a function F is continuous and strictly increasing then its monotone generalized inverse F † and its standard inverse F −1 coincide. This inverse and its properties are essential in defining and studying the transport transform on R. It has already been introduced in connection to transport theory, but only for functions on the real line [41], and some of its properties are well-known [11]. However, we also need other properties that we derive in this appendix.
Definition 6.2. For a function F : R → R, the monotone generalized inverse of F is the function F † : R → R defined as In particular (since inf ∅ = ∞), F † (∞) = ∞.
The following properties of F † are used in this paper. • If F is non-decreasing function on R, then Proposition 6.4. For any function F : Proof. Since F (x) > y, (i) follows from the definition of F † , and (ii) is the contrapositive of (i). Proof.
(i) Since F (x) = y and F is non-decreasing, x < s for any s ∈ U = {s : F (s) > y}. Thus, (i) follows from definition of F † . (ii) If F (x) < y. Then, since F is non-decreasing, x < s for any s such that F (s) > y. Thus, x ≤ F † (y). (iii) From (i) and (ii) imply that if F (x) ≤ y, then x ≤ F † (y), which is the contrapositive of (iii).
Proof. Since F is cumulative distribution of a probability measure ν on R, it is nondecreasing and right continuous. We then note that (29) and the conclusion of the proposition follows from inequalities (28) and (29).
|
2021-06-07T01:15:53.935Z
|
2021-06-03T00:00:00.000
|
{
"year": 2021,
"sha1": "9f89318dc0dc4b04a67fc186b61e42ef69e28ade",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "9f89318dc0dc4b04a67fc186b61e42ef69e28ade",
"s2fieldsofstudy": [
"Engineering",
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
}
|
238793104
|
pes2o/s2orc
|
v3-fos-license
|
Everyday life of the people's teachers of the Russian Empire in the second half of the 19th - early 20th centuries
This article examines the features of folk teachers' everyday life in the Russian Empire during the second half of the 19th - early 20th centuries. The study is based on such concepts as everyday life, lifestyle, standard of living, and working conditions. As part of the study of folk teachers' everyday life, the author focused on the consideration of the material and legal status, living and working conditions, and professional opportunities provided to teachers. Besides, the reasons for the frequent dismissals of folk teachers were studied. Based on a deep analysis of historical sources and literature, the author comes to the conclusion that by the end of the 19th - the beginning of the 20th centuries most of the teaching positions in public schools began to be occupied by female teachers, and attempts were made to raise their material and legal status at the legislative level. Despite such attempts, female folk teachers in the Russian Empire had a lower professional status than their male colleagues during the second half of the 19th - early 20th centuries. This was due to the fact that during this period women were not yet full members of society and the processes of women's emancipation in Russia proceeded much more slowly than in other countries.
Introduction
Nowadays, history has entered a new stage of its development: in particular, its methodological basis has changed, and new approaches and trends have begun to appear in the discipline.
These include historical anthropology, the history of everyday life, the history of mentality and others. The development of history within the framework of new trends presupposes its consideration from the point of view of a multidisciplinary approach when history penetrates into other disciplines.
Methods
The work was carried out considering the principles of scientific objectivity and historicism. The article uses the following methods of scientific knowledge. The historical-genetic method made it possible to trace the evolution of folk teachers' legal status. In particular, the paper characterizes the development of the social and legal status of folk teachers, which accompanied the reforms in the Russian Empire. The use of the historical retrospection method allows one to restore an objective picture of historical reality. This method is especially important when working on a poorly studied problem, as well as when using unpublished sources.
Results and Discussion
The concept of lifestyle comes to the fore as part of the study of folk teachers' everyday life. Considering the lifestyle of teachers in a historical context, we take the following manifestations as the basis: 1) standard of living, including living conditions and the level of material well-being; 2) quality of life, including the forms of free time; 3) daily work, including working conditions; 4) legal status, including the legal possibilities of teachers.
During the pre-revolutionary period, the gender aspect played a significant role in Russian society. The opportunities for women were severely limited, and the women's movement in our country did not develop as it did in Europe. Women were less likely to reach the professional level, because female education in Russia was very poorly developed. The bulk of young girls who were educated in a girls' gymnasium or a diocesan school could become folk teachers. Russian teachers were generally subdivided into male and female teachers and male and female assistants. This also revealed the gender aspect of the education system in Russia.
The profession of a teacher provided young girls with an opportunity to gain some independence, but one cannot say that teachers needed it so much. As Chekhov noted, the teachers who got married left their place of service more often, for various reasons (ZUBKOV, 2010).
In general, it should be noted that female teachers found it more difficult to endure the hardships of the profession; pressure from priests, teachers of the law, volost clerks and others often forced female teachers to leave their place of service. Male teachers also had a hard time, but the main reason for the dismissal of a male teacher was the low salary, since a man was also the head of a family.
During the second half of the 19th century, the number of female teachers in the Russian Empire increased, with a corresponding decrease in the number of male teachers. This was because women began to reach the professional level, while men were looking for more profitable jobs. For example, in the zemstvo schools of the Vyatka district during the 1887-1888 academic year, women made up 76.4% of teachers in rural schools and 82.9% in urban schools.
At the same time, it was argued that it was inconvenient to keep female teachers who got married, which resulted in frequent dismissals of such teachers. In particular, this happened in the Vyatka province. In 1903, the trustee of the Vilna school district asked the minister's permission to introduce the following provisions: first, "to declare circularly to female employees in the educational institutions that they would lose the right to continue their service without hindrance as teachers or class wardens; secondly, to oblige such persons to sign for the appropriate authorities to issue the orders for filling in vacancies" (The Russian state historical archive). The Deputy Minister P. Markov replied to this petition that he did not approve of accepting, as a general rule, that teachers and supervisors of the Ministry of Public Education be dismissed from service when they got married, because marriage cannot deprive a teacher of the rights acquired by her education. If, in a particular case, family responsibilities prevent a teacher from successfully fulfilling her former duties during service, then the issue of leaving her at a gymnasium should be resolved for each case separately (RGIA).
Let's consider the features of the everyday life of the teachers in the Russian Empire and the difficulties they faced.
Housing conditions were one of the indicators of folk teachers' living conditions. More often than not, teachers had to rent premises, since they were constantly moved and there was no guarantee that a teacher would stay in one place for a long time.
The apartments of the folk teachers were often located directly in some school building, which, naturally, limited the teachers greatly. The data on the public education of the Kazan province presents the characteristics of the most inconvenient apartments. "The teacher's apartment is located in the school, but it is inconvenient because there is no separation from the classroom, as it is placed in the classroom itself" (Knyagorsk school of the Mamadysh district of the Kazan province). "The apartment is located at the school and is separated from the classroom only by a board partition; It is wet, cold, the air is always spoiled in it" (Public education in the Kazan province).
Female teachers, like male teachers, had to put up with such living conditions, since low salaries did not allow them to rent more comfortable premises. As to the level of teachers' salaries, during the 1882-1883 academic year the teachers in the public schools of the Vyatka province received the following pay: 36% of teachers (229). Another legal benefit available to teachers was the award of gold and silver medals.
Since 1893, "male and female teachers who attracted the attention of their superiors by long and useful service were awarded gold and silver medals with the inscription 'For diligence' on the Andreevskaya and Aleksandrovskaya ribbons" (JARANSK, 1908).
Leave was another benefit. At the legislative level, no leave was provided for women; Russian legislation was written exclusively for men. Nevertheless, the increasing number of female teachers pushed the Ministry of Public Education to address the women's question, and in particular women's leave. During the years 1850-1851 a circular decision was made to grant leave on a general basis to women who held posts as female gymnasium teachers (RGIA). Based on this decision, "all persons serving at women's educational institutions may ask to be dismissed on leave for one month or more for household chores or other needs, but not longer than 4 months" (RGIA).
A teacher's daily work had several aspects. Of particular interest to us are the presence or absence of school premises, the frequency of teachers' transfers, and their relationships with the local population. The absence of their own buildings in some schools clearly shaped the daily work of folk teachers and forced the zemstvo to rent premises; in rural areas, peasant huts were hired for this purpose. Newspapers described the working conditions in these huts. One teacher who worked in such a peasant hut in the Oryol district recalled the following.
My class was housed in a rented building with the owner's heating and maintenance. It was a large hut with three tiny windows. The desks had to be pushed close together to accommodate about 50 students. The classroom was dark, stuffy, and dirty. The owner did not even want to add one window: "I will not indulge your whims, spoil the frame and freeze the hut." The owners baked bread in the room twice a week and closed the oven early. They did not care, as they lived in another hut, and all the smoke came to us. When we opened the door, the owners said: "Close the door! Otherwise, we will not heat the premises! Why are you letting the heat out?" There were a lot of cockroaches in the hut. They fell from the ceiling onto notebooks and crawled over the children's heads and clothes. I asked the owners: "Let us freeze the cockroaches out in winter, let us skip a day or two." "We will not freeze the hut for you." A member of the Zemstvo Council came. I had the following requests: to make vents, to convince the owners to bake bread in their own hut, and to heat the classroom with zemstvo firewood. And he replied: "If you don't like it, we don't keep you here" (RUSSIA, 1957).
Another feature of a teacher's everyday life was the relationship with the local population. The zemstvo teacher Benevitskaya, from the village of Kurchum in Nolinsky uyezd, decided to keep a student after lessons in order to correct his behavior. When the boy's father, P. P. Bugrev, learned of this, he came to the school and began to scold the teacher in cynical terms, and then took his son home. Benevitskaya was terribly frightened and fell ill (VYATSKAYA, 1913). The everyday life of folk female teachers was also greatly overshadowed by excessively strict teachers of religious law and by volost clerks who did not pay salaries on time, so that teachers had to appeal to the district council several times. The dismissal of teachers did not always pass unnoticed: there were cases when local residents, among whom teachers were very popular and highly respected, came to their defense. Such a situation arose in the Yaransk district of the Vyatka province. The newspaper "Vyatskaya Rech" (25 April 1908) reported that "the school in the village of Nikolskoye had to be closed, since the peasants declared a boycott of those teachers who would take the place of the teacher Dernova, who was very popular among the Nikolsk peasants" (ORLOV, 1908).
The cultural, racial, social, religious and ethnic heterogeneity of educational groups is a problem of modern education in the context of integration and globalization. It often becomes a source of misunderstanding, and sometimes of aggression, in interactions between representatives of opposing worldviews, preferences or traditions (SOKOL, 2021).
Conclusions
The position of female teachers in pre-revolutionary Russian society was rather difficult.
On the one hand, there were the rights and opportunities provided by this kind of service; on the other, oppression and resentment from the authorities and the public. We associate the latter with the fact that the position of women in society had not yet taken shape during this historical period. On the one hand, women reached a professional level, began to receive education at various levels and to master a profession. On the other hand, public opinion still limited the role of women to that of mother and wife, which is associated with the patriarchal nature of Russian society. Under these conditions, a woman had to cope with growing responsibility on the one hand and oppression on the other, and to find her place in Russian society. Sources show that not all women coped with this situation, and some were forced to leave their jobs. However, it was not uncommon for female teachers not only to keep their jobs but also to achieve great success in their field.
Summary
Thus, we have identified the characteristics of the everyday life of female teachers, which comprised living conditions, legal security, and everyday work. Analyzing the sources, we concluded that by the end of the 19th and the beginning of the 20th centuries, female folk teachers occupied the majority of teaching positions in public schools and, at the same time, had a low financial position and a low professional status in general. However, it is important to note that attempts were made to change their legal status at the legislative level; in particular, female teachers had the opportunity to receive pensions and awards, and they were granted leave. Further study of the history of the everyday life of folk female teachers will allow us to determine the overall place of the teaching profession in the social hierarchy of pre-revolutionary Russian society.
|
2021-10-15T00:10:01.394Z
|
2021-05-01T00:00:00.000
|
{
"year": 2021,
"sha1": "2f4179d38ff3d33a699e27c3b4d9b61bd11bd703",
"oa_license": "CCBYNCSA",
"oa_url": "https://periodicos.fclar.unesp.br/rpge/article/download/15267/11135",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "b79b6a22d57b30c05b236c54f333dcaa809e03ac",
"s2fieldsofstudy": [
"History",
"Education"
],
"extfieldsofstudy": [
"History"
]
}
|
247792508
|
pes2o/s2orc
|
v3-fos-license
|
Circular RNA PLCE1 promotes epithelial mesenchymal transformation, glycolysis in colorectal cancer and M2 polarization of tumor-associated macrophages
ABSTRACT Numerous studies have shown that circular RNAs (circRNAs) are crucial in the occurrence and development of colorectal cancer (CRC), but their functions have not been fully elucidated. The purpose of this study was to investigate the biological functions of circPLCE1 in epithelial-mesenchymal transformation (EMT) and glycolysis in CRC, and in tumor-associated macrophage (TAM) polarization. The results confirmed that circPLCE1 and γ-actin gene (ACTG1) were elevated, while miR-485-5p was decreased, in CRC. Knockdown of circPLCE1 suppressed CRC proliferation, glucose consumption, lactic acid and pyruvate production, the M2 macrophage markers (IL-10, MRC1), N-cadherin and Snail, and reduced the proportion of CD206+ and CD163+ macrophages, while increasing the M1 macrophage markers (TNF-α, IL-6) and E-cadherin; down-regulation of miR-485-5p promoted EMT, glycolysis in CRC and TAM M2 polarization. Additionally, the effects of circPLCE1 knockdown or overexpression on EMT, glycolysis in CRC and TAM M2 polarization were reversed by up-regulation of ACTG1 and miR-485-5p, respectively. Mechanistic studies showed that circPLCE1 acts as a competitive endogenous RNA that adsorbs miR-485-5p to regulate ACTG1. Knockdown of circPLCE1 also restrained CRC tumor growth, EMT and TAM M2 polarization in vivo. In brief, circPLCE1 promotes EMT, glycolysis in CRC and TAM M2 polarization by modulating the miR-485-5p/ACTG1 axis, and may be a potential molecular target for CRC therapy.
Introduction
Colorectal cancer (CRC) is a common malignant tumor of the digestive system and remains a huge public health challenge worldwide [1]. Among malignancies, CRC has a strikingly high incidence and mortality, ranking third and second, respectively [2]. Current CRC therapy mainly comprises radiotherapy, surgery, chemotherapy, and other adjuvant therapies, but outcomes remain unsatisfactory and the prognosis is poor [3]. Hence, it is vital to gain a deeper understanding of the pathogenesis of CRC and to seek new therapeutic targets.
Most previous studies of CRC have concentrated on cell behaviors such as proliferation, migration and invasion [4,5]. Many studies have shown that changes in the tumor microenvironment are crucial to the malignant metastasis and proliferation of CRC [6,7]. Tumor-associated macrophages (TAM) in the tumor microenvironment are more likely to polarize into the M2 type and promote tumor proliferation, metastasis, migration, and angiogenesis [8,9]. Meanwhile, epithelial-mesenchymal transformation (EMT) is a crucial driver of tumor invasion and metastasis [10,11]. During EMT, basal epithelial cells lose the epithelial phenotype and gain a mesenchymal phenotype, characterized by decreased E-cadherin, increased N-cadherin and Snail, and destruction of the extracellular matrix, which is important in tumor metastasis [12,13]. Tumor progression and metastasis require a great deal of energy, and glycolysis is the main energy source of tumors. Nevertheless, the mechanisms underlying TAM polarization, EMT and glycolysis in CRC have not been fully elucidated [14,15]. Circular RNAs (circRNAs), a kind of non-coding RNA, are abundant and stable in tissues and cells [16]. Recently, many studies have described the role of circRNAs in the malignant metastasis and proliferation of CRC. For instance, circ3823 is differentially expressed in CRC and promotes its angiogenesis, growth, and metastasis [17]. CircRHOBTB3 restrains CRC metastasis by mediating the HUR-mediated mRNA stability of Polypyrimidine tract-binding protein 1 [18]. CircMFN2 promotes CRC radiation resistance, proliferation and metastasis by controlling the microRNA (miR)-574-3p/type 1 insulin-like growth factor receptor pathway [19]. Recently, Chen Z et al. reported that a novel circRNA, circPLCE1, promotes CRC proliferation and migration [20]. Nevertheless, the role of circPLCE1 in TAM polarization, EMT and glycolysis in the CRC tumor microenvironment remains unclear.
In this study, we examined the expression of circPLCE1 in CRC tissues and cells and confirmed its clinical significance. In addition, we explored the effects of circPLCE1 on TAM M2 polarization, EMT and glycolysis of CRC cells through gain- and loss-of-function experiments. Finally, bioinformatics and functional rescue experiments were applied to identify the underlying molecular mechanism by which circPLCE1 influences CRC progression via the miR-485-5p/γ-actin gene (ACTG1) axis.
Clinical samples
From April 2017 to January 2020, CRC patients admitted to Maotai Hospital affiliated to Zunyi Medical University (n = 38) were included. None of the patients had received preoperative chemotherapy or radiotherapy. Fresh tumor and paired para-cancerous tissues were collected during tumor resection, and all resected tissues were confirmed by pathology. Tissues were stored in liquid nitrogen at −80°C for subsequent research. The research methods met the criteria set out in the Declaration of Helsinki. This study was approved by the Ethics Committee of Maotai Hospital affiliated to Zunyi Medical University, and informed consent was obtained from all patients.
Reverse transcription quantitative polymerase chain reaction (RT-qPCR)
RT-qPCR was conducted as previously described [21]. In short, total RNA was extracted from samples with Trizol reagent (Takara, Otsu, Japan), and reverse transcription was performed with the HiScript® III 1st Strand cDNA Synthesis Kit (Vazyme, Nanjing, China). SYBR Green reagent (Vazyme, Nanjing, China) and an RT-qPCR assay system (Analytic, Jena, Germany) were used in all RT-qPCR tests. Quantitative measurements were obtained by the 2^−ΔΔCT method and normalized to glyceraldehyde-3-phosphate dehydrogenase (GAPDH) or U6 levels. Primer sequences are listed in Table 1.
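As an illustration of the relative quantification step described above, the following minimal sketch computes 2^−ΔΔCT values from Ct values; the function name and all Ct numbers are hypothetical and only illustrate the arithmetic, they are not taken from the study.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt method.

    ct_target / ct_ref: Ct of the gene of interest and of the reference gene
    (e.g., GAPDH or U6) in the sample of interest; the *_ctrl arguments are
    the same two Ct values measured in the control sample.
    """
    d_ct_sample = ct_target - ct_ref              # normalize to the reference gene
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_control            # normalize to the control sample
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values, for illustration only.
print(relative_expression(ct_target=24.1, ct_ref=18.0,
                          ct_target_ctrl=26.5, ct_ref_ctrl=18.2))
```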
Acquisition of conditioned medium (CM)
To obtain CM from SW480 cells, SW480 cells at 80% confluence were washed with FBS-free medium and maintained in fresh FBS-free medium. THP-1-differentiated macrophages were then treated with the CM of SW480 cells.
Colony formation
Cell proliferation was detected by colony formation assay [22]. SW480 cells were seeded in 6-well plates at a density of 5 × 10^3 cells per well, with the medium changed every 3 d. After the medium and unbound cells were removed, adherent cells were fixed with 4% paraformaldehyde and stained with 0.01% crystal violet. Colonies of more than 50 cells were counted. N = 3.
Flow cytometry
Cell apoptosis was detected with an Annexin V-fluorescein isothiocyanate kit (Thermo Fisher Scientific, Inc.) [23]. SW480 cells were collected, resuspended in cold phosphate-buffered saline (PBS), centrifuged, and incubated with 5 μL Annexin V-Alexa Fluor 647 and then with 5 μL propidium iodide before detection. Finally, the apoptosis rate was analyzed on a FACSCalibur flow cytometer (BD Biosciences, Franklin Lakes, NJ).
Glycolysis analysis
Pyruvate and lactate production and glucose consumption in SW480 cells were determined with pyruvate, lactic acid, and glucose detection kits (ab65342, ab65331, ab136955, Abcam), following the manufacturer's protocols.
Western blot
Radio-immunoprecipitation assay (RIPA) lysis buffer (Beyotime, Shanghai, China) and the Bradford Protein Assay Kit (Beyotime) were used for protein extraction and concentration measurement. Protein samples (15 μg) were separated by 10% sodium dodecyl sulfate polyacrylamide gel electrophoresis and electroblotted onto polyvinylidene fluoride membranes (Millipore, Bedford, MA, USA). The membranes were then blocked with 5% skim milk and probed with primary antibodies against ACTG1 (4968), E-cadherin (3195), N-cadherin (13,116), Snail (3879) and GAPDH (2118) (all Cell Signaling Technology). The membranes were incubated with horseradish peroxidase-coupled secondary antibody (Beyotime), followed by visualization with electrogenerated chemiluminescence reagent (Millipore). GAPDH was used as the loading control.
The luciferase activity assay
The dual-luciferase reporter assay was performed as described previously [24]. Wild-type/mutant reporter vectors (ACTG1/circPLCE1-WT/MUT) (Synthgene Biotech, Nanjing, China) were subcloned into the pmirGLO vector (Promega, Madison, WI, USA). The vectors were co-transfected with miR-485-5p mimic or its NC into SW480 cells via Lipofectamine 3000 (Invitrogen). The relative luciferase activity of the cells was measured with a Dual-Glo luciferase assay system (Promega, Shanghai, China) following the manufacturer's instructions, and was normalized to Renilla signals.
RNA immunoprecipitation (RIP) assay
The EZ-Magna RIP kit (Millipore) was used for RIP according to the manufacturer's instructions [25]. SW480 cells were lysed with RIPA lysis buffer containing a mixture of protease and phosphatase inhibitors (Sigma-Aldrich Chemical Company, St Louis, MO, USA). Magnetic beads (Invitrogen) were preincubated with anti-AGO2 (ab32381, Abcam) or anti-immunoglobulin G (ab6721, Abcam) and then incubated with the lysate for immunoprecipitation. RNA was purified from the RNA-protein complexes and analyzed by RT-qPCR.
Xenograft in nude mice
Fourteen male 6-week-old athymic BALB/c nude mice were obtained from SLAC Laboratory Animal Co. Ltd (Shanghai, China). Animal experiments were carried out following the guidelines of the Animal Care and Use Committee of Maotai Hospital affiliated to Zunyi Medical University. SW480 cells (2 × 10^6 cells) with circPLCE1 knockdown were injected subcutaneously into the flank of nude mice. Tumor growth was recorded weekly with a vernier caliper, and tumor volume was calculated as (a × b^2) × 0.5 (a, long axis; b, short axis). On the 28th day after injection, the mice were euthanized, and the tumors were resected, photographed and weighed. After fixation of the tumor tissue with 4% paraformaldehyde and embedding in paraffin, immunohistochemical analysis of tumor tissues was performed with antibodies against Ki-67, CD206 and CD163.
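For clarity, the short sketch below applies the caliper formula, read here as volume = long axis × (short axis)^2 × 0.5, to hypothetical weekly measurements; the readings are invented for illustration only.

```python
def tumor_volume(long_axis_mm, short_axis_mm):
    """Caliper (ellipsoid) approximation: 0.5 * a * b^2, in mm^3."""
    return 0.5 * long_axis_mm * short_axis_mm ** 2

# Hypothetical weekly caliper readings (long axis a, short axis b) in mm.
weekly = [(4.0, 3.0), (6.5, 4.8), (9.0, 6.1), (12.2, 8.0)]
print([round(tumor_volume(a, b), 1) for a, b in weekly])
```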
Immunohistochemistry
Immunohistochemistry was performed as previously described [26]. Paraffin sections were dewaxed in xylene and rehydrated through an alcohol gradient. After antigen retrieval, all sections were blocked in an avidin/biotin blocking buffer (Vector Laboratories) and then in 3% bovine serum albumin. The sections were incubated with primary antibodies against Ki-67 (ab15580, Abcam), CD206 (18,704-1-AP, Proteintech) and CD163 (ab182422, Abcam). Protein staining was performed with a diaminobenzidine substrate kit (Maixin Biotech, Kit-9710), and the samples were counterstained with hematoxylin. Immunohistochemical images were acquired with an upright microscope (Olympus BX51); brown staining indicated immunoreactivity.
Data analysis
All data are expressed as mean ± standard deviation (SD). SPSS 18.0 (IBM, Armonk, NY, USA) and GraphPad Prism 9.0 were used for data analysis and plotting, respectively. Student's t-test was used for comparisons between two groups, and the Chi-square test was used to assess differences in clinical features. P < 0.05 was considered statistically significant. All experiments were biologically replicated at least three times.
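As a sketch of the described analysis (mean ± SD and a two-group Student's t-test at P < 0.05), assuming SciPy is available; the triplicate values below are invented for illustration and do not correspond to any experiment in the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical triplicate measurements for two groups (e.g., si-NC vs si-circPLCE1).
group_a = np.array([1.00, 0.95, 1.08])
group_b = np.array([0.52, 0.47, 0.60])

for name, g in (("group A", group_a), ("group B", group_b)):
    print(f"{name}: {g.mean():.2f} +/- {g.std(ddof=1):.2f} (mean +/- SD)")

t_stat, p_value = stats.ttest_ind(group_a, group_b)   # two-sample Student's t-test
print(f"t = {t_stat:.2f}, P = {p_value:.4f}, significant at 0.05: {p_value < 0.05}")
```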
CircPLCE1 is elevated in CRC
In this study, circPLCE1 expression was examined in CRC, confirming its elevation in CRC tissues and in the SW480 cell line versus adjacent normal tissues and the human normal colonic epithelial cell line NCM460 (Figure 1(a,b)), in agreement with a former report [20]. To examine the relationship between circPLCE1 and the clinicopathological features of CRC patients, patients were assigned to high and low circPLCE1 groups based on the median circPLCE1 expression. As shown in Table 2, circPLCE1 was associated with lymph node metastasis in CRC patients. Overall, circPLCE1 was up-regulated in CRC and linked to clinicopathology.
Knockdown of circPLCE1 represses TAM M2 polarization, and CRC EMT and glycolysis
Next, the biological function of circPLCE1 in CRC was explored. siRNA targeting circPLCE1 was transfected into SW480 cells to knock down circPLCE1, and the knockdown was verified (Figure 2(a)). CircPLCE1 knockdown restrained SW480 cell proliferation and increased the apoptosis rate (Figure 2(b,c)). To explore the impact of circPLCE1 knockdown on macrophage polarization, PMA-induced human monocyte THP-1 cells were cultured with CM from SW480 cells transfected with si-circPLCE1. CircPLCE1 knockdown reduced the percentage of CD206+ and CD163+ cells in THP-1-derived macrophages (CD206 and CD163 are surface markers of M2 macrophages) (Figure 2(d)). Meanwhile, detection of the M1 macrophage markers TNF-α and IL-6 and the M2 macrophage markers IL-10 and MRC1 showed that circPLCE1 suppression elevated TNF-α and IL-6 but restrained IL-10 and MRC1 (Figure 2(e)). Glycolysis is a crucial route by which cancer cells gain energy for growth and metastasis, so the impact of circPLCE1 depletion on CRC glycolysis was examined. As shown in Figure 2(f), circPLCE1 knockdown reduced glucose consumption and lactate and pyruvate production in SW480 cells. The influence of circPLCE1 suppression on EMT was then examined: as shown in Figure 2(g), circPLCE1 knockdown increased E-cadherin but restrained N-cadherin and Snail. In short, circPLCE1 suppression effectively restrained TAM M2 polarization, and CRC EMT and glycolysis.
CircPLCE1 competitively adsorbs miR-485-5p
Next, the relationship between circPLCE1 and miR-485-5p was explored. First, the bioinformatics website http://starbase.sysu.edu.cn/ predicted potential binding sites between circPLCE1 and miR-485-5p (Figure 4(a)). The dual-luciferase reporter assay showed that WT-circPLCE1 reduced the luciferase activity in the miR-485-5p mimic group, whereas MUT-circPLCE1 did not (Figure 4(b)). The results were further verified by RIP assay: as shown in Figure 4(c), miR-485-5p and circPLCE1 were enriched in the Ago2 group. Meanwhile, knocking down circPLCE1 elevated the expression of miR-485-5p in SW480 cells. (Data are expressed as mean ± SD (n = 3); *P < 0.05.)
ACTG1 is the target gene of miR-485-5p
MiRNAs exert their biological functions by binding to downstream targets, so the target genes of miR-485-5p were explored next. The bioinformatics website http://starbase.sysu.edu.cn/ predicted potential binding sites between ACTG1 and miR-485-5p (Figure 6(a)). To validate this prediction, a dual-luciferase reporter assay was carried out; the results revealed that WT-ACTG1 reduced the luciferase activity in the miR-485-5p mimic group, while MUT-ACTG1 had no effect (Figure 6(b)). In addition, the RIP assay showed that miR-485-5p and ACTG1 were markedly enriched in the Ago2 group (Figure 6(c)). Former studies have established that ACTG1 is an oncogene in hepatocellular carcinoma and colorectal adenocarcinoma [29,30]. ACTG1 was elevated in CRC tissues and cells versus normal controls (Figure 6(d)), further supporting the idea that ACTG1 is an oncogene in cancer. Meanwhile, down-regulation of miR-485-5p strengthened ACTG1 expression (Figure 6(e)), indicating that miR-485-5p modulates ACTG1 in CRC.
Knockdown of circPLCE1 represses tumor growth, TAM M2 polarization, and CRC EMT in vivo
To support the in vitro results, in vivo experiments were performed for validation, as shown in Figure 8.
[Figure legend: a. RT-qPCR detection of circPLCE1 and miR-485-5p after co-transfection of pcDNA 3.1-circPLCE1 and miR-485-5p mimic; b. colony formation assay of the effect of elevated circPLCE1 and miR-485-5p on SW480 cell proliferation; c. flow cytometry of apoptosis; d. flow cytometry of the proportion of CD206+ and CD163+ cells among macrophages cultured with CM from SW480 cells with elevated circPLCE1 and miR-485-5p; e. RT-qPCR of the M1 markers TNF-α and IL-6 and the M2 markers IL-10 and MRC1; f. glucose consumption, lactic acid and pyruvate production in SW480 cells; g. Western blot of E-cadherin, N-cadherin and Snail. Data are expressed as mean ± SD (n = 3); *P < 0.05.]
Discussion
Nowadays, the long-term survival of CRC patients remains poor, principally because treatment options are limited [31]. Therefore, it is crucial to further understand the mechanisms by which CRC arises and develops. Recently, many studies have shown that circRNAs are crucial in the development of CRC, but the specific mechanisms have not been fully worked out. In this study, we further examined the biological role of the novel circRNA circPLCE1 in CRC macrophage polarization, glycolysis and EMT, showing that circPLCE1 promotes TAM M2 polarization, and CRC EMT and glycolysis, by competitively binding miR-485-5p to regulate ACTG1.
Many studies have highlighted the potential of circRNAs differentially expressed in the serum of cancer patients as biomarkers for cancer diagnosis and prognosis [32]. In this study, circPLCE1 was found to be elevated in CRC tissues and cells and linked with lymph node metastasis in CRC patients, consistent with previous results [20]. We therefore speculate that circPLCE1 might serve as a biomarker for the diagnosis and prognosis of CRC. Although the biological mechanism of circPLCE1 in CRC has been partially clarified, it will be important to further analyze circPLCE1 in the serum of CRC patients in subsequent studies. A former study showed that circPLCE1 promotes CRC cell proliferation but represses apoptosis, which is consistent with the results of this study [20] and provides further evidence that circPLCE1 acts as a proto-oncogene in CRC.
To explore whether circPLCE1 in CRC cells could impact TAM polarization in the tumor microenvironment, macrophages were cultured in CM from SW480 cells with circPLCE1 knockdown, which showed that reduced circPLCE1 restrained TAM polarization toward the M2 type. Numerous studies have shown that long non-coding RNAs (lncRNAs) can control TAM polarization in CRC. For example, lncRNA HLA-F-AS1 promotes CRC metastasis by stimulating PFN1 in the exosomes of CRC cells and mediating macrophage polarization [33], and lncRNA RPPH1 promotes CRC metastasis by interacting with TUBB3 and driving exosome-mediated macrophage M2 polarization [34]. Nevertheless, the role of circRNAs in TAM polarization remains uncertain, and this study provides a first demonstration of a circRNA's role in CRC TAM polarization. Elevated circPLCE1 promotes the polarization of TAM into the M2 type and represses polarization toward the M1 type, which may be conducive to the proliferation and distal metastasis of CRC. One study has shown that lncRNA XIST promotes the proliferation and metastasis of breast and ovarian cancer by driving the polarization of TAM into the M2 type [35]. This indicates that targeting non-coding RNAs in the tumor microenvironment, rather than in cancer cells alone, may be effective in modulating tumor growth or metastasis. Nevertheless, the expression and biological function of circPLCE1 in TAM themselves are still unclear and need to be explored in subsequent studies.
[Figure legend: a. Western blot of the impact of co-transfected si-circPLCE1 and oe-ACTG1 on ACTG1 in SW480 cells; b. colony formation assay of SW480 cell proliferation after circPLCE1 knockdown with or without ACTG1 elevation; c. flow cytometry of apoptosis; d. flow cytometry of the proportion of CD206+ and CD163+ cells among macrophages cultured with CM from these cells; e. RT-qPCR of the M1 markers TNF-α and IL-6 and the M2 markers IL-10 and MRC1; f. glucose consumption, lactic acid and pyruvate production; g. Western blot of E-cadherin, N-cadherin and Snail. Data are expressed as mean ± SD (n = 3); *P < 0.05.]
Glucose reprogramming is a fundamental feature of cancer cells. Unlike normal cells, cancer cells preferentially metabolize glucose through glycolysis, thereby enhancing glucose uptake and lactate production [36]. The energy gained through glycolysis supports cancer cell proliferation, invasion, migration, EMT, and distal metastasis [37]. Former studies have shown that circNOX4 [38], circ0136666 [39], circTADA2A [40] and other circRNAs are important in the glycolysis of CRC. This study adds the novel circRNA circPLCE1, which promotes glycolysis in CRC, suggesting that elevated circPLCE1 drives glucose metabolism that supplies energy for the proliferation and metastasis of CRC. A former study reported that ACTG1 promotes glycolysis in hepatocellular carcinoma [29]. Notably, exploration of the mechanism by which circPLCE1 modulates CRC glycolysis showed that it functions through competitive adsorption of miR-485-5p to regulate ACTG1, further emphasizing that ACTG1 is vital in cancer glycolysis and may be a new target for repressing cancer glycolysis in the future.
Conclusion
In general, the role of circPLCE1 in the biological processes of CRC was further clarified. It acts as a competitive endogenous RNA for miR-485-5p and regulates ACTG1 to promote TAM M2 polarization, and CRC EMT and glycolysis. The circPLCE1/miR-485-5p/ACTG1 axis may be a potential molecular target for future CRC treatment.
|
2022-03-31T06:22:54.850Z
|
2022-03-01T00:00:00.000
|
{
"year": 2022,
"sha1": "72d108447d76644430689b32bbaf425b0a513bcf",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/21655979.2021.2003929?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "2130c0e1cfd473f45a6e26cf96113fcdcf0cb2c9",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
212718010
|
pes2o/s2orc
|
v3-fos-license
|
A Risk Aware Two-Stage Market Mechanism for Electricity with Renewable Generation
Over the last few decades, electricity markets around the world have adopted multi-settlement structures, allowing for balancing of supply and demand as more accurate forecast information becomes available. Given increasing uncertainty due to adoption of renewables, more recent market design work has focused on optimization of expectation of some quantity, e.g. social welfare. However, social planners and policy makers are often risk averse, so that such risk neutral formulations do not adequately reflect prevailing attitudes towards risk, nor explain the decisions that follow. Hence we incorporate the commonly used risk measure conditional value at risk (CVaR) into the central planning objective, and study how a two-stage market operates when the individual generators are risk neutral. Our primary result is to show existence (by construction) of a sequential competitive equilibrium (SCEq) in this risk-aware two-stage market. Given equilibrium prices, we design a market mechanism which achieves social cost minimization assuming that agents are non strategic.
This kind of problem can be formulated as a two-stage stochastic program, and in fact it is possible to show that stochastic clearing is more efficient than two-settlement systems [6], [8].
There are a couple of issues with the use of expected social welfare as an objective function. In purely mathematical terms, a given realization of a random variable can be quite different from its expectation. Thus optimization of an expectation guarantees little in terms of variation over possible outcomes. Further, real-world observations indicate that economic decision makers are risk averse, or at least act so [1]. Therefore, given increasing levels of generation variability, it is of both theoretical and practical interest to incorporate some notion of risk into market objective functions.
In this paper we study how the introduction of risk preferences into the central objective function affects market operation.
We consider a setting with an ISO and multiple generators. The ISO owns a nondispatchable, renewable resource, and the market clears in two stages: a forward stage in which only a forecast for renewable generation is available, and a real time stage, wherein the exact realized renewable generation is known. The generators each own primary and ancillary plants, which may be dispatched in the forward and real time stages, respectively. In the forward market, the ISO schedules primary energy procurements from the generators, and in the real time market purchases ancillary service where necessary. All participants are assumed to be non-strategic price takers. However, while we assume that the generators seek to maximize their expected profit, the ISO is risk averse and minimizes a weighted sum of the expectation and conditional value at risk (CVaR) of its costs. CVaR has over the past two decades become the most widely used risk measure, due to the fact that it is a coherent risk measure, and can be calculated via a convex program [4].
Our main result is the proof of existence of a sequential competitive equilibrium (SCEq) in this risk aware, two-stage market with recourse. In particular, we demonstrate the existence of first and second stage prices such that, given these prices, the generation decisions of the generators in both decisions achieve market clearance in stage two, thus balancing supply and demand. We then specify a two-stage market mechanism which implements the SCEq.
Related work. Numerous past works have studied market and mechanism design and equilibrium outcomes in the two-stage expected welfare maximization, or risk neutral setting, e.g., [3], [15] and [16].
Turning to literature which incorporates risk preferences, several works consider settings in which agents may enter into contracts in order to hedge against risky outcomes. In [11] it is shown that a complete market, wherein all uncertainties can be addressed via a balanced set of contracts, involving agents equipped with coherent risk measures, is equivalent to one in which said agents are risk neutral, and take actions based on a probability density function determined by a system risk agent.
The work then investigates necessary and sufficient conditions for the existence of an equilibrium consisting of allocations, prices and contracts. Assuming a similar setting in the context of hydrothermal markets, [10] shows that, given a sufficiently rich set of securities available to risk averse agents, a multi-stage competitive equilibrium may be derived from the solution to a risk-averse social planning problem. [6] investigates difficulties that may arise when risk averse agents maximize their welfare in a market that is not complete, including the existence of multiple, potentially stable equilibria. Our setting differs from these works in that we have one risk aware customer and multiple risk neutral producers, and we do not allow for transactions between agents outside of the quantities of energy purchased and consumed.
A. Risk Measures
In stochastic optimization we are concerned with losses Z(ω) = L(x, ω) that are a function both of a decision x and of some random outcome ω, unknown when the decision is made. Generally speaking, a risk measure is a functional which accepts as input the entire collection of realizations Z(ω), ω ∈ Ω.
More specifically, consider a sample space (Ω, F) equipped with sigma algebra F, on which random functions Z = Z(ω) are defined. A risk measure ρ(Z) maps such random functions into the extended real line [13]. Oftentimes the domain of ρ, denoted Z, is taken as L_p(Ω, F, P) for some p ∈ [1, +∞) and reference probability measure P. The following characteristics of risk measures will become useful in later sections. We denote by Z ⪰ Z′ the pointwise partial order, meaning Z(ω) ≥ Z′(ω) for a.e. ω ∈ Ω.
Definition 3:
A risk measure is coherent if it is monotonic, convex and satisfies translation equivariance and positive homogeneity (see [13] for details on these properties).
B. Conditional value at risk
In the following sections, we will focus in particular on conditional value at risk, or CVaR. CVaR is an example of a coherent risk measure [13]. Before defining CVaR, we introduce the related quantity, value at risk.
Suppose that random variable Z is distributed according to Borel probability measure P , with associated sample space (Ω, F ), and cdf F . When Z represents losses, the α-Value-at-Risk is defined as follows.
Definition 4: For a given confidence level α ∈ (0, 1), the α-Value-at-Risk or VaR_α of random loss Z = Z(ω) is VaR_α(Z) := inf{z ∈ R : F(z) ≥ α}. Thus, VaR_α(Z) is the lowest amount z such that, with probability α, Z will not exceed z. In the case where F is continuous, VaR_α(Z) is the unique z satisfying F(z) = α. Otherwise, it is possible that the equation F(z) = α has no solution, or an interval of solutions, depending upon the choice of α. This, among other difficulties, motivates the use of alternative risk measures such as CVaR [12].
Informally, CVaR_α of Z gives the expected value of Z, given that Z ≥ VaR_α(Z). The precise definition is as follows.
Definition 5: [12] Let φ_α(Z, ζ) := ζ + (1/(1 − α)) E[[Z − ζ]_+]. Then CVaR_α(Z) = min_ζ φ_α(Z, ζ), and VaR_α(Z) is the lower endpoint of arg min_ζ φ_α(Z, ζ).
It follows from the joint convexity of φ_α in Z and ζ that CVaR_α is convex over Z. Restricting attention to random losses Z(ω) = L(x, ω) which depend upon a decision x, we have the following result (Theorem 1): if L(x, ω) is convex in x, then CVaR_α(L(x, ω)) is convex in x [12].
Theorem 1 will later ensure that optimization problems with objectives including a CVaR α term are convex.
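As a numerical illustration of Definition 5, the sketch below estimates VaR_α and CVaR_α of a sampled loss by evaluating φ_α(Z, ζ) = ζ + E[[Z − ζ]_+]/(1 − α) at candidate values of ζ; the lognormal loss distribution is an arbitrary example and is not part of the paper's model.

```python
import numpy as np

def var_cvar(samples, alpha):
    """Estimate VaR_alpha and CVaR_alpha of a loss sample.

    phi(zeta) = zeta + E[(Z - zeta)_+] / (1 - alpha) is evaluated at every
    sample point; for an empirical distribution its minimum is attained at
    one of them, giving CVaR, and the minimizing zeta gives VaR.
    """
    z = np.sort(np.asarray(samples, dtype=float))
    phi = np.array([zeta + np.maximum(z - zeta, 0.0).mean() / (1.0 - alpha)
                    for zeta in z])
    k = int(np.argmin(phi))
    return z[k], phi[k]

rng = np.random.default_rng(0)
losses = rng.lognormal(mean=0.0, sigma=0.5, size=5000)  # arbitrary example losses
var95, cvar95 = var_cvar(losses, alpha=0.95)
print(f"VaR_0.95 ~ {var95:.3f}  CVaR_0.95 ~ {cvar95:.3f}")
```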
III. RISK AWARE STOCHASTIC ECONOMIC DISPATCH FORMULATION
We consider a setting with N conventional generators, and a single renewable generator. An additional entity, the independent system operator (ISO) operates the power grid and plays the role of the social planner (from this point we use the terms interchangeably). For simplicity we consider a single bus network.
We consider a two-stage setting, where generation is dispatched in the first stage (also referred to as day-ahead or DA) and then adjusted in the second stage (real time or RT) to match demand.
Let D ≥ 0 denote the aggregate demand. This demand is assumed inelastic, i.e., it is not affected by changes in first or second stage prices.
The renewable generator's output is modeled as a nonnegative random variable W, upper bounded by W̄ > 0. We make the following additional assumption on the distribution of W.
Assumption 1: Random variable W is distributed according to pdf f_W (and cdf F_W), which is continuous and positive on [0, W̄]. The probability distribution of W is assumed to be known to all market participants. The marginal cost of renewable generation is zero. The quantity of renewable generation scheduled is denoted y.
Conventional generator i has access to a primary plant and an ancillary plant. Generator i schedules its primary plant to produce x_i^G ≥ 0 in the first stage. We assume the primary plant is inflexible, so that its generation level must remain fixed once it is scheduled. After realization of W, generator i can activate its ancillary plant to produce z_i^G(w) ≥ 0. Any ancillary generation produced in excess of aggregate demand D can be disposed of or sold in a separate spot market, which we do not consider. We assume that a_i < ã_i for all i, a_i ≠ a_j for i ≠ j, max_i a_i < min_i ã_i, and ã_i ≠ ã_j for i ≠ j.
The generator is compensated for its first stage production x_i^G at price P_1. In the second stage, given W = w, the generator is compensated for second stage generation z_i^G(w) at price P_2(w).
A. Generator's Problem
We assume that each generator i is price taking, i.e., its decisions x_i^G and z_i^G(w) do not affect prices in either stage. Generator i's profit is therefore given by
π_i = P_1 x_i^G − a_i (x_i^G)^2 + P_2(w) z_i^G(w) − ã_i (z_i^G(w))^2.   (4)
Each generator is risk neutral, and so makes its first and second stage decisions to maximize the expectation of (4). In stage 2, given realization w and price P_2(w), generator i solves the following problem:
(GEN2_i)   max_{z_i^G(w) ≥ 0}  P_2(w) z_i^G(w) − ã_i (z_i^G(w))^2.   (5)
Let π_i^2(w, P_2) be the maximum objective value obtained in solving (5), given w and P_2. Then in the first stage, given price P_1, generator i solves the following problem:
(GEN1_i)   max_{x_i^G ≥ 0}  P_1 x_i^G − a_i (x_i^G)^2 + E[π_i^2(W, P_2)].   (6)
The term E[π_i^2(W, P_2)] is a constant when optimizing over x_i^G, as generator i's DA and RT decisions can be made independently. In order to emphasize the fact that generator i observes W = w prior to selecting z_i^G(w), we separate generator i's two optimization problems.
B. ISO's Problem
In Section III, our definition of a sequential competitive equilibrium includes a tuple of allocations, i.e., generation levels.
For the purposes of examining the welfare properties of these allocations, we now introduce a two stage social planner's problem (SPP), corresponding to our two settlement market. As in the static case, the SPP involves maximizing the social welfare of all market participants. We take the welfare of generator i to be the negation of its generation costs from stages 1 and 2. Given W = w, the aggregate social cost (the negation of aggregate welfare) is the sum of these costs over all generators,
c_SPP(w) = Σ_i [ a_i x̂_i^2 + ã_i ẑ_i(w)^2 ],   (7)
where (x̂_i, ẑ_i(w)) for all i and w are the social planner's decisions in stages 1 and 2. Define x̂ := (x̂_1, . . . , x̂_N), and similarly ẑ(w) := (ẑ_1(w), . . . , ẑ_N(w)). We assume that the social planner is risk averse. That is, instead of seeking to minimize the expectation of (7), they seek to minimize a weighted combination of E[c_SPP(W)] and CVaR_α(c_SPP(W)). α ∈ [0, 1) signifies that the ISO considers worst case or tail events with cumulative probability 1 − α to be "risky", and therefore weights them more heavily. We now introduce the additional parameter ε ∈ [0, 1], which gives the social planner's relative weighting of the overall expectation and CVaR_α of the first and second stage generation costs, and define the social planner's risk measure as
ρ_SPP(Z) := (1 − ε) E[Z] + ε CVaR_α(Z).
It can be shown that ρ_SPP(·) is a coherent risk measure [13].
Given that ŷ is the amount of renewable generation scheduled by the social planner in stage 1, and W = w, the social planner's second stage problem (SPP2) minimizes the aggregate second stage generation cost Σ_i ã_i ẑ_i(w)^2 over ẑ(w) ≥ 0, subject to the recourse balance constraint (10). Note that constraint (10) is an inequality in order to accommodate scenarios in which renewable generation exceeds residual demand. Define c_SPP2(x̂, w) as the minimum aggregate social cost achieved in the second stage, given x̂ and W = w. Then the social planner's first stage problem (SPP1) minimizes the sum of the first stage generation costs Σ_i a_i x̂_i^2 and ρ_SPP(c_SPP2(x̂, W)), where we have used the translation equivariance of CVaR_α to move the summed first-stage costs outside of ρ_SPP. We now argue that problems (SPP1) and (SPP2) can be combined into a single stage optimization problem, which we denote (SPP).
Similar to the equivalence demonstrated for the ISO's problem in Lemma 2, it can be shown that a single stage problem, denoted (GEN_i), is equivalent to (GEN1_i) and (GEN2_i), where z_i^G(·) : R_+ → R_+.
IV. SEQUENTIAL COMPETITIVE EQUILIBRIUM
In a single stage market for a single good, a competitive equilibrium is specified by a price P and quantity x such that, given P , producers find it optimal to produce, and consumers find it optimal to purchase, quantity x of the good. Thus, the market clears, i.e., demand equals supply.
To understand the outcome of the two-stage market, we consider a sequential version of competitive equilibrium.
Definition 6:
A sequential competitive equilibrium (SCEq) is a tuple (x*, z*(·), P*_1, P*_2(·)) such that, for all i, given P*_1 and P*_2(·), x*_i is optimal for (GEN1_i), z*_i is optimal for (GEN2_i), and there exists a y* such that supply equals demand in the first stage and, for every realization w, in the second stage. Note that in the SCEq definition, z*_i(·) and P*_2(·) are functions. We now investigate the existence of an SCEq in our two stage, risk aware setting.
Let μ̂(w) be the Lagrange multiplier corresponding to constraint (10). Writing the Lagrangian for (SPP2) and using feasibility yields the following optimality condition for problem (SPP2): 2ã_i ẑ*_i(w) = μ̂(w) whenever ẑ*_i(w) > 0 (23). Assuming ŷ > w, ẑ*_i(w) > 0 for all i, and in particular ẑ*_i(w) = μ̂(w)/(2ã_i); if ŷ ≤ w then ẑ*_i(w) = 0 for all i. Summing (23) over i, applying constraint (15), and rearranging gives μ̂(w) = 2ã(ŷ − w), where the constant ã is defined as ã := (Σ_i 1/ã_i)^{-1}. Summing the individual costs over i gives the optimal second stage objective value (i.e., the minimum recourse cost given x̂), c_SPP2(x̂, w) = ã([ŷ − w]_+)^2. Therefore, VaR_α(c_SPP2(x̂, W)) may be expressed as ã([ŷ − F_W^{-1}(1 − α)]_+)^2. Given this expression for VaR_α, the following lemma gives an explicit expression of CVaR_α for our quadratic cost function setting.
Lemma 3: Assuming first and second stage generation cost functions of the form ax^2 and ãz(w)^2, a, ã > 0, CVaR_α(c_SPP2(x̂, W)) can be expressed as
CVaR_α(c_SPP2(x̂, W)) = (1/(1 − α)) ∫_0^{θ̂(ŷ)} ã(ŷ − w)^2 f_W(w) dw,   (28)
where θ̂(ŷ) := min{ŷ, F_W^{-1}(1 − α)}.
Proof 2: Given Assumption 1, the cdf F_{c_SPP2} of the losses c_SPP2(x̂, W) will be continuous everywhere except possibly at zero, since P(c_SPP2(x̂, W) = 0) = P(W ≥ ŷ). By Theorem 6.2 of [13], when VaR_α(c_SPP2(x̂, W)) > 0, we may write CVaR_α as the conditional tail expectation of the losses exceeding VaR_α, computed against the pdf f_{c_SPP2} corresponding to F_{c_SPP2} (29). If VaR_α(c_SPP2(x̂, W)) = 0, then using Definition 5 we have CVaR_α(c_SPP2(x̂, W)) = (1/(1 − α)) E[c_SPP2(x̂, W)] (30). Substituting for c_SPP2(x̂, w) and then combining (29) and (30) completes the proof.
While CVaR_α(c_SPP2(x̂, W)) is convex in the first stage decision x̂ due to (25) and Theorem 1, the upper limit θ̂ of the integral in (28) is not a differentiable function of ŷ, so that the Leibniz integral rule does not directly apply. The next lemma addresses this issue.
Lemma 4: Given Assumption 1, expression (28) is continuously differentiable with respect to ŷ, with derivative
(2ã/(1 − α)) ∫_0^{θ̂(ŷ)} (ŷ − w) f_W(w) dw.
Proof 3: We consider two cases, depending on the two possible values of θ̂(ŷ). When θ̂(ŷ) = F_W^{-1}(1 − α), the upper limit does not depend on ŷ, and applying the Leibniz integral rule to (28) gives the stated expression directly. When θ̂(ŷ) = ŷ, applying the Leibniz integral rule produces, in addition, the boundary term ã(ŷ − θ̂(ŷ))^2 f_W(θ̂(ŷ)), which vanishes because θ̂(ŷ) = ŷ. Combining the two cases gives the expression in the lemma statement.
The derivative given in Lemma 4 is an affine function of ŷ when ŷ > F_W^{-1}(1 − α), since the upper limit of the integral is then constant. Using Lemma 4, problem (SPP) may be written as a single convex program with explicit power balance constraints for the two stages, (33) and (34). Locational marginal pricing (LMP) is a commonly used settlement scheme for economic dispatch problems, and previous work has examined extensions of LMPs to problems including two stage markets with recourse. In such models, the LMPs arise as the dual variables to the power balance constraints for each stage (in our setting, (33) and (34) in (SPP)). Previous work ([3], [15]) has demonstrated that such LMPs support a competitive equilibrium when the ISO or social planner is risk neutral, i.e., when ε = 0. We state this formally in terms of our setting in the following theorem.
Theorem 5: When ε = 0, there exists an SCEq. In particular, (x*, z*) are given by (x̂*, ẑ*) in the optimal solution to (SPP), and the equilibrium prices are given by the optimal dual variables (LMPs) associated with the first and second stage power balance constraints of (SPP).
Proof 4: Our setting with ε = 0 can be seen as a special case of that in [3]. The proof then follows from Theorem 1 in [3].
Theorem 6: If 0 ≤ ε < 1, then there exists a competitive equilibrium. In particular, (x*, z*) are given by (x̂*, ẑ*), the optimal solution to problem (SPP), and the equilibrium prices are given by (35) and (36).
Proof: In addition to feasibility, the optimality conditions for (32)-(34), which include (38), (39), (42) and (43) [13], and the optimality conditions for (GEN_i), which include (50)-(53), must hold. In view of optimality conditions (42) and (43), we choose the price schedule (35)-(36). Given these choices, and choosing x_i^{G*} = x̂*_i for all i and z_i^{G*}(w) = ẑ*_i(w) for all i and w, (50) and (51) become identical to (38) and (39), and (52) and (53) become identical to (42) and (43). Therefore x_i^{G*} = x̂*_i for all i and z_i^{G*}(w) = ẑ*_i(w) for all i and w, and the selected prices, together with (x̂*_i, ẑ*_i(w)) for all i and w, constitute an SCEq; we have thus shown the existence of an SCEq by construction.
Assuming ẑ*_i(w) > 0 for any i (and therefore for all i), the second stage price given in (36) can be rewritten in terms of the social planner's primal decision variables and the level of renewable generation. Rearranging the term in parentheses in (43) gives an expression for each ẑ*_i(w) in terms of μ̂(w) (54). Summing both sides of (54) over i and using constraint (15) gives an expression for μ̂(w) in terms of ŷ* and w (55). Thus, when 0 ≤ ε < 1, we have P*_2(W) = 2ã · [ŷ* − W]_+. Given that x̂*_i > 0 for any i (and therefore for all i), a similar calculation gives P*_1 = 2a_i x̂*_i for each i. We now address the case where ε = 1, as the prices given in the statement of Theorem 6 cannot be applied directly in the case where θ̂* < ŷ*. Consider a sequence {ε(k)} with lim_{k→∞} ε(k) = 1. Then, suppressing the dependence of μ̂(w) on ε, and taking the limit as k → ∞ on both sides of (55), the same price form is obtained in the limit. The limit lim_{k→∞} ŷ*(ε(k)) exists, as (SPP) may be solved for the case where ε = 1, and the optimal solution is unique given our assumptions on the generator cost function form.
Therefore, it still holds in the case where ε = 1 that P*_2(W) = 2ã · [ŷ* − W]_+, and in turn a competitive equilibrium is given by (x̂*, ẑ*(·), P*_1, P*_2(·)), where P*_1 = λ̂* and P*_2(W) = 2ã · [ŷ* − W]_+. Finally, we give the following lemma on the continuity of the equilibrium prices in ε.
Lemma 7: The equilibrium prices are continuous in ε on [0, 1].
In the proof of Theorem 6, it was shown that the SCEq prices arise as optimal dual solutions to (SPP). If we assume that the generators are not strategic, and that all participants know the distribution of W, then the following mechanism implements the SCEq: (1) Each generator i submits its cost function coefficients a_i and ã_i.
(2) The ISO solves (SPP), and announces the stage 1 price P*_1 and the stage 2 price schedule P*_2(·) as given by (36).
(3) Generator i solves (GEN1_i) and receives P*_1 x_i^{G*}.
(4) At the start of stage 2, the renewable generation output W = w is observed by the generators. Generator i solves (GEN2_i) and is paid P*_2(w) z_i^{G*}(w).
(5) Generator i produces x_i^{G*} + z_i^{G*}(w).
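To make the settlement concrete, the sketch below evaluates the payments in steps (3) and (4) for one realization of W, using the price forms P*_1 = λ* (here simply a given number) and P*_2(w) = 2ã·[ŷ* − w]_+, and assuming the quadratic ancillary cost ã_i z^2, so that each generator's real time response is z_i = P_2(w)/(2ã_i) and ã = (Σ_i 1/ã_i)^(-1). All numerical values are invented for illustration and are not outputs of (SPP).

```python
import numpy as np

# Hypothetical data: ancillary cost coefficients, stage 1 schedules x_i^G*,
# scheduled renewable y*, and stage 1 price P_1* (all invented for illustration).
a_tilde = np.array([0.8, 1.0, 1.2])
x_first = np.array([20.0, 25.0, 15.0])
y_star, p1 = 40.0, 30.0

a_bar = 1.0 / np.sum(1.0 / a_tilde)   # assumed aggregate coefficient (sum_i 1/a~_i)^-1

def settle(w):
    """Second stage price, real time dispatch and total payment for realization w."""
    shortfall = max(y_star - w, 0.0)
    p2 = 2.0 * a_bar * shortfall                 # P_2*(w) = 2 * a_bar * [y* - w]_+
    z_rt = p2 / (2.0 * a_tilde)                  # each generator's best response
    payment = p1 * x_first + p2 * z_rt           # steps (3) and (4) of the mechanism
    return p2, z_rt, payment

p2, z_rt, payment = settle(w=28.0)
print("P2 =", round(p2, 3), " RT dispatch:", np.round(z_rt, 3),
      " (sums to the shortfall of", y_star - 28.0, ")")
print("Payments:", np.round(payment, 2))
```

With these numbers the individual real time responses sum exactly to the renewable shortfall, illustrating how the chosen price schedule clears the second stage market.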
VI. CONCLUSION
In this paper we consider a two-stage electricity market model with a single customer and multiple generators, taking into account the risk preferences of the customer while assuming that the generators are risk neutral. Our goal has been to determine whether a sequential competitive equilibrium exists in such a market, given this discrepancy in risk attitude. We show that such an equilibrium does exist by formulating the risk aware stochastic economic dispatch market as a two-stage stochastic program, and solving this problem to determine equilibrium energy procurements and prices. The equilibrium prices directly reflect the social planner's risk attitude. Given these prices, we specify a market mechanism for implementation of the equilibrium, assuming that the generators are not strategic. In future work we will incorporate network topology, multiple consumers, strategic behavior by both generators and consumers, and general convex cost functions.
APPENDIX B
PROOF OF LEMMA 7
Let F(ε) denote the feasible set of (SPP), given parameter ε ∈ [0, 1]. From [14], local compactness (LC) of F at some ε is satisfied if there exists a δ > 0 and a compact set C_0 such that F(ε′) ⊆ C_0 for all ε′ ∈ [0, 1] with |ε′ − ε| ≤ δ. Observing that (SPP) is equivalent to a problem with the same objective and constraints, with the additional constraints that Σ_i x̂_i ≤ D, ŷ ≤ D and Σ_i ẑ_i(w) ≤ M for a large enough finite M, and that the feasible set of (SPP) does not depend upon ε, LC is satisfied for any ε ∈ [0, 1].
Due to the strict convexity of the first and second stage cost functions, the objective of (SPP) is strictly convex, so that when an optimal solution (x̂*, ẑ*(·)) exists, it is unique. Therefore, outer semicontinuity of the optimal primal solutions in ε is equivalent to continuity in ε. Since the equilibrium prices depend continuously on the primal solutions to (SPP), the prices themselves are continuous at any ε ∈ [0, 1].
|
2020-03-16T01:00:45.516Z
|
2020-03-01T00:00:00.000
|
{
"year": 2020,
"sha1": "3aa19f114fceff7105c8d2f96ffcc41f6833797d",
"oa_license": null,
"oa_url": "https://arxiv.org/pdf/2003.06119",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "3aa19f114fceff7105c8d2f96ffcc41f6833797d",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics",
"Computer Science",
"Engineering"
]
}
|
56169431
|
pes2o/s2orc
|
v3-fos-license
|
Accelerometry-Based Distance Estimation for Ambulatory Human Motion Analysis
In human motion science, accelerometers are used as linear distance sensors by attaching them to moving body parts, with their measurement axes aligned in the direction of motion. When double integrating the raw sensor data, multiple error sources are integrated as well, producing inaccuracies in the final position estimate that grow quickly with the integration time. In this paper, we make a systematic and experimental comparison of different methods for position estimation, with different sensors and in different motion conditions. The objective is to correlate practical factors that appear in real applications, such as mean motion velocity, path length, calibration method, or accelerometer noise level, with the quality of the estimation. The results confirm that it is possible to use accelerometers to estimate short linear displacements of the body with a typical error of around 4.5% in the general conditions tested in this study. However, they also show that the kinematic conditions of the motion can be a key factor in the performance of this estimation, as the dynamic response of the accelerometer can affect the final results. The study lays out the basis for a better design of distance estimations, which are useful in a wide range of ambulatory human motion monitoring applications.
Introduction
Since the seminal work by Morris [1], MEMS accelerometers and gyroscopes have been increasingly used for the real-time measurement of spatio-temporal parameters of body motion, due to their low consumption and cost and their easy connectivity. Specifically, they are the base component of wearable devices that measure linear and/or angular displacements of the human body, e.g., the shank rotation, the stride length, and the pelvis displacement. The stride length is needed, for example, in pedestrian navigation systems [2] or for clinical gait analysis [3]. The pelvis displacement is useful in rehabilitation or prosthetics, as an indicator of the metabolic cost of walking [4], to discriminate left and right steps [5], or to estimate the step length [6][7][8].
Although particularly convenient for real-time applications, these MEMS-based estimations suffer the problem of an unbounded error growth with time [9]. For angular displacements, the problem is usually addressed by the sensor integration of gyroscopes, accelerometers, and magnetic field sensors, by means of Kalman-filtering-like algorithms, forming inertial measurement units (IMU) [10].
However, for linear displacements, there is no unique accepted solution. The basic principle of accelerometers as linear position sensors is straightforward: the measured acceleration is converted to a linear position by doubly integrating the accelerometer data, that is, by adding up a noisy signal coming from a sensor supposedly aligned with a measurement axis. However, error grows too fast with time, making it mandatory to provide some kind of error compensation. Otherwise, the results can be unacceptable for most applications, even if integration is made in short time slots.
The main error source comes from sensor bias and the duration of the integration time. Bias is the difference between the sensor output and the true value, and it has multiple origins, both deterministic and stochastic. Deterministic bias sources are those that can be predicted, and possibly corrected by a per-device calibration. On the contrary, stochastic bias sources can be considered random and need to be compensated. They are probabilistically modeled, in the form of power spectral density or Allan variance, and they have a variety of origins: velocity random walk (additive white noise), bias instability (flicker noise), quantization noise, sinusoidal errors, rate random walk, and rate ramp [11]. A possible general model of the accelerometry measurement process is y(t) = S · a(t) + b(T) + N(a, T, t) + e(t), with y(t) the sensor output, a(t) the real acceleration to be measured, S the scale factor, b(T) a temperature-dependent bias, N(a, T, t) a time-dependent non-linear bias function, and e(t) the bias error caused by stochastic sources [12]. If a certain bias in acceleration is integrated, it will produce an error in distance estimation which grows quadratically with the integration time [13]. Integration time is therefore a key parameter to consider when studying distance estimation errors.
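As a minimal illustration of how a constant bias propagates through double integration, the sketch below integrates a synthetic, biased acceleration signal and compares the resulting position with the unbiased one; the sampling rate, bias magnitude and motion profile are arbitrary choices, not values from this study.

```python
import numpy as np

fs = 100.0                                   # sampling rate (Hz), arbitrary
t = np.arange(0.0, 1.6, 1.0 / fs)            # one short integration window (s)
true_acc = 0.5 * np.sin(2.0 * np.pi * t)     # synthetic cyclic acceleration (m/s^2)
bias = 0.05                                  # constant accelerometer bias (m/s^2)
measured = true_acc + bias

def double_integrate(acc, dt):
    """Rectangular double integration: acceleration -> velocity -> position."""
    vel = np.cumsum(acc) * dt
    pos = np.cumsum(vel) * dt
    return pos

dt = 1.0 / fs
drift = double_integrate(measured, dt) - double_integrate(true_acc, dt)
print(f"position error after {t[-1]:.2f} s: {100 * drift[-1]:.1f} cm "
      f"(~0.5*bias*t^2 = {100 * 0.5 * bias * t[-1] ** 2:.1f} cm)")
```

The printed drift closely follows the quadratic law mentioned above, which is why even short integration windows require bias compensation.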
In spite of the widespread use of this measurement technique in real-time human-motion-related applications, there is a lack of agreement about how to define and compensate its errors. The error-correction methods proposed in the literature are heuristic in nature and do not provide general design criteria. It is not clear how different error sources behave with integration time [9] or how to evaluate the relative weight of each source in the final results.
In this paper, we aim to shed some light on this problem by making a systematic comparison of different distance estimation methods with different sensors, to check if they are compatible with the demands imposed by ambulatory human motion monitoring applications: accuracy in real-time and robustness to various experimental kinematic conditions.
The presented results confirm that it is possible to use accelerometers to estimate short linear displacements of the body with a 4.5% error in typical conditions.However, it is also shown that the motion conditions, such as mean velocity and acceleration profiles, can be a key factor in the performance of this estimation, as the dynamic response of the accelerometer can affect the results.The study sets out the basis for a systematic design of distance estimations, and it provides a tool to better interpret their results depending on the experimental conditions.Section 2 will develop the distance estimation problem and its experimental conditions in the context of human motion science.Section 3 will detail the estimation methods selected and the design and conditions of the experiments.The results will be explained in Section 4 and discussed in Section 5.The main conclusions will be synthesized in Section 6.
Real-Time Distance Estimation for Cyclic Motion
In human movement science (HMS), most distance estimations of interest are not the result of a free linear motion, but rather they are related to cyclical or periodical motions, such as human walking. For example, the body center of mass moves up (vCOM) and sideways (mlCOM) to its maximum twice each stride in normal gait; another example is the distance traveled by one foot between two consecutive heel strike events, which defines the stride length (SL).
Several distance estimation methods have been proposed in the human movement science literature. In spite of its known limitations, the simple direct cumulative double integration (CMS) method has been applied to estimate the stride length in devices for the ambulatory analysis of gait [14]. We will test this method as a worst-case scenario for distance estimation. Other methods, see Figure 1, try to mitigate the growth of error caused by the direct signal integration, because they are noticeable even within the short time integration periods of interest (0.4-1.6 s). In Ref.
[15], the direct method is slightly modified for both horizontal and vertical motion of the foot, to estimate the step length. Prior to the second integral, a linear resetting mechanism is applied to the velocity by weighting it linearly between 1 and 0 during the integration time (LRI method). This ensures that v(T) = 0, a biomechanical property that both foot velocities were assumed to have. The authors claim that the aim of the method is to remove measurement error due to noise and drift.
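A minimal sketch of one plausible reading of the LRI scheme follows; the linear ramp and the rectangle-rule integration are simplifications made here for illustration and may differ from the exact formulation in Ref. [15]:

import numpy as np

def lri_distance(acc, fs):
    # Linear-resetting integration (LRI): weight the velocity with a ramp from 1 to 0
    # over the window so that the weighted velocity is forced to zero at t = T.
    dt = 1.0 / fs
    vel = np.cumsum(acc) * dt
    ramp = np.linspace(1.0, 0.0, len(vel))   # linear weights: 1 at the start, 0 at the end
    pos = np.cumsum(vel * ramp) * dt
    return pos[-1]                           # displacement estimate at the end of the window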
An integration with mean subtraction (MSI method) is proposed in Ref. [8] to estimate zCOM displacements from COM accelerations, and in Ref. [16] for foot displacements, both in the context of gait analysis. The idea was to remove the non-zero mean value ā_s ≠ 0 caused by sensor artifacts and uncertainties, prior to the first velocity integration. The mean value of the acceleration between two consecutive integration events (ipso-lateral heel strikes), ā_s, was subtracted, and a first integration was made to estimate the instantaneous velocity. Notice that forcing the acceleration to have a zero mean value is equivalent to assuming that the velocity is the same at the beginning and the end of the segment, for example, if the motion comes to rest at the end of the period, ā_s being the average acceleration in the segment period, ā_s = (1/T) ∫₀ᵀ a_s(τ) dτ. In Ref. [8], the mean subtraction was applied twice in the particular case where the position is the same at the beginning and end of the integration period, but this case will not be addressed here.
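A minimal sketch of the MSI idea, under the zero-mean assumption just described, could be (names and the rectangle-rule integration are illustrative):

import numpy as np

def msi_distance(acc, fs):
    # Mean-subtraction integration (MSI): remove the segment-mean acceleration
    # (equivalent to assuming equal initial and final velocity) before integrating twice.
    dt = 1.0 / fs
    acc_zm = acc - acc.mean()      # force zero mean over the integration segment
    vel = np.cumsum(acc_zm) * dt
    pos = np.cumsum(vel) * dt
    return pos[-1]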
In Ref.
[17], a technique is presented to estimate the stride length by integration of the body antero-posterior (forward) acceleration, measured at the pelvis. It is denominated the optimally filtered integration (OFI method). To reduce the low-frequency acceleration noise, at each gait cycle the acceleration signal was high-pass-filtered with a 2nd order Butterworth filter with a cut frequency of f_c Hz, calculated from data of a given gait cycle with known equal initial and final velocities. After filtering, a first integration produced the instantaneous velocity. Following Ref. [18], the numerical method chosen to integrate the data was the Cavalieri-Simpson rule. If the so-estimated final velocity is close to the initial velocity, they are assumed to be equal, and a weighted direct and reverse integration is applied, aimed at reducing the time drift in velocity. Notice that this method, contrary to the LRI and MSI methods, did not need a general assumption about the velocity at the end of the integration time, because the authors were dealing with a biomechanics problem (apCOM) with different characteristics.
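The following sketch illustrates one possible implementation of the OFI steps described above; the cut-off frequency fc, the zero-phase filtering, the rectangle-rule integration, and the linear forward/reverse weighting are assumptions made here for illustration and may differ from the exact formulation in Ref. [17]:

import numpy as np
from scipy.signal import butter, filtfilt

def ofi_distance(acc, fs, fc=0.5):
    # Optimally filtered integration (OFI), simplified:
    # 1) high-pass the acceleration (2nd-order Butterworth) to remove low-frequency noise;
    # 2) integrate forward (v(0) = 0) and in reverse time (v(T) = 0);
    # 3) blend both velocities with linear weights to damp the drift, then integrate again.
    dt = 1.0 / fs
    b, a = butter(2, fc / (fs / 2.0), btype="highpass")
    acc_hp = filtfilt(b, a, acc)
    v_fwd = np.cumsum(acc_hp) * dt                  # direct integration
    v_rev = -(np.cumsum(acc_hp[::-1]) * dt)[::-1]   # reverse integration
    w = np.linspace(0.0, 1.0, len(acc_hp))          # weight shifts from direct to reverse
    vel = (1.0 - w) * v_fwd + w * v_rev
    return (np.cumsum(vel) * dt)[-1]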
A different method is proposed in Ref. [19], named de-drifted integration (DDI method). It encompasses the subtraction of a weighted mean function of the data samples prior to each of the two integrations. In this case, it was applied to instep foot accelerations during a stride time cycle, with no previous filters, to estimate the stride length. The acceleration drift function was computed by averaging a few initial and final accelerometry samples and removing a linear interpolation between both values from the original signal. The size of both sets of samples was empirically selected. The velocity drift function was based on the assumption of zero velocity at the beginning and end of the step, and a linear function from zero to the mean of the last velocity samples was subtracted from the velocity to achieve it.
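A minimal sketch of the DDI procedure follows; the number of edge samples used for the averages (n_edge) is an illustrative choice, since Ref. [19] selected it empirically:

import numpy as np

def ddi_distance(acc, fs, n_edge=10):
    # De-drifted integration (DDI):
    # 1) subtract a line joining the means of the first and last acceleration samples;
    # 2) integrate to velocity;
    # 3) subtract a ramp from 0 to the mean of the last velocity samples (zero-velocity ends);
    # 4) integrate to displacement.
    dt = 1.0 / fs
    n = len(acc)
    a0, a1 = acc[:n_edge].mean(), acc[-n_edge:].mean()
    acc_dd = acc - np.linspace(a0, a1, n)
    vel = np.cumsum(acc_dd) * dt
    vel_dd = vel - np.linspace(0.0, vel[-n_edge:].mean(), n)
    return (np.cumsum(vel_dd) * dt)[-1]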
Other methods have been proposed in human motion analysis for distance estimation that will not be included in this study.In Ref. [3], zCOM and mlCOM displacements are estimated in two steps: (1) filtering the sensor data before the double integration with a low-pass 4th order zero-lag Butterworth filter, at a 20 Hz cut frequency, to remove acceleration noise and (2) filtering the distance estimation after the double integration with a high-pass 4th order zero-lag Butterworth filter, at a 0.1 Hz cut frequency, to remove its long-term time drift.This drift filtering has to be made in longer time intervals (30 s in Ref. [3] or 12-18 s in Ref. [7]).These are useful solutions for off-line data analysis, but not feasible for real-time step-to-step applications.
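For completeness, a sketch of this off-line two-stage filtering (following the description of Ref. [3] above) is given below; whether the stated 4th order refers to the single-pass or to the combined forward-backward response is an implementation detail that is simply assumed here:

import numpy as np
from scipy.signal import butter, filtfilt

def offline_displacement(acc, fs):
    # Off-line approach: low-pass before integrating, then high-pass the resulting
    # displacement to remove its long-term drift. Needs long records (tens of seconds).
    dt = 1.0 / fs
    b_lp, a_lp = butter(4, 20.0 / (fs / 2.0), btype="lowpass")
    acc_f = filtfilt(b_lp, a_lp, acc)                 # zero-lag low-pass at 20 Hz
    pos = np.cumsum(np.cumsum(acc_f) * dt) * dt       # double integration
    b_hp, a_hp = butter(4, 0.1 / (fs / 2.0), btype="highpass")
    return filtfilt(b_hp, a_hp, pos)                  # zero-lag high-pass at 0.1 Hz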
Materials and Methods
To evaluate and compare the estimation methods presented, a set of experiments was designed.Inertial sensors were attached to a robot arm programmed with a set of motions.The robot position was stored synchronized with the sensor readings.The data were processed with all methods, and their results were evaluated against the actual robot displacement, which provided the ground truth.
The robot was a six-degree-of-freedom industrial manipulator IRB-120 from ABB, reaching 580 mm with a maximum payload of 3 kg and a position repeatability of 0.01 mm.It was fixed to a workbench and previously calibrated.This robot provided a worst-case linear trajectory repeatability of 0.07 mm (a nominal maximum load of 3 kg and a linear motion with six axes at 1600 mm/s [20]).
To calculate cyclical displacements in real-time for ambulatory applications, they have to be estimated in one stride or step time period.This time span, or integration time, is very short.For a wide population of 95% of healthy men and women, aged from 13 to 80, the step time in walking can be as long as 0.8 s (a cadence of 75 steps/min), and it can decrease to 0.4 s (cadence of 150 steps/min) when the walking velocity increases [21].This can occur at free-speed walking velocities in ranges approximately from 0.8 to 1.8 m/s.
Likewise, these parameters are subject to certain kinematic restrictions. For example, the stride length (SL) can be measured from the foot [19] or from the antero-posterior displacement of the body center of mass (apCOM) [17]. The average stride length (or two steps) increases with walking speed, varying in the interval of 94-185 cm for the same wide population and the same walking speed ranges mentioned previously [21]. These motions are made with accelerations around 2-5 g [22]. Likewise, the trunk vertical excursion (vCOM) is known to increase with walking velocity, roughly from 2.2 to 6 cm at cadences between 66 and 120 steps/min [23], or step times of 0.5-0.9 s. It is also reasonable to hypothesize that the vCOM initial and final positions at every stride are the same, which makes its average velocity zero. Inversely, the COM medio-lateral displacement (mlCOM) decreases with walking velocity, in the range from 8 cm at 66 steps/min to 2.4 cm at 120 steps/min [24]. All these variables affecting the estimation of displacements of interest in HMS are summarized in Table 1, assuming a duration of a gait cycle (step) in the range of 0.4-0.8 s. From these considerations, it is clear that the problem of MEMS-based distance estimations in human motion science is relevant for integration times from 0.4 s (a fast step time) to 1.6 s (a slow stride time). Within this time range, motions of interest can be divided into two groups: short range motions of a few centimeters (COM-type) occurring at mean velocities between 1 and 20 cm/s, and large range motions (step-like) with mean velocities ranging between 60 and 230 cm/s. These numbers will be used to define eight experiments covering both types of kinematic conditions.
The robot was programmed to make eight types of linear vertical motion in the Cartesian space, which tries to resemble the previously discussed scenario of distance estimation in typical human motion applications, summarized in Table 2.A short (4 cm) linear trajectory was executed at increasing speeds {1, 2.5, 5, and 10} cm/s.Similarly, a large (40 cm) trajectory was made with speeds of {10, 25, 50, and 100} cm/s.The short trajectory resembles the conditions of COM-like displacements, and the large trajectory of step-like (half the stride) displacements.Each experiment was repeated 50 times to provide statistical significance.The motion parameters were chosen also to provide the same four integration time scenarios in both trajectories, {4, 1.6, 0.8, and 0.4} s, relevant in the context of biomechanics, as discussed in the previous section.The motion was recorded with margins of 0.05 s before and after it was made, to clearly capture the whole motion, so the effective integration time applied was {4.1, 1.7, 0.9, and 0.5} s.In the case of the fastest motion 100 cm/s (Experiment 8), the integration time was incremented further to 0.625 s forced by the physical limitations of the robot.The Experiments 1 and 4 imply an integration time of τ = 4.1 s, which is outside the scope of interest of this study.They will be used as a reference, to understand how the conclusions obtained for τ under 2 s extrapolate to larger integration times.A custom-designed end effector for the robot was used, which allows one to simultaneously test several accelerometer units, see Figure 2. The units were rigidly attached to the end effector and statically calibrated.Individual calibration was performed with the sensor devices attached and at rest, in the middle point of the linear trajectory.
Three different accelerometers were tested simultaneously, and their nominal characteristics are synthesized in Table 3. Their noise density σ c given by the manufacturer was taken as a first indication of the sensor quality.The Accel.XS is part of an MTi IMU device from Xsens R with accelerometers from ADXL206 [25,26].The Accel.LS is integrated in an IMU unit from Shimmer R with a low-noise analog accelerometer Kionix model KXRB5-2042, which has a better noise performance than the previous model.The Accel.WR is a wide-range accelerometer also included in the previous device, with an LSM303DLHC digital accelerometer from STMicro with the highest nominal noise level of the three [27].
In the three cases, the user can read the raw uncalibrated sensor data with a sampling frequency of f_s Hz. However, the setup of the internal signal conditioning, the anti-aliasing filter, and the sampling by the dedicated CPUs are not always available, so it is difficult to know the effective sensor bandwidth f_c. In the table, a reference value is given, as suggested by the IMU manufacturers, together with its corresponding RMS noise density level σ_d. In our experiments, sensor data were recorded at the maximum sampling rate available, f_s = 120 or 512 Hz, depending on the device. Once the sensor was fixed to the end effector, an individual static calibration of its orientation was required. This reorientation was made with the TRIAD method, using the magnetometer data with the robot at rest [28]. The directions of the earth's gravitational and magnetic fields were measured in local (sensor) coordinates as two vectors {m_g, m_m}, which were converted to a local triad H_L = {u_1, u_2, u_3}. The same process was applied to the reference field directions in global coordinates to calculate a global triad H_G, and both of them allowed us to compute a re-orientation matrix G R_L such that H_G = G R_L · H_L. The gravitational field is much more robust and reliable than the magnetic field as a measurement reference, and, for that reason, the robot moved precisely along the vertical axis, and the raw data were reoriented with the matrix G R_L. No other data curation was done.
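Since the exact equations of the triad construction are not reproduced above, a sketch of the standard TRIAD formulation is given here; the ordering of the cross products follows the textbook convention and may differ from the one actually used:

import numpy as np

def triad(v1, v2):
    # Orthonormal triad {u1, u2, u3} built from two non-parallel vectors (TRIAD method).
    u1 = v1 / np.linalg.norm(v1)
    u2 = np.cross(v1, v2)
    u2 = u2 / np.linalg.norm(u2)
    u3 = np.cross(u1, u2)
    return np.column_stack((u1, u2, u3))

# H_L from the sensor readings at rest (gravity m_g and magnetic field m_m, local frame),
# H_G from the reference directions of both fields in the global frame:
#   H_L = triad(m_g, m_m);  H_G = triad(g_ref, m_ref);  G_R_L = H_G @ H_L.T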
The controlled motion of the robot generated an acceleration profile that is similar in the eight experiments described in Table 2.It started with a short acceleration peak, a long period of zero acceleration corresponding to the constant velocity motion, and an inverse symmetrical peak of deceleration to reach zero velocity again, see Figure 3 Left.Each of the eight experiments in Table 2 was repeated a number of times M = 50.The M signal segments, sampled with frequency f s , were stored and inspected to find artifacts that can occur if samples are lost during a repetition.This was the case for Accel.LS and Accel.WR when sampling at high frequencies (512 Hz).All the signals were inspected to remove those that were non-useful, leading to the discarding of six and seven in 400 repetitions, respectively.
Each of the valid M blocks formed a data set, each having N samples, accounting for τ = N/f_s s, which corresponds to the integration time for the estimations. Within each block, the data were integrated to obtain a linear position with the five distance estimation methods in Table 4. The error in the last sample (Nth sample), ε(k, N), was calculated for each block k ∈ M. Then, its root mean square (RMS) value over the M blocks, ε(N) = [(1/M) ∑_k ε(k, N)²]^(1/2), is used to evaluate the response (Equation (3)). The complete methodology is represented in Figure 3.
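The evaluation loop can be summarized by the following sketch, where estimator stands for any of the methods in Table 4 (all names and units handling are illustrative):

import numpy as np

def evaluate_method(blocks, fs, ground_truth_cm, estimator):
    # Apply one estimator to each of the M valid acceleration blocks and aggregate
    # the final-sample errors epsilon(k, N) as a root mean square, as in Equation (3).
    errors = []
    for acc in blocks:                                # blocks: list of 1-D arrays (m/s^2)
        d = estimator(acc, fs)                        # displacement profile or final value (m)
        d_final = d[-1] if np.ndim(d) else d
        errors.append(d_final * 100.0 - ground_truth_cm)
    return np.sqrt(np.mean(np.square(errors)))        # RMS of the final-sample errors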
The Effect of Noise in Distance Estimation
A first experiment was conducted in order to evaluate how the different methods cope with sensor noise. To do that, acceleration data were taken from stationary sensors with a sampling frequency f_s, during 36.95, 37.89, and 50.45 min for each sensor, respectively. Then, the data were segmented into groups of fixed time length τ = {0.1, 0.2, 0.4, 0.8, 1.6, 2, 3, 4} s. This generated 554, 568, and 756 data groups for the 4 s integration time, respectively. The same number of independent groups was used for the rest of the integration times. The estimation methods of interest in Table 4 were applied, and the RMS value of the maximum distance observed in the period, ε(N) in Equation (3), was computed. The results are stored in Appendix A, Table A1. Figure 4 represents these results for each method, with different markers for the three sensors. The lines represent a polynomial interpolation made with the four central data points corresponding to integration times of T = {0.2, 0.4, 0.8, 1.6} s. The model is a cubic polynomial in the square of the RMS value, RMS² = at³, as proposed in Ref. [29]. The goodness-of-fit statistic used to evaluate the model is the sum of squares due to error (mm, in parentheses). The increase in the estimation error with the simple double integral or CMS method, Figure 4a, follows the expected pattern of cubic growth with time. This growth is correlated to the sensor noise level stated in the datasheets (Table 3), the XS accelerometer being the one with the slowest RMS growth, followed by the LS and lastly the WR sensor. We will denote this sensor classification with (XS > LS > WR) rms CMS (meaning better results with respect to the RMS error in the distance estimation with the CMS method).
For large integration times (>2 s), the cubic model of the CMS method underestimates the RMS error for every sensor. For example, with τ = 3 s, the model underestimates the experimental error magnitude by 12.6%, 22.5%, and 17.1% for the XS, LS, and WR sensors, respectively. However, this result is compatible with the literature, as will be discussed in the following section. The LRI method, Figure 4b, although reducing the error magnitude compared to simple CMS, has a similar qualitative response.
On the contrary, the other three methods in Figure 4 exhibit a more profound modification of the CMS-LRI response to noise, as revealed by a much better model fit and a better prediction of error for large integration times (>2 s).We will refer to the three together as the three main methods (3MM).These methods succeed in their objective of reducing the RMS error growth even for the smallest integration time of t = 0.1 s.Two of them, methods OFI and MSI in Figure 4c, show a very similar quantitative response, their error being less than one-third of the CMS error for 4.1 s of integration time.The error-reduction effect of the DDI method, Figure 4d, is smaller, around half the CMS error for the same 4.1 s mark.The three methods make the Accel.LS sensor behave numerically better than the Accel.XS, a fact which is also reflected in their respective cubic error model.That is, they change the sensor ranking to (LS > XS > WR) rms 3MM .
The Effect of Velocity in Distance Estimation
The five distance estimation methods were applied with the sensors in motion at different velocities, for short distances (4 cm) and larger distances (40 cm), as prescribed in Table 2.The RMS error value was computed and the results are shown in Appendix A, Tables A2-A4 for the short distance experiments, Experiments 1-4, and in Tables A5-A7 for the large distance experiments, Experiments 5-8, respectively.To visualize these results, we represent the four data points corresponding to integration times of T = {0.5, 0.9, 1.7, 4.1} s, for both short and large range motions, in six diagrams in Figure 5, two for each sensor.
In the case of short range motions, the three data points T = {0.5, 0.9, 1.7} s within the time frame of interest (up to 2 s) are used to compute an interpolation, using a cubic polynomial model similar to that in the previous section, RMS² = at³ + b, where the added constant term b represents the significance of the error even for very small integration times. That term was neither meaningful nor necessary in the static sensor experiments. As in the static case, the goodness-of-fit is evaluated with the sum of squares due to error statistic. The last data point from the experiments (4.1 s) is used to test how the model extrapolates to large integration times. For large range motions, a simpler linear interpolation model is proposed, RMS = at + b. We will comment on the meaning and significance of these models in the discussion section.
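Both interpolation models are linear in their coefficients, so they can be fitted by ordinary least squares; the sketch below illustrates this (the fitting procedure itself is not detailed in the text, so this is only an assumption about how such fits can be reproduced):

import numpy as np

def fit_cubic(t, rms):
    # Short-range model: RMS^2 = a*t^3 + b, linear least squares in the basis (t^3, 1).
    t, rms = np.asarray(t, float), np.asarray(rms, float)
    A = np.column_stack((t ** 3, np.ones_like(t)))
    a, b = np.linalg.lstsq(A, rms ** 2, rcond=None)[0]
    return a, b

def fit_linear(t, rms):
    # Long-range model: RMS = a*t + b.
    a, b = np.polyfit(np.asarray(t, float), np.asarray(rms, float), 1)
    return a, b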
For the sake of clarity, we will eliminate the CMS and LRI methods from the graphics.The reason is that a mere inspection of the six tables reveals that the simple double integration CMS is clearly inferior to the other methods since it is not designed to limit the error growth.This naive approach to distance estimation produces a greater growth of error with the sensors XS < LS < WR, the same order as in the static case.It is also noticeable that the LRI method gives very bad estimations for all sensors at all integration times.
Distance Estimation with Accel.XS
Starting with the Accel.XS sensor, the results for short range motions (4 cm) in Table A2 are represented in Figure 5a. A smaller RMS error occurs with the OFI method for the three integration times of interest (0.5, 0.9, and 1.7 s), with the greatest percentage error occurring at 1.7 s, with values of 3.1% (MSI), 2.85% (OFI), and 4.46% (DDI). The cubic interpolation shows a smaller constant value b in the OFI model, suggesting that this method could present a more robust behavior at short integration times. The model adjustment is good for the three methods, slightly better for the DDI method. For larger integration times, T > 2 s, the models predict smaller errors with the MSI method and larger errors with DDI. However, the experiments at T = 4.1 s indicate a similar outcome for the three methods (RMS ≈ 0.47 cm), resulting in large model prediction errors: −57% (MSI), −23% (OFI), and 26% (DDI). The behavior of the Accel.XS sensor in large range motions (40 cm) in Table A5 is represented in Figure 5b. Again, the OFI method performs best for short integration times. The greatest percentage error occurs at 1.7 s, with similar values for the three methods: 3.61% (MSI), 3.32% (OFI), and 3.49% (DDI). The three models predict a very similar extrapolation value for large integration times (T = 4.1 s), as in fact happens in the experiments, but with an overestimation close to 90%. The RMS error at 4.1 s is even smaller than the error with only 1.7 s of integration, a counterintuitive result that will be discussed in the next section.
Distance Estimation with Accel.LS
The Accel.LS device performs worse than the Accel.XS with the simple CMS method at low velocities (see Table A3).However, as in the case of a stationary sensor, the three main methods make it behave better when used as a distance estimator.The OFI method has better results in two of the three integration times (0.5 and 0.9 s), Figure 5c, with the greatest percentage error occurring at 1.7 s with values of 2.49% (MSI), 2.40% (OFI), and 2.34% (DDI).The cubic interpolation fitness is even better than with the Accel.XS sensor.For larger integration times, where the three methods produce very similar results in the experiments (RMS ≈ 0.21 cm), the model predictions are better than those of the Accel.XS sensor: 5% for MSI, −11% for DDI, and 23% for the OFI method.
The Accel.LS sensor in large range motions (40 cm), Table A6 or Figure 5d, produces results qualitatively similar to the XS device and are better numerically for every method and integration time.The greatest percentage error occurs at 1.7 s with values of 2.74% (MSI), 2.66% (OFI), and 2.75% (DDI).Additionally, the linear model shows a bad adjustment for a large integration time 4.1 s, with the estimation errors similar to those of the three methods (RMS ≈ 1.02 cm), but an overestimation close to 100%.
Distance Estimation with Accel.WR
The Accel.WR device is the lowest cost device of the three tested, with a noise level and a measurement range around four times larger than the previous Accel.LS device.This is confirmed with the simple CMS method in a short distance or with low velocities, Table A4 or Figure 5f, whose RMS errors duplicate those of the previous device.The three main methods reduce RMS levels as expected, presenting almost identical results, with the greatest percentage error occurring at 1.7 s with values of 3.00% (MSI), 2.96% (OFI), and 2.89% (DDI).The cubic model shows a very good adjustment, and the results in T = 4.1 s, which tend to be similar to those of the three methods (RMS ≈ 0.39 cm), present a deviation from the model prediction by 5% (MSI), 9% (OFI), and 5% (DDI).
The estimations for large range or higher speed motions, compiled in Table A7 and represented in Figure 5g, are the worst from the three sensors, for all integration times.Unexpected results occur with the smallest (t = 0.5 s), Experiment 8 at 100 cm/s, which showed a very low RMS value (≈0.2 cm), as well as the following at 50 cm/s, with an abnormally high value (≈2.3 cm).This behavior is independent of the 3MM method applied.
That is, the greatest error occurs at 0.9 s instead of 1.7 s, with values of 5.90% (MSI), 5.94% (OFI), and 5.94% (DDI). At 1.7 s, the error decreases to around 3.6% for all methods. Because of this inconsistency in the error behavior, the linear model fitness is even worse than that with the other two sensors. An explanation for these anomalies will be given in the following section.
The Effect of Noise in Distance Estimation
The growth of the distance error from an accelerometer at rest can only be explained as the effect of stochastic noise sources in the sensor. The error caused by noise integration can be considered a lower bound for distance estimation. This effect has been analyzed thoroughly by Thong et al. [9]. They propose a mathematical model of the growth of the RMS error with integration time T, which depends on three parameters: the sampling frequency f_s (or number of samples N = f_s T), the cut frequency of the internal anti-aliasing filter of the device, f_c, and the sensor noise power spectral density as given approximately by its datasheet, σ²_c. After some simplifications (see Appendix B), the model can be reduced to a cubic polynomial in T for the square of the RMS error of the estimated distance d, with x = f_c/f_s being the operation point, which depends on the specific choice of cut-off and sampling frequencies, and with a function f(x) that represents a correction factor stepping down the sensor analog noise level in the datasheet, σ²_c, to a discrete equivalent C_σ, which depends on the operation point. Notice that the Accel.XS sensor has a smaller value of its f(x) coefficient than the other two sensors (20% smaller).
The Thong model presented good agreement with experimental data at up to 1 s of integration, and it started to progressively underestimate the positional error with integration time, reaching a 22% error at T = 3.33 s [29]. We reported similar results for the CMS method in Figure 4a, with the three sensors. The experimentally adjusted model, RMS(d)² = at³, shows coefficients a = 0.55, 0.94, and 3.95 for each sensor, respectively. These values correlate with their respective digital noise levels σ_d in Table 3, taking into account the smaller value (20%) of the Accel.XS f(x) coefficient.
The results also indicate that the three main methods show a smaller RMS error growth for any tested sensor, as expected.The DDI method is second-class from this point of view, with errors 100% greater.
As far as the model is concerned, the three main methods show an improved prediction capability. For example, at the 3 s distance estimation mark, the error underestimation with the CMS method was 12.6%, 22.5%, and 17.1% for the three sensors XS, LS, and WR, respectively. These numbers decrease to 6.7%, 14.7%, and 1.9% for the MSI method, 8.2%, 16.8%, and 3.1% for the OFI method, and 7.9%, 2.3%, and 2.1% for the DDI method. That capability could be useful for making model-based corrections of distance estimations in applications needing larger integration times than those addressed in this work.
Another effect found with the three error-reduction methods is to make the Accel.LS perform better than the Accel.XS sensor.What is expected from the standard deviation of the sampled data specification, σ d in Table 3, is a slower RMS growth of the latter, as is the case applying the CMS method.The reason why this tendency is reversed by those methods is not clearly understood.One explanation could be related to the parameter σ d , which comes from applying a bandwidth parameter f c that is not clearly defined.If the real bandwidth were closer for both sensors, the lower analogue standard deviation σ c in the Accel.LS sensor could explain the results, except for the CMS method.The different sampling rate used with both sensors could be the root of this result, but this possibility was discarded with additional experiments.
The Effect of Motion in Distance Estimation
If for a static sensor its performance as a distance estimator correlates to the sensor quality, as given by its noise level and bandwidth, for a sensor in motion other factors are more relevant.In this work, we defined eight motions, grouped in two sets with integration times (0.5, 0.9, 1.7, and 4.1 s): short distance motion (4 cm) traversed at decreasing average speeds (10, 5, 2.5, and 1 cm/s) and long distance motion (40 cm) traversed at higher speeds (100, 50, 25, and 10 cm/s).The results reveal that distance estimation shows remarkable differences in its qualitative behavior for both groups.
The Case of Short Distances or Low Average Speed Motion
For short distance experiments (4 cm), the CMS method generates a ranking of sensors similar to that one in the static case, (XS > LS > WR) rms CMS .It might suggest that the XS sensor is, in principle, the best election for short-distance estimation.However, for the three main noise-reduction methods, which presented a very similar response among them, the sensor ranking reorders to (WR > LS > XS) rms 3MM for very short integration times (0.5 and 0.9 s), and to (LS > WR > XS) rms 3MM for times (1.7 and 4.1 s).
This confusing behavior can be explained considering that the RMS value aggregates two magnitudes: the mean error and the data variance.For most combinations of methods and experiments, the Accel.WR device shows a greater experimental variance than Accel.LS, (LS > WR > XS) var 3MM .
However, Accel.WR also exhibits a smaller mean error than the other two sensors in most cases (except at the 4.1 s data point).For short integration times (0.5 and 0.9 s), the variance is not large enough to degrade the better mean error, and it prevails in the final RMS value.From 1.7 s onwards, this is not the case, and both the mean and variance errors are better in the Accel.LS sensor, as is the RMS value.Figure 6 illustrates this combined effect of mean and variance.The same grounds explain the misleading RMS results with the CMS method, whose large mean errors disguise the real variance growth with time, (LS > WR XS) var CMS .
Figure 6.Estimated short range distance profiles for sensors Accel.LS (left) and Accel.WR (right), for the four integration times and the MSI method.For short integration times (0.5 and 0.9 s), a better mean error of Accel.WR masks its greater variance in the final RMS value.Example at 0.9 s: Accel.WR: rms = 0.05 cm, mean = 0.02 cm, var = 0.05 cm; Accel.LS: rms = 0.09 cm, mean = 0.09 cm, var = 0.02 cm.
The conclusion is that the LS sensor is the best election for distance estimation.However, the behavior of the mean error is still to be explained, which could make the WR sensor preferable over LS when mean motion velocities are faster (>2.5 cm/s) or integration times are short (<1 s).Notice that these conditions can be frequently met in biomechanics applications related to COM measurements.
The three main methods tend to produce similar estimations for a given sensor.The reason for that is that the DDI method tends to be similar to MSI for long integration times, because the level removal converges to a mean subtraction.The OFI method appears to be slightly better for short integration times, but its complete estimated distance profile is unrealistic, as Figure 7 reveals, calling into question the possible superiority of the method.Another conclusion is that the cubic model of the RMS growth is only an effective predictor for a larger integration time (4.1 s) in the case of the WR sensor.It is also true with the combination LS sensor/MSI method.How to use this for model-based error reduction is not clear and needs further research.
The Case of Long Distances or High Average Speed Motion
The results for long distance experiments (40 cm) are more troubling.To start, the simple CMS method does not allow a clear classification of the sensors from the RMS error point of view.The pattern of the variance growth is the same as that in short range, (LS > WR > XS) var all .Thus, the RMS differences with the CMS method must come from a more dispersed behavior of the mean error.
The three main methods, which again presented a similar response among them, rank the sensors differently: for 0.5 s integration times the order is (WR > LS > XS) rms 3MM , and for times (0.9, 1.7, and 4.1 s) it is (LS > XS > WR) rms 3MM .That is, for all cases under 50 cm/s, the LS outperforms the WR sensor as before.Even the XS sensor performs better, because WR is no longer the sensor with the smallest mean error, and the mean value produces (XS > LS > WR) avg 3MM .As a consequence, the possibility that the WR could be preferable is restricted only to Experiment 8 at 100 cm/s.The WR sensor has no saturation problems, but its deceleration peak seems truncated around −10 m/s 2 in both experiments, see Figure 8.The reason for this is not clear, and further research is required.The XS sensor in Experiment 7 shows a large deceleration peak, which could correspond to the 30 Hz filtered real deceleration.In Experiment 8, the peaks are lower, but that can be explained by the fact that, at 100 cm/s, the robot works within its acceleration limits, and the change in velocity takes 0.3 s instead of 0.1 s, greatly reducing the effect of the sensor-limited bandwidth.Hence, the three sensors registered different motions in the last two experiments (Experiments 7 and 8).A measure of this difference is the mean value of the accelerometer peaks in the experiments, Table 5.For Experiments 1 to 6, the mean peak is similar for LS and WR sensors, and it is downscaled for the XS sensor.This is so because the Accel.XS has a bandwidth of 30 Hz, and the robot's change in velocity occurs at ≈0.1 s, implying that acceleration has 10 Hz components that are attenuated by its filter.This can explain why the Accel.XS, which has a good performance in static, ranks last in motion-related experiments.
However, in Experiments 7 and 8 the mean deceleration peak is very different for the three devices.The actual motion profile carried out by the robot was the same for all sensors, so they failed to register the actual acceleration.Under those circumstances, rejecting those two experiments from the point of view of a fair evaluation of methods is unavoidable.The conclusion to be drawn is that the two first data points in Figure 5b,d,f are invalid, so we do not have enough data to build a model to predict errors for larger (>2 s) integration times.
To explain why the model fails in these cases, it is necessary to inspect individual raw signals with detail.The actual robot accelerations in Cartesian space are not easy to obtain, but a first inspection suggests that they can rise over the 2 g limit during the acceleration peak in Experiments 7 and 8, saturating the LS sensor.It is not clear how an MEMS accelerometer reacts to saturation, but in any case the distance estimation results obtained are subject to objections.
For a given sensor, any of the main methods produce similar results, and the same reasoning made in short distances is valid for the preference of DDI or MSI over OFI.Another outcome of this study is that, for integration times under 2 s, the worst-case RMS error stays in a range between 2 and 4% in percentage, always with a longer time 1.7 s.This is true for all main methods and sensors.Therefore, the average speed of the motion is not, in itself, a key factor for distance estimation.
Conclusions
The measurement of distances using accelerometers is a functionality that is used in many HMS applications.The context of these measurements is an integration time of less than 2 s, and a movement length of up to 2 m.Even in these restricted conditions, it is advisable that the estimation algorithm incorporates some mechanism to limit the growth of the error, which, being quadratic in nature, deteriorates the estimates even for such small integration times.
In this work, we have compared five methods with three different sensors in controlled experimental conditions, which reflect those found in practice in HMS. We have separated these conditions into two groups: short and long distances. The results obtained are as follows: (1) With the static sensor, the RMS error is related to the quality of the sensor in terms of bandwidth, noise density, and sampling frequency. The error-reduction algorithms can bring the error down to as little as one-third of the CMS value. The simple elimination of the mean (MSI method) produces optimal results. The XS and LS sensors have a similar response, better than the WR. (2) With the sensor in motion, the expected error (RMS) in distance estimation is below 4.5% at 1.7 s for both distance ranges, whatever the sensor used. (3) The tested methods produce quite similar results in both groups of experiments. The OFI is sometimes better at short integration times, but no general rule can be defined in this respect. (4) The best sensor for estimating distances turns out to be the LS at all distances. For long distances and average velocities over 50 cm/s, the measurements are no longer reliable in any of the three devices.
The study confirms that it is feasible to use accelerometers to estimate short linear displacements of the body.More than the estimation method applied, the motion kinematic conditions can be a key factor in the performance of this estimation, combined with the type of accelerometer used, and those factors have to be jointly evaluated.
Figure 1 .
Figure 1.Estimation methods compared in this work.
Figure 2 .
Figure 2. (Left) End effector designed to test several sensor units at once.(Right) Robot tool motion and sensor attachment.
Figure 3 .
Figure 3.The complete experimental procedure: (Left) the accelerometer signals were segmented and analyzed, and the erroneous one was discarded; (Center) the estimation methods were applied, and the RMS value was computed at the end of the motion; (Right) RMS values were compared for different integration times, sensors, and methods.
Figure 4 .
Figure 4. Static RMS distance estimation errors for each sensor (markers) and its cubic polynomial model (lines).(a) The CMS method.(b) The LRI method.(c) The MSI and OFI methods.(d) The DDI method.The model fitness is evaluated with statistic (mm, in parenthesis).
Figure 5 .
Figure 5. RMS position error for the Accel.XS (first row), Accel.LS (second row) and Accel.WR (third row) devices: numerical result of the estimations and interpolated model for small integration times (under 2 s).(Left) Short distance estimations.(Right) Large distance estimations.
Figure 7 .
Figure 7.Estimated distance profile using methods MSI (left) and OFI (right) in Experiment 1 and Accel.LS sensor.Although the final RMS are similar in both methods, the distance profile of the OFI method is unrealistic.
Figure 8 .
Figure 8.Average accelerometry profile of Accel.WR (first row) and Accel.XS (second row), with a band corresponding to two standard deviations.(Left) Experiment 7 at 50 cm/s.(Right) Experiment 8 at 100 cm/s.
Table 1 .
Kinematic restrictions of the spatial gait parameters (displacements) of interest.
Table 2 .
Range of motion conditions for the eight designed experiments: four integration times to measure short and range distances, with increasing average speeds.
Table 3 .
Accelerometer characteristics from their datasheet.
Table 4 .
Methods for distance estimation and correction.
Table 5 .
Acceleration and deceleration average peaks (m/s 2 ) for every experiment and accelerometer.
WR 21.2614 7.3826 5.9158 6.3202 12.9859

Table A2 .

Accel.XS in short distance. Results of the experiments in Table 2: RMS position error growth (cm) with integration time (s) for the five estimation methods.
Table A3 .
Accel.LS in short distance.
Table A4 .
Accel.WR in short distance.
Table A5 .
Accel.XS in large distance.
Table A6 .
Accel.LS in large distance.
Table A7 .
Accel.WR in large distance.
|
2018-12-19T14:03:51.222Z
|
2018-12-01T00:00:00.000
|
{
"year": 2018,
"sha1": "a885c6794375c2de013a390d05e7c8d009e790a9",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/18/12/4441/pdf?version=1544865182",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a885c6794375c2de013a390d05e7c8d009e790a9",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
}
|
264668007
|
pes2o/s2orc
|
v3-fos-license
|
Molecular Insights into Systemic Lupus Erythematosus Pathogenesis
Systemic lupus erythematosus (SLE) is a complex, heterogeneous, and chronic autoimmune disorder of an unknown origin. Its clinical symptoms range from a benign skin disorder to severe, life-threatening conditions.1 Immune effector dysfunctions are hallmarks of SLE disease.2 The etiopathogenesis of the altered immune response in SLE remains unknown. SLE is characterized by the presence of auto-antibodies (AutoAbs) for a wide variety of self antigens and circulating immune complexes.1,2 The onset of lupus is variable and may affect all stages of life. The disease predominantly afflicts females in the child-bearing years about 6- to 10-fold more frequently than males. We have not had a new drug in 50 years, because of the unknown etiology of the abnormal immune response. The current lupus therapies are non-specific, symptomatic, and cause significant side effects. In this editorial, I have made an attempt to describe multistep immune alterations that pave the way for the inception and sustaining of SLE pathogenesis, and postulated molecular mechanisms involved in SLE disease onset. Such information will help in better understanding SLE etiopathogenesis and in developing effective and safer strategies to combat SLE as well as other autoimmune diseases.
Systemic lupus erythematosus (SLE) is a complex, heterogeneous, and chronic autoimmune disorder of an unknown origin. Its clinical symptoms range from a benign skin disorder to severe, life-threatening conditions. 1 Immune effector dysfunctions are hall marks of SLE disease. 2 The etiopathogenesis of the altered immune response in SLE remains unknown. SLE is characterized by the presence of auto-antibodies (AutoAbs) for a wide variety of self antigens and circulating immune complexes. 1,2 The onset of lupus is variable and may affect all stages of life. The disease predominantly afflicts females in the child-bearing years about 6-to 10-fold more frequently than males. We have not had a new drug in 50 years, because of the unknown etiology of the abnormal immune response. The current lupus therapies are non-specific, symptomatic, and cause significant side effects. In this editorial, I have made an attempt to describe multistep immune alterations that pave the way for the inception and sustaining of SLE pathogenesis, and postulated molecular mechanisms involved in SLE disease onset. Such information will help in better understanding SLE etiopathogenesis and in developing effective and safer strategies to combat SLE as well as other autoimmune diseases.
A strong association has been found between elevated levels of circulating type I interferons (IFNs) and autoimmune diseases like type I diabetes and SLE. [3][4][5][6] Remarkably, the therapeutic administration of type I IFNs has provoked type I diabetes, SLE, and primary Sjögren's syndrome (pSS) in some individuals. [7][8][9] Constitutive expression of type I IFNs was observed in SLE patients. 5 The expression and up-regulation of several type I IFN-regulated genes is associated with SLE pathogenesis and disease severity. [10][11][12] Natural IFN-α producing cells (NIPCs)/plasmacytoid dendritic cells (pDCs) play a major role in endogenous type I IFN production and NIPCs/pDCs are increased in SLE. 13 Deficient expression of the type I IFN receptor reduced lupus-like disease in NZB mice. 14 These studies demonstrate strong correlation between type I IFNs and SLE pathogenesis. In seeking the molecular mechanism(s) for lupus pathogenesis, I discovered editing in SLE T cell transcriptome, 15 because of the up-regulation of the transcript editing gene, 150 kDa adenosine deaminase that act on RNA 1 (ADAR1). 16 Studies by other investigators confirmed these findings. 10,17,18 The ADARs belong to a family of mammalian RNA editing enzymes, which play an important role in several physiological and pathological processes by catalyzing hydrolytic deamination at C-6 of the adenosine (A) base in certain mRNAs, which leads to inosine (I) formation. 15,16,19 Inosines are subsequently recognized as guanosine (G) by the translation machinery. Such editing will result in A to I (G) transcript mutation. The 150 kDa ADAR1 expression is regulated by type I IFNs while 110-kDA ADAR1 and ADAR2 are constitutively expressed in T cells and other cell types.
The occurrence of conserved RNA secondary structures in the human transcriptome is extensive, 20 which indicates enormous amounts of potential ADAR substrates in the human transcriptome. Widespread ADARs mediated editing of exonic and intronic elements in human RNAs has been identified. [21][22][23][24][25][26][27][28] Most of the editing of exonic elements is random and
heterogeneous. 15,16 The up-regulated 150 kDa ADAR1 randomly edit adenosines located in double stranded base-paired coding and noncoding regions and cause novel mutations in gene transcripts. 16,24,25 Repeated occurrence of such exonic and intronic editing at certain base positions has been identified in SLE T cells and in normal T cells, which express up-regulated 150 kDa ADAR1, at different time points. 16,[24][25][26] In addition to ADARs induced A to I (G) editing, apolipoprotein B-editing enzyme, catalytic polypeptide-1 (APOBEC1) mediated cytidine (C) to uridine (U) editing has been well documented in human transcriptome. 29 A low frequency of G to A and U to C changes were also observed only in SLE and other pathological conditions. 24 The enzymatic machinery responsible for such editing and the molecular mechanism underlying such changes are unknown. Protein molecules translated from edited mRNAs have been identified in normal human B-lymphocytes. 30 Recently, about hundred million A to I editing sites were identified in human transcriptome, 28 which can make possible a generation of extremely diverse transcriptome. Extensive editing of human genome by activation-induced cytidine deaminase (AID) enzyme has been well documented. 31,32 The AID mediated DNA editing may result in the formation of anti-DNA antibodies in addition to its role in the induction of somatic hyper mutations and class switch recombinations in immunoglobulin genes of B-lymphocytes. The occurrence of AutoAbs for mutant DNA molecules in scleroderma has been described recently. 33 Peptidylarginine deiminases (PADs) edit protein molecules by deiminating arginine into citrulline and play a critical role in generating anti-citrulline antibodies in rheumatoid arthritis (RA). The association of anti-citrulline antibodies with RA pathogenesis is well established. 34 The autoAbs to several proteins, RNA molecules were identified in addition to anti-DNA antibodies in SLE. Occurrence of such plethora of autoAbs in SLE by molecular mimicking (self antigens mimicking as viral and/or bacterial products) without alterations, such as editing and/or mutations in DNA, RNA, and protein molecules is impossible. Therefore, it is hypothesized that, altered and/or enhanced DNA, RNA, and protein editing will not only induce altered gene regulations and immune functions but also set the stage for production of novel auto-antigens (autoAgs). The occurrence of such process repeatedly at different time points will result in the generation of autoAbs followed by auto-immunogenicity and the onset of autoimmunity.
The induction of autoimmunity involves two distinct phases. In the first phase autoAgs are formed by the following molecular mechanisms: (a) modulation of DNA, RNA, and proteins by editing and/or by induction of somatic mutations; (b) occurrence of same editing and/or mutation(s) at specific site(s) at different time points; (c) apoptosis of cells carrying such editing and/or mutation(s) and impaired clearance of apoptotic material by nucleases and proteases; and (d) presentation of such altered DNA, RNA, and protein molecules as non self by antigen presenting cells to T cells. Type I IFNs and/or IFN-inducible genes in the presence of autoAgs, will promote the activation and survival of naive T cells by dendritic cells (DCs), which is independent of BCL and BCL XL gene function. 35 Activated T cells will induce B cell stimulation and production of autoAbs. 36 This process needs constitutive and repeated occurrence of specific editing and/or mutations at the same site(s) in DNA, RNA, and/or proteins followed by impaired cellular functions and auto-immunogenicity as described earlier. Such initiated autoimmunity will be sustained by the following events, which occur as second phase; (a) the autoimmune complexes formed in the first phase act as endogenous inducers of type I IFNs, replacing exogenous type I IFNs and continuously inducing the production of type I IFNs by NIPCs; (b) continuous generation of autoAbs and autoimmune complexes is maintained this process; (c) a vicious cycle becomes established 5 ; (d) in addition, superantigens (SAgs), products of type I IFN-regulated HERVs, target the immune system causing massive polyclonal T cell activation, cytokine release, T cell apoptosis, and/or anergy, which aid in enhancing autoimmunity. 37 Therefore, such information indicates why lupus pathogenesis is so complex, variable, and hard to predict definite cause(s) for and raise the following questions. During their life time, all individuals will sustain viral infections, which are combated by endogenous and exogenous IFNs and IFN-regulated genes. However, why do only relatively few people develop autoimmunity, especially certain women during the childbearing years? Why do only a small percentage of cancer patients (20%), who are treated with IFN, express transient autoimmunity and only a fraction of them (1%) acquire SLE? 7 Why do about 20% of normal subjects demonstrate the presence of antinuclear antibodies (ANAs) but fail to develop the onset of autoimmunity and why do age related increase occur in the prevalence of AutoAbs in healthy elderly subjects? 38 These questions will help in hypothesizing that only some editing events and/or mutations in DNA, RNA, and protein will result in the formation of autoAgs, like how only extremely rare somatic mutations initiate cancer induction. These autoAgs will be able to produce autoAbs and induce autoimmunity only when cells containing such autoAgs undergo apoptosis followed by non clearance of apoptotic material. In addition, the induction of autoimmunity mimics the process of immunization, which needs vaccination with pathogenic material followed by repeated booster dose administration to attain good immune response for such pathogens. This may also be true in the process of attaining autoimmunity, in which repeated production of autoAbs to specific edited and/or mutated DNA, RNA, and protein molecules and their availability for developing auto-immunogenicity are important and necessary.
Based on this information, I postulate that, no present and/or future drug(s) will help in curing and/or preventing autoimmunity, specifically SLE after its onset, except for symptomatic treatment and temporary relief. Drug therapy(s) cannot modulate and/or suppress such a multistep and complex autoimmune response generated by altered plethora of self DNA, RNA, and protein molecules, before and after the onset of SLE pathogenesis. Moreover, it will be impossible to delineate the autoimmune response from normal immune response to selectively suppress it, without impairing normal immune response. Therefore, the best strategy to combat this anomaly is the multipronged approach of monitoring and regulating (a) frequent and prolonged expression of type I IFNs; (b) DNA, RNA, and protein editing; (c) apoptosis; (d) clearance of apoptotic material by nucleases and proteases during autoimmunity onset susceptible circumstances such as repeated viral and bacterial infections, radiation exposure, cancer treatment, and in women during child bearing years. Such timely and focused regimen and/or approaches could pave the way for effective and safer ways to prevent and/or control SLE as well as other autoimmune diseases.
|
2017-06-18T16:08:19.238Z
|
2014-01-01T00:00:00.000
|
{
"year": 2014,
"sha1": "1f45f0dfa04e80897eccbb71f3b544f2e3197f8d",
"oa_license": "CCBYNC",
"oa_url": "https://europepmc.org/articles/pmc3964202?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "1f45f0dfa04e80897eccbb71f3b544f2e3197f8d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
12073111
|
pes2o/s2orc
|
v3-fos-license
|
Why is there a Spike in the Job Finding Rate at Benefit Exhaustion?
Putting a limit on the duration of unemployment benefits tends to introduce a “spike” in the job finding rate shortly before benefits are exhausted. Current theories explain this spike in terms of workers’ behavior. We present a theoretical model in which the nature of the job also matters. End-of-benefit spikes in job finding rates are related to optimizing behavior of unemployed workers who rationally assume that employers will accept delays in the starting date of a new job, especially if these jobs are permanent. We use a dataset on Slovenian unemployment spells to test this prediction and find supporting evidence. We conclude that the spike in the job finding rate suggests that workers exploit unemployment insurance benefits for subsidized leisure.
Introduction
In theory, unemployment benefits provide a disincentive to benefit recipients. The greater the level of benefits relative to the expected wage, the less costly the period of the job search, so workers tend to search for jobs less intensely and tend to remain unemployed longer. Putting a limit on the duration of benefits tends to speed up the job search. As the date approaches when benefits will expire, unemployed workers may increase the intensity of their job search and thereby the rate of job-finding.
Moreover, many empirical studies find that the exhaustion of benefits creates a "spike" in the exit rate from unemployment. Usually, these spikes are found as a "by-product" of an analysis focusing on the relationship between potential benefit duration (PBD) and exit rates from unemployment. Moffitt (1985) is an example of an early US study finding benefit exhaustion spikes. He analyzes administrative unemployment insurance records from the Continuous Wage and Benefit History (CWBH) database. As Moffitt indicates, the main advantage of administrative data is the high accuracy, while the main disadvantage is that the variable of interest, the duration of UI benefits, is truncated at the point of maximum benefits. Most individuals in his data have a maximum benefit duration of 26 weeks but some individuals are entitled to extended benefits up to 13 weeks. Moffitt finds that the unemployment exit rate at 26 weeks is 3 times the exit rate one month before benefit expiration. At 39 weeks there is a spike in the exit rate which is about 2 times the regular exit rate. Meyer (1990) analyzes the same CWBH data as Moffitt using a more extensive statistical model, finding similar results: the exit rate in the week before benefit exhaustion is about twice the size of the usual exit rate. Katz and Meyer (1990a) use two datasets, the CWBH dataset and data from the Panel Study of Income Dynamics. The results concerning the spike at benefit exhaustion using the CWBH data are similar to previous studies: in the week of benefit expiration the exit rate is about 80% higher. The survey data allow for a distinction between transitions to jobs at the previous employer (recalls) and transitions to new jobs. In both cases there is a substantial increase in the job finding rate close to benefit exhaustion. 1 Katz and Meyer (1990b) use CWBH data supplemented with telephone interviews to provide additional information. They find spikes in the job finding rates in the exhaustion week which are 2.2-2.3 times the usual job finding rate, both for recalls and new jobs. Card and Levine (2000) analyze administrative data from the New Jersey Extended Benefit Program. They find that the exit rate in the week of benefit exhaustion is about twice as large as the regular exit rate.
1 Katz and Meyer also show that such spikes are not present for UI non-recipients.
There are also quite a few European studies that find spikes near benefit exhaustion. Carling, Edin, Harkman and Holmlund (1996) analyze Swedish data and find a big increase in the outflow from unemployment to labor market programs whereas the increase in the exit rate to employment is substantially smaller. Roed and Zhang (2003) find for Norwegian unemployed that the exit rate out of unemployment increases sharply in the months just prior to benefit exhaustion, with the effect being larger for females than for males. Adamchik (1999) finds a strong increase in re-employment probabilities around benefit expiration in Poland. Lalive et al. (2006) analyze Austrian social security data finding large spikes in the exit rate out of unemployment at benefit exhaustion. Van Ours and Vodopivec (2006), studying PBD reductions in Slovenia, find both strong effects on the exit rate out of unemployment and substantial spikes around benefit exhaustion; the spikes in the job finding rate in the month prior to benefit exhaustion are 2.2-2.5 times as high as the usual job finding rate. In a recent study that focuses exclusively on the end-of-benefit spike phenomenon Card et al. (2007) find that the unemployment exit rate increases much more than the re-employment hazard rate does. Their main conclusion is that the spike in unemployment-exit rates is to a large extent due to measurement error: researchers mistake leaving the unemployment register for job finding. But there are also studies that focus on job finding rates and find benefit exhaustion spikes. This indicates that the benefit exhaustion spike is more than a statistical artefact. As we discuss in more detail in the next section, theoretical work based on non-stationary search theory explains the increase in the job finding rate towards benefit exhaustion (Mortensen (1977) and Van den Berg (1990)). However, these studies do not explain why it falls again after expiration, which is needed to get a spike. 2 Our contribution to the literature is twofold. First, we provide a theoretical explanation for the existence of benefit exhaustion spikes, which are caused by delays in job acceptance. Our theoretical model suggests that spikes in job finding rates are more likely to occur for permanent jobs than for temporary jobs. Second, we use a dataset on Slovenian unemployment spells to test this prediction. The existence of end-of-benefit spikes per se in Slovenia has been shown before, but this was a by-product of an analysis on the impact of changes in potential benefit duration on job finding rates (Van Ours and Vodopivec (2006)). Here, we focus on the nature of the benefit exhaustion spikes. We show that indeed these spikes are more important in transitions from unemployment to permanent jobs than they are in transitions from unemployment to temporary jobs. This paper is set up as follows. In section 2 we present our theoretical model in which individuals optimize the delay in job acceptance. In a stationary labor market the job finding rate equals the job offer arrival rate, but initially, because of the delay period, no jobs are accepted. When an individual approaches benefit expiration the delay is reduced, as no individual will want to accept a job offer beyond the point of benefit expiration. This delay behavior causes a spike at the point of benefit expiration. Delays in accepting job offers will not always occur. In the case of temporary jobs delaying the start will not be acceptable to firms.
Therefore, in the transition rate to temporary jobs an end of benefit spike is less likely to occur.
In section 3 we discuss our data and present some stylized facts. Using data from the Slovenian unemployment register we present "eyeball" evidence supporting our delay theory. Section 4 presents the results of our empirical analysis. We investigate whether job finding spikes at benefit exhaustion are smaller for temporary jobs. We find that this is indeed the case. Section 5 concludes.
2 Suggested explanations for the benefit spike include the strategic timing of job starting dates and implicit contracts between unemployed workers and their previous employers, in which the employers rehire the workers at about the time their benefits expire (Card and Levine, 2000).
However, these are notions rather than formalized theories. Also, the explanation provided by Card and Levine (2000) is very much related to the US labor market as in almost all European labor markets temporary layoffs do not occur.
2 Optimal delay in job acceptance - theory
We aim to explain the spike in the outflow from unemployment around the time the unemployment benefit expires. Therefore, we cannot use stationary models where unemployment benefits are paid irrespective of unemployment duration (or where the benefit entitlement is lost with a constant probability per period). We assume that for a duration T the unemployed worker is entitled to unemployment benefits b > 0. After expiration the benefits drop to a level normalized to zero.
Two well known papers on nonstationarity in job search theory are Mortensen (1977) and Van den Berg (1990). In this setting, Van den Berg (1990) implies that the job acceptance probability increases with duration, jumps up at time T and stays (that) high thereafter. Intuitively, if the benefit has dropped (to zero) the value of unemployment becomes so low that almost any job becomes acceptable.
This reduces the reservation wage and hence increases the job acceptance rate.
Thus, this analysis does not explain a spike in exit rates at T as the exit rate does not fall after T .
Mortensen's (1977) model can explain a spike if one is willing to assume that income and leisure are substitutes. In that case, acceptance rates increase with duration and drop (discontinuously) at T to a lower level. As the benefit drops to zero, leisure substitutes for income thereby reducing search effort (which takes up leisure time). In the case where leisure and income are complements (leisure is more enjoyable if there is more money to spend), the acceptance probability jumps up at T and stays high thereafter. We find the assumption that income and leisure are complements more convincing since this is in line with all literature on the effects of the level of benefits on the job finding rate. 3 Moreover, even in the case of substitutes, it is not clear why these effects would differ for different type of jobs.
There are also studies that use a static labor supply theory to motivate the existence of an end-of-benefit spike. Then, it is assumed that a new job can be found at any time (Meyer (1990) and Moffitt and Nicholson (1982)). At the time a worker loses his job he decides on consumption and the duration of unemployment subject to a budget constraint. At the expiration date T the budget constraint is kinked and hence many indifference curves are tangent at the kink. Therefore, many individuals choose to leave unemployment at benefit expiration, which explains the spike in the outflow rate at T . However, this static model is less suitable as a framework to study benefit exhaustion spikes because it does not explain why the size of the spike depends on the type of job. As we show below in our empirical analysis, benefit exhaustion spikes are larger for permanent than for temporary jobs.
3 Note that the case of income and leisure being substitutes would imply that an increase in unemployment benefits increases the job finding rate, a prediction that is clearly at odds with empirical research (see for example Atkinson and Micklewright (1991)).
In fact, one can argue that in the Meyer (1990) and Moffitt and Nicholson (1982) framework to the extent that temporary jobs are easier to find than permanent jobs, this theory would predict a larger spike for temporary jobs. 4 We propose a model where firms and workers are matched and then decide on the wage and the starting date of the job. We first describe the firm side of the story and then move on to the worker. Then we show how a delay in the starting date generates a spike in the outflow rate.
Delay
Consider a firm that has found a worker with productivity q. This worker yields a surplus to the firm equal to s(q) with s'(q) > 0. Typically, s(q) will be of the form q − w(q) where the wage increases with q but w'(q) < 1.
Let l denote the length of the contract that the firm offers the worker and τ the period for which the worker would like to postpone actually starting to work. Then the discounted value V (q, τ ) for the firm of accepting this worker can be written as the discounted sum of the surplus s(q) over the contract period, which starts only after the delay τ , where ρ denotes the discount rate. We assume here that the firm has work for a period l independent of when this is done. 5
Suppose the worker insists on a delay of τ periods, then this is only acceptable to the firm if V (q, τ ) exceeds the outside option O for the firm. One way to think of the outside option is that the firm can draw a new job applicant at cost c; this will yield the value O. For concreteness, we assume that the worker makes a take-it-or-leave-it offer to the firm about the starting date for the job. 6 Further we assume that the wage cannot be varied with τ . This can be justified in two ways. First, the wage may be given to the firm by an agreement with the labor unions. Second, if the wage is determined by bilateral bargaining between firm and employee, the employee may not be able to commit to a wage before starting the job. That is, if the worker agrees to a low wage (in return for a high τ ), he will renegotiate the wage once he starts the job. Hence the firm assumes that it will pay the employee this (renegotiated) wage anyway, independent of τ . For our purposes we do not need to specify how the wage is determined exactly. We only assume w ∈ [b, q] where b denotes the unemployment benefit level. Nash bargaining between worker and firm will give this result.
Further, we assume that the worker when asking for a delay of τ periods does not know q. 7 Hence the probability that the firm rejects τ , denoted G(τ , l), is the probability that V (q, τ ) falls below O; it can be expressed in terms of H(.), where H(.) denotes the distribution function of s(q).
4 Note that the story in Meyer (1990) and Moffitt and Nicholson (1982) is not necessarily dynamically consistent. When a worker loses his job, a "holiday" of T periods looks nice enough, but a week before T he might like to re-optimize and extend his holiday. With such re-optimization it is not clear that many workers return to work at the kink in the budget constraint. In other words, how can the worker commit to starting a job at time T ? Note also that in our model, the worker commits by signing a contract that specifies his starting date on the job.
5 Alternatively, we can assume that the job involves seasonal work that needs to be finished before a certain date l, say the end of the summer. In that case, the second integral is from τ to l. One can verify that similar results hold in this case. Clearly, also in this case V τ < 0: delaying the start of the job reduces the firm's profits as production and profit opportunities are destroyed.
6 Clearly, other assumptions would work here as well. The important point is that in negotiation with the firm the worker feels some restriction in delaying the start of the job out of fear that V (q, τ ) < O. See footnote 11 below.
7 Alternatively, we can assume that the worker knows his own productivity q but not the outside option O of the firm.
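The display equations in this passage were lost in extraction. Based on the surrounding description (a surplus s(q) earned over a contract of length l that only starts after a delay of τ, discounted at rate ρ, and rejection whenever the match value falls below the outside option O), a plausible reconstruction, offered here only as a sketch, is:
V(q,\tau) \;=\; \int_{\tau}^{\tau+l} e^{-\rho t}\, s(q)\, dt ,
\qquad
G(\tau,l) \;\equiv\; \Pr\big(V(q,\tau) < O\big) \;=\; H\big(\bar{s}(\tau,l)\big),
where \bar{s}(\tau,l) denotes the surplus level at which V(q,\tau) = O; the threshold notation \bar{s} is introduced here purely for exposition and is not taken from the original text. Since V is increasing in s(q) and decreasing in \tau, G is increasing in \tau.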
Hence we find that G is increasing in τ and decreasing in l. 8 Delaying the starting date of the job makes it more likely that the firm will look for another worker. Longer contracts make it more likely that a firm accepts a given delay τ . Assuming that s(q) is uniformly distributed with a constant density h, an explicit expression for G(τ , l) can be derived.
Now consider a worker who is matched with a firm. The worker's wage in this firm is given by w. 9 The worker proposes to delay the starting date by τ periods.
Delay τ is given by the solution to
max t (1 − G(t, l))V e (t, l, σ) + G(t, l)V u (σ)    (2)
where V e denotes the value of having a job with the starting date postponed t periods.
We focus on the case where (temporary and permanent) job opportunities satisfy V e (t, l, σ) > V u (σ). Hence the solution to equation (2) does not imply G(τ, l) = 1.
In the expression for V e (t, l, σ) (equation (3)), the utility (in terms of leisure or home production) of delay is given by v(t, σ). 10 We assume v t , v σ > 0 and v tt < 0, v tσ > 0, v ttσ < 0. In words, σ determines the preference for delay. The higher σ, the higher the utility and marginal utility of delay. Finally, v tσ falls with t. We refer to σ as the value of leisure or home production.
8 To ease notation, we do not assume that the outside option depends on l. In fact, we could make this assumption. As long as the signs on the derivatives of G below are unchanged, this has no effect on our analysis.
9 If the wage will be determined by (re)negotiation after the worker has actually started to work, the worker does not know w exactly as he does not know q. In this case, w denotes the expected wage.
After l + t periods, the worker loses his job and is unemployed again (which gives him an expected discounted value ρV u (σ)). We assume that these discounted effects are rather small in the following sense: the effects over t + l periods are small and dominated by the direct effects of v(t, σ).
The trade off described in equation (2) is between increasing the utility V e if the firm accepts the delay (which happens with probability 1 − G) and increasing the probability of being rejected in which case the worker continues to be unemployed with expected discounted value V u . We assume that d(V e (t, l, σ) − V u (σ))/dσ < 0.
In words, the higher the value of leisure or home production, the smaller the value of employment compared to unemployment. A search and matching framework will give this intuitive result. However, again, we do not need much structure on V u and V e and hence do not specify how they are determined exactly by the working of the labor market.
The first order condition for (an interior solution for the) optimal delay τ can be written as follows. 10 We assume that the worker signs a labor contract stipulating a starting day for the job. Hence we do not allow the worker to use the delay period to search for a better job. Allowing this would increase the size of the spike for the following two reasons. First, delay periods then become cumulative. Second, delay becomes more valuable for the worker giving a higher incentive to bargain for delays.
imply that V eτ > 0. Because delaying the starting date increases the risk of being rejected for the job, the worker asks for less than optimal delay in the sense that at the margin an increase in τ (still) raises V e . 11 We can now derive the following result.
Proposition 1: Assume the optimization problem given by (2) has an interior solution τ . Then we find that Proof of proposition 1 For an interior solution, the second order condition implies that The result then follows from the implicit function theorem: where the first inequality follows from G τ > 0, d(V e − V u )/dσ < 0 and assumption (7). The second inequality follows from G τ l , G l < 0, V eτ > 0 and assumptions (4) and (6).
Hence workers with a higher σ postpone the starting date for a longer period.
This happens for two reasons. First, they get a higher utility from delaying the starting date. Second, even if their proposal is rejected and they lose the job, this is not too bad for them as the loss V e − V u is decreasing in σ.
Jobs with a longer tenure period lead to longer delays in starting the job. This happens for two related reasons. First, higher l makes it more likely that the firm accepts the delay (that is, G l < 0). Second, increasing τ increases the probability that the firm rejects the employee's offer, but at a lower rate as l increases (G τ l < 0).
There are two effects going in the opposite direction. First, higher l implies that the job is worth more, as the drop in income from w to the value of unemployment ρV u happens further away in the future. This increases the loss if the employee's offer of τ is rejected (V el > 0) and may reduce τ . Second, higher l reduces the benefit of postponing the start of the job (V eτ l < 0). This is again due to the effect at τ + l.
However, since these are effects in the future, discounting reduces the size of these effects. If the effects are small enough, they are dominated by the first two effects.
Finally, we can show a second result (referred to below as proposition 2): under additional technical conditions, higher σ leads to a longer delay for each l, while the effect of l on τ becomes smaller as σ increases.
Hence we see that higher σ employees delay more (for each l). And for higher σ the effect of l on τ becomes smaller. If it is correct to assume that women have a higher value of home production (e.g. because they are not the breadwinner), this result explains the findings below that women postpone more jobs (both temporary and permanent) and that the differential effect between temporary and permanent jobs is smaller in terms of delay.
The conditions in the proposition are sufficient but not necessary. Although the conditions are rather technical, they have the following interpretation. We know that dτ /dl > 0 and we want to understand when this derivative is smaller as σ goes up.
There are three effects going against this. First, V eτ σ > 0: higher σ leads to a higher marginal value of delay. This tends to raise dτ /dl as higher l decreases the probability that the match is dissolved. Second V eτ lσ > 0: longer term contracts are more valuable to postpone as σ goes up (that is, V eτ l < 0 increases with σ). This tends to raise dτ /dl as well. Third, if G τ τ > 0 (which is the case above if the worker does not know q and H is uniform), an increase in σ makes the problem less concave, as the loss V e − V u falls with σ. This makes the agent's problem more elastic and tends to blow up dτ /dl as σ increases. Hence we need to assume that these effects are relatively small.
Spikes
The delay in starting a job, described above, can lead to spikes in the unemployment outflow rate. Let α l denote the arrival rate of jobs with duration l for an unemployed worker searching for a job.
Proposition 3: If v t (0, σ) < w, then the outflow to l-jobs at T is given by the mass of workers who were matched with an l-job between T − τ l and T , where τ l denotes the solution to equation (9).
Proof of proposition 3: First, note that v t (0, σ) < w implies that for t > T (when the benefit level b is reduced to 0), the starting date of a job is not delayed at all. In that case, V et = −w + v t (0, σ) < 0 and hence τ = 0. Further, with the assumption made above that v tt < 0 we also find that jobs found at time t < T are never started later than T . Hence the outflow at time T is the "sum" (actually "integral") of workers matched with l-jobs from time T − τ l till T . The stock of "free" workers at time t that can be matched with an l-job is given by workers who up till then have not been matched with any job at all: exp(−Σ j α j t). Q.E.D.
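The displayed expression in proposition 3 is also missing from the extracted text. Combining the elements given in the proof (matching at rate α_l over the window from T − τ_l to T, applied to the stock of still-unmatched workers), the outflow to l-jobs at T presumably takes a form like the following sketch:
\text{outflow}_l(T) \;=\; \int_{T-\tau_l}^{T} \alpha_l \, e^{-\sum_j \alpha_j t}\, dt .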
The assumptions imply that a worker who has found a job never delays the start of the job beyond the expiration date T . Moreover, a worker who finds an l-job at a date between T − τ l and T starts that job at T . By way of illustration we did some numerical simulations. The simulations concern a labor market with temporary jobs that last 12 months and "permanent" jobs that last 32 months. The monthly job offer arrival rates are 0.08 for temporary jobs and 0.02 for permanent jobs. Under some additional assumptions, which are presented and discussed in more detail in Appendix A, we find that with a permanent job the worker delays the start of the job by 3 months, while a worker with a temporary job does not delay at all. Figure 1 shows the evolution of the outflow rates to temporary and permanent jobs in the case where the maximum benefit duration is 9 months. Comparing the outflow rate at the spike with the average outflow at non-spike periods is one way to measure the size of the spike. This we call the relative size of the spike. In the example in the appendix, we find that a delay of three months leads to a relative spike equal to 4.35.
Summarizing, the theory above leads to the following testable predictions. First, spikes are higher for permanent than for temporary jobs. If women have a higher value of home production (σ) than men, we find that women delay the starting date longer than men (for both temporary and permanent jobs). Finally, the difference in spike between permanent and temporary jobs is smaller for women than for men.
3 Data and stylized facts
Data
The empirical analysis is based on administrative records of unemployment spells, combined with selected information on formal employment spells. Here, we are only interested in job finding rates. If individuals left unemployment for other reasons their durations of unemployment are considered to be right-censored.
The data we use are suitable to explore the existence of end of benefit spikes since our data do not only cover the period when workers were covered by unemployment benefits but also the period of transition from unemployment to employment after benefits expired. The date at which individuals started working on a job is not dependent on self-reporting of the unemployed workers but comes from employers. A unique feature of the Slovenian data is that at the start of the post-unemployment job its nature -temporary or permanent -is registered. Furthermore, we observe how long people stay in their jobs.
In our analysis we focus on individuals that were entitled to benefits for a maximum duration of 6, 9 or 12 months. For every unemployed worker after 3 months the replacement rate was reduced from 70% of the previous wage to 60% (subject to a minimum and maximum). Because the end of benefits effects for some workers coincides with a drop in the replacement rate for all workers, we ignore individuals with a potential benefit duration of 3 months. Slovenia reformed its unemployment benefits in 1998; the reform shortened the potential duration of benefits for most groups of workers. We use data from before and after the reform. Appendix B provides more information about the data.
Stylized facts
To illustrate the end of benefits effects, figures 2 and 3 show the relationship between job finding rates and months to benefit expiration for men and women, distinguished by potential benefit duration. 13 There are clear spikes at benefit expiration for each of the three groups of workers. It is also clear that for temporary jobs there is a spike at benefit expiration, but in relative terms -compared to the job finding rates 2 or 3 months before benefit expiration -the spikes are considerably smaller than for the transition rate to permanent jobs.
To get a first impression whether indeed temporary jobs are less likely to generate benefit expiration spikes in the transition rate than permanent jobs, we did some simple calculations. For each group of workers with the same potential benefit duration we divided the permanent job finding rate in the month of benefit expiration by this rate in the month prior to that. We did the same for the temporary job finding rate. The first two columns of Table 1 show the outcomes of these calculations. Indeed, whereas for males the average relative spike for permanent jobs is 3.37, it is only 1.43 for temporary jobs. For females the average relative spike for permanent jobs is 3.91, while for temporary jobs it is 1.64.
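As an illustration of the calculation just described, a minimal sketch of how a relative spike is obtained from monthly job finding rates; the numbers below are hypothetical placeholders, not the actual Slovenian estimates:

# relative spike = job finding rate in the month of benefit expiration
# divided by the job finding rate in the month before expiration
def relative_spike(monthly_rates, expiration_month):
    # monthly_rates[m] is the job finding rate in month m of the unemployment spell
    return monthly_rates[expiration_month] / monthly_rates[expiration_month - 1]

# hypothetical example for workers entitled to 9 months of benefits
permanent_rates = {7: 0.020, 8: 0.022, 9: 0.070}  # assumed values, for illustration only
temporary_rates = {7: 0.060, 8: 0.058, 9: 0.085}
print(relative_spike(permanent_rates, 9))  # about 3.2, a pronounced spike
print(relative_spike(temporary_rates, 9))  # about 1.5, a much smaller spike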
The top part of Figure 4 shows job separation rates for men, distinguished between temporary jobs and permanent jobs. As was to be expected, the job separation rates from temporary jobs are much higher than the job separation rates from permanent jobs. There is a large spike in the job separation rate for temporary jobs at 9 months indicating the importance of fixed-term contracts of that particular duration. Similarly there are also spikes at 3, 6 and 12 months. The bottom part of Figure 4 shows the job separation rates for women, which are very similar to those of men. Clearly, individuals on permanent jobs do not leave their jobs quickly; 12 months after starting on a permanent job 94% of the men and 95% of the women are still employed. Of the workers on temporary jobs, after 1 year 59% of the men and 66% of the women are still employed.
13 The job finding rates are calculated on a monthly basis taking right-censored durations into account. The same holds for the job separation rates in figure 4. Note that we can identify the spikes because they occur at different unemployment durations for different groups of workers.
Otherwise, we could not distinguish the spike from the effect of duration dependence.
4 Empirical analysis
Job finding rates
The use of hazard rate models and the data with individuals facing different potential benefit durations allow us to identify the end-of-benefit spikes. We distinguish between transition rates to permanent and to temporary jobs and start with a setup that is in line with our theoretical model. The rate at which individuals find a permanent or a temporary job at unemployment duration t conditional on observed characteristics x and unobserved characteristics u is assumed to have a proportional hazard specification, where i indicates the type of job (p=permanent, n=temporary), β is a vector of parameters and λ represents individual duration dependence, which is modeled in a flexible way by using step functions over duration intervals, where k (= 1,..,4) is a subscript for the duration interval. We distinguish four intervals, monthly for the first three months, and the fourth interval larger than three months.
For reasons of normalization we impose µ i,4 = 0. If a period of delay exists, at least the initial µ-parameters in the job finding rate concerning permanent jobs should be smaller than zero. For temporary jobs this should not be the case. Furthermore, I s is an indicator for the month of benefit expiration (s = 6, 9, 12). The µ-parameters measure the pattern of duration dependence, and δ indicates the size of the spike in the month of benefit expiration. If the period of delay exists we expect a spike to be present in the job finding rate for permanent jobs, while such a spike should be less important in the job finding rate for temporary jobs.
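The displayed hazard specification itself is missing from the extracted text. A standard mixed proportional hazard form consistent with the parameters described above would be the following sketch (the exact functional form used by the authors may differ in detail):
\theta_i(t \mid x, u_i) \;=\; \exp\!\big(x'\beta_i + \lambda_i(t) + \delta_i I_s(t) + u_i\big),
\qquad
\lambda_i(t) \;=\; \sum_{k=1}^{4} \mu_{i,k}\, I_k(t), \qquad i \in \{p, n\},
where I_k(t) indicates duration interval k and I_s(t) indicates the month of benefit expiration.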
The conditional density function of the completed unemployment duration t i that ended in a transition towards a job of type i can be written in the usual competing-risks form. We assume that the unobservables in both job finding rates are from discrete distributions with two points of support, which we assume to be perfectly correlated.
Then, the joint distribution also has two points of support, p 1 and p 2 :
Pr(u p = u p,a , u n = u n,a ) = p 1 ,
Pr(u p = u p,b , u n = u n,b ) = p 2 .
Because the hazard rates also contain constant terms, we normalize u p,a = u n,a = 0.
The discrete distribution is supposed to have a logit specification with p 1 = exp(α)/(1 + exp(α)) and p 2 = 1/(1 + exp(α)). We remove the unobserved components by taking expectations. The parameters are estimated with the method of maximum likelihood, taking into account that some durations are right-censored.
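The density and the expectation over the unobservables are also dropped in the extraction. Under the competing-risks setup just described they would take roughly the following form (again a sketch, not a verbatim reproduction of the original display):
f_i(t \mid x, u) \;=\; \theta_i(t \mid x, u_i)\, \exp\!\Big(-\int_0^{t}\big[\theta_p(s \mid x, u_p) + \theta_n(s \mid x, u_n)\big]\,ds\Big),
\qquad
f_i(t \mid x) \;=\; p_1\, f_i(t \mid x, u_a) + p_2\, f_i(t \mid x, u_b),
with the log-likelihood summing log f_i(t | x) over completed spells and the log of the corresponding survivor function over right-censored spells.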
The analyses are done separately for males and females to account for possible differences in labor market behavior. In addition to this distinction by gender, the effects of the following personal characteristics are taken into account: age, education, family situation, health, and calendar period of inflow into unemployment (see the appendix for details).
Panel a of table 2 shows the parameter estimates for the baseline model. Age has a negative effect on all job finding rates. Education has a positive effect on the job finding rate concerning permanent jobs but has no effect on the rate by which individuals find temporary jobs, with the exception of higher educated males, who have a smaller transition rate to temporary jobs. Family conditions do not affect the transition rate to temporary jobs, but the effect for permanent jobs differs for males and females. Concerning permanent jobs, males who have dependent family members have a higher job finding rate than males who do not, but females with dependent family members have a lower job finding rate than other females. Bad health reduces all job finding rates substantially.
There is also evidence of unobservables affecting the job finding rates. Conditional on the observable characteristics and the elapsed duration of unemployment there is a group of 87% of the males that has a high job finding rate both to permanent and temporary jobs, while the remaining 13% has substantially lower job finding rates. For women these percentages are 83 for the group with high job finding rates and 17 for the group with low job finding rates. 14 The pattern of duration dependence is different for permanent jobs and temporary jobs. For permanent jobs the job finding rate is low in the first months of the unemployment spell, which is support for the existence of a delay period. The transition rate to temporary jobs in the first months is higher than later on.
The most important parameter estimates refer to the spike in job finding rates at benefit exhaustion. This spike is identified by comparing the job finding rate in the month of benefit expiration for some groups of workers with the identical non-expiration month for other groups of workers. It appears that there are substantial spikes. The job finding rate concerning permanent jobs in a month of benefit expiration is about 3 times as high for men and 3.7 times as high for women as in the same month without benefit expiration. Also in the transition rates to temporary jobs we find spikes, which are about 50% (men) to 75% (women) higher than regular job finding rates. The difference between the spikes in the job finding rates for permanent and temporary jobs supports our theoretical model. Apparently for temporary jobs delaying acceptance is more difficult. Hence the spike is smaller.
Panel b of table 2 shows the parameter estimates for the spike if we impose that there is no duration dependence. There is a clear drop in the log-likelihood value from which we conclude that we cannot reject the pattern of duration dependence found in panel a.
Panel c of table 2 shows the parameter estimates for the spike if we introduce a very flexible specification of duration dependence with monthly intervals for the first six months, and after that the intervals 6-9, 10-12, 13-18 months and 18+ months.
Furthermore, we introduce an indicator for benefit expiration because individuals may increase their search intensity after benefits have expired. 15 As shown the spike in the job finding rate for permanent jobs is substantially larger than for temporary jobs. However, as in the baseline estimates we cannot ignore the existence of a benefit spike in the exits to temporary jobs.
14 We investigated whether it was possible to estimate an extended distribution of unobserved heterogeneity but we were not able to identify a third mass-point. 15 Note also that we can still identify δ because the spike occurs at different unemployment durations. If not, we could not distinguish the spike from the effect of duration dependence.
Job separation rates
The type of post-unemployment jobs is registered as being permanent or fixed-term, which we interpret as being temporary. The nature of the job is labeled at the start, but permanent jobs may not last long and temporary contracts may be extended so that temporary jobs may last quite some time.
To investigate the determinants of job separations we estimate a proportional hazard model in which the job separation rate from jobs of type i at employment duration t is conditional on observed variables x and unobserved characteristics v, and where z is a vector of variables that indicate when the unemployed left unemployment - the month of unemployment and whether or not it was the last month of benefits before expiration. Furthermore, β s and γ s are vectors of parameters, and λ s represents individual duration dependence, which is again modeled in a flexible way by using step functions over duration intervals, where k (= 1,..,10) is a subscript for the duration interval and we consider the following ten intervals: 1, 2, 3, 4, 5, 6, 7-9, 10-12, 13-18, 18+ months. For reasons of normalization we impose µ s i,1 = 0. The conditional density function of completed job durations and the likelihood function are set up as before. As with the job finding rates, also for the job separation rates we assume that the unobservables are from discrete distributions with two points of support, which we assume to be perfectly correlated.
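As with the job finding rates, the displayed separation-rate specification is lost in extraction; a sketch consistent with the description above (and only a sketch) is:
\theta^{s}_i(t \mid x, z, v_i) \;=\; \exp\!\big(x'\beta^{s}_i + z'\gamma^{s}_i + \lambda^{s}_i(t) + v_i\big),
\qquad
\lambda^{s}_i(t) \;=\; \sum_{k=1}^{10} \mu^{s}_{i,k}\, I_k(t).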
The parameter estimates are presented in Table 3. The duration of permanent jobs for males is affected by their age, education and health. Older, lower educated males with bad health have higher job separation rates than their counterparts.
For females the separation rate from permanent jobs is not affected by any personal characteristic. The duration of temporary jobs is only affected by age and family situation. Older individuals are more likely to lose their temporary job quickly.
The effect of family situation differs for males and females. Whereas males with 1 dependent family member are more likely to lose their temporary job, females with 2 or more dependent family members are less likely to lose their job.
Remarkably, for permanent jobs the duration doesn't depend on the previous unemployment spell. It doesn't matter whether an unemployed worker finds a permanent job early on in the unemployment spell or much later, the job separation rate is equally high. It's also irrelevant whether or not the unemployed worker finds a permanent job in the month of benefit expiration. Apparently, it is not just the "strong" worker that postpones his or her start until the moment at which benefits expire. This is support for our hypothesis that it is the delay in acceptance which is driving the benefit spike. For temporary jobs the previous unemployment spell has some importance. Especially workers that find a temporary job in the first month of their unemployment spell are more likely to lose this job quickly. Males that find a temporary job in the month of benefit expiration are less likely to lose this job quickly. This could point at reverse causality. Males that have the opportunity to start on a long-term temporary job are more likely to postpone this start until the month of benefit expiration. Again, this would be support for our delay theory.
Concerning unobserved heterogeneity the results are different for permanent jobs and temporary jobs. Whereas for permanent jobs we found no indication of unobserved heterogeneity, for temporary jobs we do find that unobserved heterogeneity affects the separation rate. Most temporary jobs last only a short time but there are also temporary jobs which last very long. 16 Conditional on the observed characteristics, the unemployment history and the duration of the employment spell there is a group of temporary jobs of 82% for males (80% for females) that last only briefly, while the complementary 18% (20% for females) lasts very long.
Temporary jobs
4.3.1 Finding and separating
The relationship between the benefit expiration spike and the duration of the first job may be affected by correlation between unobservables in the job finding rate and the job separation rate. To investigate this we estimate a bivariate duration model with correlated error terms. We do the estimates for temporary jobs, separately for males and females. 16 In the estimates one of the mass points turned out to be very small, converging to minus infinity.
In both the job finding rate for temporary jobs and the job separation rate from temporary jobs we introduce unobserved heterogeneity. Both rates are now specified as before, augmented with the terms u n and u s n , which represent unobserved heterogeneity. As before, we assume that the unobservables in both the job finding rate and the job separation rate are from discrete distributions with two points of support which are integrated out of the likelihood specification.
The main parameter estimates are summarized in Table 4. As shown, the second mass points are negative for the job finding rate and positive for the job separation rate. Conditional on the observed characteristics and the elapsed durations of the unemployment and employment spells, individuals that have a low job finding rate also have a high job separation rate. If it takes a long time to find a job, the job found doesn't last very long. However, the size of the benefit exhaustion spikes is not influenced by the introduction of unobserved heterogeneity. They are almost identical to the ones presented in Table 2.
The duration of temporary jobs reconsidered
Our theory predicts that there is a positive correlation between the expected duration of a job and the size of the benefit expiration spike. We showed that indeed for permanent jobs there is a larger spike in the job finding rate than for temporary jobs. However, there is a large variation in the duration of temporary jobs. Some ex ante temporary jobs turn out to be ex post long employment spells. Our theory also predicts that the benefit expiration spike should be bigger for long temporary jobs.
To get a first impression of whether indeed shorter temporary jobs are less likely to generate benefit expiration spikes in the transition rate to these jobs, as before we calculated the relative spike as the job finding rate in the month of benefit expiration divided by the job finding rate in the month previous to that. 17 The results of these calculations are shown in columns (3) and (4) of Table 1. For males with a potential benefit duration of 6 months the relative spike for short-term temporary jobs is 0.79, i.e. there is no spike at all. For the same category of workers the relative spike for long-term temporary jobs equals 1.76. On average there is no relative spike for short-term jobs while the relative spike for long-term jobs is 1.82. These findings confirm our theoretical model. For females the results are less clear. On average there are spikes for short-term temporary jobs and for long-term temporary jobs. The last type of spike is larger than the first type but the differences are small. This is consistent with our results (in propositions 1 and 2) that for workers with a higher value of home production (assuming this is the case for women) both types of jobs are postponed and the difference in delay is smaller (compared to workers with lower σ).
17 Note that some job durations were right censored with a duration less than 1 year. This causes a bias in the calculations for jobs that lasted less than 1 year.
The Slovenian labor market reconsidered
An important issue that may arise when analyzing the Slovenian labor market is the interpretation of behavior of workers in relation to the informal sector. Vodopivec (1995) indicates that in the early 1990s unemployed workers in Slovenia might have collected unemployment compensation and worked at the same time in informal employment. Vodopivec claims that during 1990-92 there was a tendency among the recipients of unemployment benefits in Slovenia to stay unemployed until their benefits expired before taking a job. If so, benefit exhaustion spikes wouldn't have much to do with delay behavior as we claim. Instead, they would simply reflect the end of a period of waiting for benefits to expire. Nevertheless, Slovenian legislators in 1993 and 1994 enacted several laws to prevent this type of waiting behavior from happening. In 1998, there was a major reform of unemployment benefits drastically reducing the potential benefit duration, roughly by half for most groups of recipients. The 1998 reform (Van Ours and Vodopivec 2006) also called for several measures aimed at speeding up benefit recipients' reemployment, including improvement in employment services, the obligatory preparation of a reemployment plan for each benefit recipient, and more frequent contact between counselors and recipients. Furthermore, the reform called for stricter monitoring of eligibility. Benefit recipients had to make themselves available to employment office counselors several hours a day. For the first time, inspectors (a special arm of employment offices) would check to see if benefit recipients were in fact unemployed (inter alia, by paying home visits to benefit recipients) and actively searching for a job. To the extent that collecting benefits and working in the informal sector until benefits expire was an issue, this should have been more prevalent before the 1998 reform. Tougher monitoring of the unemployment status should have ruled out a lot of this type of abuse. Nevertheless, to investigate this issue in more detail we performed separate estimates on data collected before and after the reform. Table 5 shows the relevant parameter estimates. Clearly there is no tendency for exhaustion spikes in the job finding rates to be smaller after the reform. And, the difference in the size of the spikes between permanent jobs and temporary jobs is present as much after the reform as it was before the reform. From this we conclude that although we cannot rule out some influence of the informal sector, this doesn't seem to be an important explanation for the existence of the end-of-benefit spikes.
Conclusions
Putting a limit on the duration of unemployment benefits tends to introduce a "spike" in the job finding rate just before benefit exhaustion. Previous studies refer to two alternative explanations for the existence of such a spike. First, a static labor supply model in which a kink in the budget constraint causes many individuals to choose the same benefit duration. Second, a non-stationary search model in which the job finding rate is slowly increasing due to increasing search intensity and falling reservation wages. In neither of the two models is the nature of the job important.
Our study presents a theoretical model in which the nature of the job affects the size of the end-of-benefit spike. In our model spikes in the job finding rates are caused by delays between job finding and the start of the job. Workers prefer to delay and make an offer to the firm about the starting date for the job. The firm will only accept a delay if the value of the job including delay is larger than the value of searching for a new worker, who may (also) have a preference for delay. When workers decide about their offer to the firm they take into account that the firm might reject the offer if the delay is too long. They also take into account that long-lasting jobs have more value to the firm, so for these jobs employers are more likely to accept longer delays. From our theoretical model we derive that delays are more likely to occur for permanent jobs than for temporary jobs. Our model assumes that workers who have found a job will never delay the start of the job beyond the expiration date of their benefits, since that would be too costly. This causes many unemployed to leave unemployment at benefit exhaustion, thus causing a spike in the job finding rate. Since the delay period is longer for permanent jobs, the size of the end-of-benefit spike will be larger for permanent jobs than for temporary jobs. We investigate the validity of our model using Slovenian unemployment data which have the unique feature that the temporary or permanent nature of the post-unemployment job is registered. Indeed, we find that spikes are more likely to occur in transitions from unemployment to permanent jobs.
All in all, we conclude that end-of-benefit spikes in job finding rates are related to optimizing behavior of unemployed workers who rationally assume that employers will accept delays in the starting date of a new job. Thus the spikes in the job finding rate suggest that workers exploit unemployment insurance benefits for subsidized leisure.
Appendices
Appendix A: Numerical simulations on delay period and spike
To illustrate the relationship between the delay period and the spike, we consider two types of job: temporary (l = 12 months) and "permanent" (l = 32 months).
Job arrival rates per month for these jobs equal α 12 = 0.08 and α 32 = 0.02, respectively. We normalize the wage at w = 1 and assume b = 0.6. The discount rate equals ρ = 0.1/12 per month, that is 10% on a yearly basis. Instead of specifying the uncertainty of the worker over q and the firm's outside option O, we directly specify G(t, l) = t/(t u (1 + l(t u − t))) with t u = 6 months. The interpretation of this function is as follows. No firm accepts a delay of longer than 6 months (i.e. G(t u , t u ) = 1).
Further, higher l reduces the probability that an offer t < t u is rejected. We specify the value V e as in equation (3) with the value of losing the job in l + t periods' time equal to b + σ. 18 Finally, we do not model the precise search and matching on the labor market. We simply assume that the loss of being rejected by the firm, V e − V u , equals 0.1(1 − σ) times the value of the job V e . That is, for an agent with σ = 0, the loss of losing this job equals 10% of the value of the job. This is not a big loss. Intuitively, the worker receives unemployment benefits b (till period T ) and will be matched with other jobs in the future. The loss falls with σ and when σ = w = 1 (home production is as productive as an outside job) there is no loss at all. As above, the worker chooses τ to maximize W = V e − G · (V e − V u ). Table A1 summarizes the outcomes for two values of σ. With σ = 0.5, a worker who finds a temporary job does not delay at all. This is roughly consistent with what we see in the data for men. 19 With a permanent job, this worker delays the start of the job by 3 months. A worker with a higher value of leisure, σ = 0.8, delays the start of both jobs. Note however, that the difference in delay is smaller for σ = 0.8 than it is for σ = 0.5 (as 3 − 0 > 5 − 3.5). This is consistent with proposition 2.
To get an idea what determines the size of a spike, we go back to the example with σ = 0.5. The stock of "free people" at time t looking for a job is given by s(t) = exp(−(α 12 + α 32 )t). However, this stock is not observed. The observed stock of unemployed at t evolves as follows. For t < τ 32 only people matched with a temporary job leave the observed stock of unemployed.
Then for t ∈ [τ 32 , T ), in addition to people leaving for temporary jobs, we have people leaving for permanent jobs who were matched with these permanent jobs τ 32 periods ago.
At t = T the spike in the outflow to permanent jobs occurs; its size is compared with (see Table A1) the average outflow rate to a permanent job before and after the spike (i.e. over the interval [0, t] for t > T ). This average outflow rate is approximated by α 32 . 20 Further note that the outflow rate to permanent jobs just before T is higher than after T . This is due to the fact that before T , the observed outflow equals α 32 s(t − τ 32 ). That is, it is determined by a delayed stock which is higher (as the stock falls over time) than the current stock. After T the outflow is given by α 32 s(t) and hence the outflow rate is equal to α 32 = 0.02. Note that the spike with "relative size" 4.35 is generated by a three month delay in accepting permanent jobs. That is, the spike itself does not directly give us the delay in months.
Now consider the temporary outflow rate. For months before T this rate is below α 12 = 0.08. This is because the stock of "free" agents that would accept a temporary job right away is smaller than the observed stock of unemployed, which includes workers that have accepted permanent jobs but have not started yet due to the delay τ 32 . This causes the entry for the spike in table A1 to be smaller than one (spike 12 /α 12 = 0.9). 21 As can be seen in table A1, with σ = 0.8 we get spikes (bigger than 1) for both types of jobs. But the difference in spikes is smaller than for σ = 0.5 (as 5.39 − 3.72 < 4.35 − 0.9).
18 That is, for simplicity we do not model the probability of finding another job again after losing this one in l + t months.
19 The column labeled spike l /α l will be discussed below.
20 The simulations calculate outflows per day. The sum of these outflows over the days of the months is divided by the average stock in that month. This approximation of the outflow rate per month also causes small deviations from the true α's.
Appendix B: Variables used in the analysis
In the analysis we use the following variables:
• Age: continuous variable
• Potential Benefit Duration (PBD) and time of entrance into unemployment (before or after the policy change): 5 dummy variables, Group 2 = PBD of 9 months, entrance before, Group 3 = PBD of 6 months, entrance after, Group 4 = PBD of 12 months, entrance before, Group 5 = PBD of 6 months, entrance after, Group 6 = PBD of 9 months, entrance after; reference group = PBD of 6 months, entrance before.
The characteristics of our dataset are presented in Table B1.
Note that the average relative spike is calculated as the job finding rate in the month of benefit expiration divided by the job finding rate in the month previous to that. Also note that the distinction between the duration of temporary jobs is based on ex post information.
Tab. 2: Parameter estimates job finding rates - baseline model. Not reported are the parameter estimates related to the dummy variables for each of the 6 groups of unemployed; absolute t-statistics in parentheses; a ** (*) indicates significance at a 95% (90%) level.
Tab. 5: Parameter estimates spikes - before and after the 1998 reform. The "total" estimates are the same as those reported in Table 2 panel c; absolute t-statistics in parentheses; a ** (*) indicates significance at a 95% (90%) level.
|
2014-10-01T00:00:00.000Z
|
2009-10-01T00:00:00.000
|
{
"year": 2009,
"sha1": "209d8276beea7d801fccde1177162275ad2a46c2",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10645-012-9187-8.pdf",
"oa_status": "HYBRID",
"pdf_src": "ElsevierPush",
"pdf_hash": "b709f5b762caa3d1591bb21b56f8cb9d4c62c4a7",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
}
|
129972233
|
pes2o/s2orc
|
v3-fos-license
|
The ensemble scenarios projecting runoff changes in large Russian river basins in the 21st century
An approach is presented for carrying out a long-term projection of river runoff changes in large Russian river basins in the first three decades of the 21st century. These changes may be caused by climate warming and socio-economic factors. The approach utilizes a method for scenario estimation of runoff changes with a range of possible climate warming effects. This range is chosen by generalizing calculation results obtained by using an ensemble of global climate models for two contrasting scenarios (A2 and B1) of globally-averaged air temperature rises. The approach also utilizes a method for alternative scenario estimation for water consumption as related to socio-economic changes. The estimates show that the expected runoff changes in the first third of this century due to climate warming scenarios can compensate the runoff decrease caused by the realization of some of the scenarios for socio-economic changes in the Volga River basin. The same compensation does not occur in the Don River basin, where negative effects are expected for the regional ecology.
INTRODUCTION
Global climate warming and socio-economic changes are the leading factors in determining the future state of large river basin water systems which play an important role in the economic development of Russia. For this reason, it is necessary to generate integrated scenarios of river runoff changes within the large river basins, which would take into account the long-term probable changes of the two factors. Such scenarios should provide a basis for an ecologically safe management of water systems in the future.
According to the results of the previous investigations, one-scenario forecasts, let alone long-term forecasts, do not generally provide a comprehensive picture of future conditions. This is particularly true for the rapidly changing economic and water management activities. Therefore, it seems important to predict the future situation based on different scenarios of its development. This is the reason why so much attention is given to the development of long-term scenario forecasting for the hydrological effects of global climate change and the water management system transformation in large Russian river basins.
In recent years, the authors have developed a methodology for long-term scenario projections of river runoff changes, which includes a water balance model, methods for the assimilation of global climate warming scenarios, methods for the scenario estimates of the water management system transformation, and GIS technologies.
RESEARCH METHODOLOGY
The approach taken to create a long-term scenario projection of river runoff changes in large Russian river basins in the first third of the 21st century includes two methods: (1) a method generating scenario estimations of runoff changes for a range of probable climatic warming scenarios based on the generalization of calculated results obtained by using an ensemble of global climate models, and (2) a method for alternative scenario estimations for the water management system transformation caused by socio-economic changes and their impact on the river runoff.
Monthly water budget model
The model and its application to the largest river basins of the Russian plain are considered in detail in the following publications: Georgiadi and Milyukova (2002, 2010) and Georgiadi et al. (2011). This model can be categorized among the macro-scale hydrological models that have been actively developed in recent years (Willmott et al. 1985, WATCH 2008). The model is based on the conservation equation for the long-term monthly average water balance of river watersheds. The model simulates the following processes: infiltration and moisture accumulation in the soil; evaporation (based on a modified Thornthwaite method (Willmott et al. 1985)); water accumulation in the snow cover and snow melting (based on V.D. Komarov's method, Manual on Hydrological Forecasts 1989); movement of the freezing front, calculated from a simplified solution of the classical single-front Stefan problem (Belchikov and Koren 1979; Pavlov 1979); and formation of surface, subsurface and groundwater flow in the rivers and full river runoff. In the monthly water balance model, the changes in the river runoff and other water balance elements are estimated in the cells of a regular grid, which facilitates the coupling of the model with climate model simulations.
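To illustrate the type of monthly bookkeeping such a model performs in each grid cell, the following is a minimal sketch of a bucket-style water balance. It is not the authors' formulation: the parameter names (soil_capacity, melt_rate, baseflow_coef) and the simplified snow, evaporation and runoff rules are assumptions chosen only to make the idea concrete.

# Minimal monthly bucket model for one grid cell (illustrative sketch only;
# parameters and process simplifications are assumptions, not the authors' model).
def simulate_cell(precip, temp, pet, soil_capacity=150.0,
                  melt_rate=4.0, baseflow_coef=0.3):
    """precip, temp, pet: monthly series (mm, deg C, mm). Returns runoff (mm/month)."""
    snow, soil, runoff = 0.0, soil_capacity * 0.5, []
    for p, t, e in zip(precip, temp, pet):
        if t < 0.0:                          # cold months: precipitation accumulates as snow
            snow += p
            rain = 0.0
        else:                                # melt roughly proportional to air temperature
            melt = min(snow, melt_rate * t * 30.0)
            snow -= melt
            rain = p + melt
        soil += rain
        evap = min(soil, e)                  # evaporation limited by available soil moisture
        soil -= evap
        surplus = max(0.0, soil - soil_capacity)   # quick (surface) flow
        soil = min(soil, soil_capacity)
        q = surplus + baseflow_coef * soil         # plus slow (subsurface/ground) flow
        soil -= baseflow_coef * soil
        runoff.append(q)
    return runoff

Running this cell model over all grid cells with ensemble-mean scenario deltas added to the observed temperature and precipitation series mimics, in a very reduced form, the coupling of the water balance model with the climate scenarios described below.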
The range of probable climatic changes, estimated by calculating deviations of climatic elements from their recent values, is used as a climatic scenario. The calculations are made for the two scenarios with the most (A2) and least (B1) intensive rises of globally averaged air temperatures. Calculation results obtained by using 10 global climate models are employed. These were included in the IPCC experiment "20C3M-20th Century Climate in Coupled Models". The climate models were chosen from more than 20 models by comparing the present-day observed climatic conditions with the simulated ones (Georgiadi et al. 2011). The range of scenario deviations of mean monthly air temperatures and precipitation totals is determined for each scenario by averaging the results obtained from each of the chosen climate models in the ensemble.
Method for alternative scenario estimations for water management system transformation
The methodology of estimating the impact of socio-economic changes on river runoff resources (Koronkevich 1990, Georgiadi et al. 2009, 2011) is based on the assumptions of different rates of socio-economic development of a country and its regions and on the scenarios built around using different levels of water consumption and the water system protection technologies in place.
Major water consumers (household and industrial water use, irrigation, and rural water supply) are taken into account. Scenarios of household water use changes are distinguished with regard to urban and rural population dynamics.
Scenarios of accelerated, moderate and minimum socio-economic development are considered. The scenarios are based on the current specific level of water consumption and its maximum, average and minimum decrease. Changes in storage evaporation rates and land treatment effects are also taken into account.
It is essential to understand that over the past decades the water consumption dynamics in the Volga and Don river basins have in many respects been close to those typical for Russia as a whole (Fig. 1(a),(b)). This makes it possible to use economic and water consumption changes recorded or predicted for the whole of Russia when working out basin scenarios. At the same time, the natural and economic peculiarities of the individual basins have to be taken into account in the prediction scenarios as well. The general algorithm of the method for alternative scenario estimations of the water management system transformation comprises two stages: pre-prediction and prediction.
The pre-prediction stage includes the following steps: general orientation of the method development; analysis of natural conditions and space-time regularities of water resources distribution and water resources quality; analysis of economic activity and its impact on water systems; analysis of water system state dynamics; and selection of operating units.
The prediction stage consists of the following steps: consideration of the expected natural hydrological and climatic situation; consideration of predicted population and economic development; estimation of probable changes in water use technology; consideration of the aggregate of anthropogenic and natural climatic factors; and scenario verification from water economy balances. Estimates of future anthropogenic impacts on water resources for the years 2025-2030 are based on three scenarios for population change (average, maximum and minimum), three options for economic development (inertia, energy and resource-based, and innovative) and four scenarios of specific water consumption change (retained at the basic levels of 2000-2005, or average, maximum and minimum reduction). According to the official statistical forecast, a 1.05-1.15 times population decline is expected by 2025-2030. The Ministry of Economic Development of the Russian Federation gives the following economic growth rates for the same period: a 3-5% per year increase in industry, 2-4% in agriculture, and 1-3% in other sectors (Kuzyk and Yakovets 2006).
Possible improvements in water use technology provide an opportunity to plan for a 1.2-5 times reduction in waste delivery (Laskorin et al. 1981, Demin 2005). A 10% decline in per capita domestic water use is expected under the scenario of average specific water consumption changes, 20% under the maximum changes and 5% under the minimum changes. Industrial water use in the Volga and Don basins is projected to show a 1.7 times reduction under the scenario of maximum specific water consumption changes, a 1.5 times reduction in the medium scenario and a 1.2 times reduction in the minimum scenario, which is slightly lower than the average reduction for Russia as a whole, taking into account the possible siting of water-intensive industries in areas rich in water resources. In agriculture, the consumption of water for irrigation will decrease by 1.1-1.5 times. This reduction is less than the present-day average for Russia because, for example, in the Volga and Don basins sprinkler irrigation is already used over large areas, and it is a more economical form of irrigation than the contour ditch irrigation prevailing in regions such as the northern Caucasus.
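The scenario arithmetic behind such projections is essentially multiplicative: base-year water use per sector is scaled by the assumed change in activity (population or output) and divided by the assumed reduction factor in specific water consumption. A hedged sketch of this bookkeeping is given below; the base values and factors are placeholders chosen for illustration, not the study's inputs.

# Illustrative scenario bookkeeping for future water abstraction (km3/year).
# All numbers below are placeholders, not the study's data.
def project_abstraction(base_use, activity_growth, specific_use_reduction):
    """Scale base-year use by activity change, then divide by the assumed
    reduction factor in specific (per-unit) water consumption."""
    return {sector: base_use[sector] * activity_growth[sector]
                    / specific_use_reduction[sector]
            for sector in base_use}

base_use = {"domestic": 10.0, "industry": 8.0, "irrigation": 6.0}        # km3/yr, assumed
activity_growth = {"domestic": 1 / 1.1,          # population decline by 1.1 times
                   "industry": 1.04 ** 25,       # ~4% annual output growth over 25 years
                   "irrigation": 1.0}
specific_use_reduction = {"domestic": 1 / 0.9,   # 10% per-capita decline
                          "industry": 1.5,       # medium reduction scenario
                          "irrigation": 1.3}

print(project_abstraction(base_use, activity_growth, specific_use_reduction))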
Specific features of air temperature and atmospheric precipitation changes
In the first three decades of the 21st century the mean annual air temperature in the Volga and Don river basins is expected to rise by 1.4-2°C and 1.3-1.5°C, respectively (the first value corresponds to the A2 scenario and the second to the B1 scenario). According to the scenarios, the mean annual atmospheric precipitation will increase in the Volga basin by 32 mm (the A2 scenario) and by 24 mm (the B1 scenario), and in the Don basin by 10 and 13 mm, respectively, which is within 5% of recent values for the Volga and 2% for the Don. The intra-annual distributions of the scenario changes in air temperature and atmospheric precipitation in the Don basin were quite similar for the A2 and B1 scenarios, whereas for the Volga basin they were substantially different.
Main trends for river runoff changes
Considering the climate change scenarios mentioned above, the mean annual Volga River runoff is expected to change only slightly under the B1 scenario, but can increase by more than 10% under the A2 scenario, whereas the annual runoff in the Don basin remains almost unchanged under both the A2 and B1 scenarios (Fig. 2). The response of the intra-annual runoff structure to the scenario climate changes is also quite different for the Volga and Don basins. A flattening-out of the flood wave can be expected for the Don River; in contrast, on the Volga River the runoff may increase in the month of highest flood runoff, whereas the runoff of the following month can decrease (Fig. 3). The winter runoff can increase both on the Volga and on the Don, but the summer-autumn runoff may be lower than the recent runoff on the Volga and higher on the Don.
SCENARIO CHANGES IN THE CHARACTERISTICS OF WATER MANAGEMENT SYSTEMS
It is shown that further retaining the existing specific water consumption rates in the Volga and, in particular, the Don basins is inadmissible, since under any scenario this imposes an excessive load on the water elements of the environment, mainly on river runoff.
Under the most favourable scenario of economic development with the current specific water consumption retained, water abstraction, as compared to the existing situation (Fig. 2(a)), can increase by 2.7 to 3 times and reach 28 and 74% of the mean annual runoff in the Volga and Don basins, respectively (Fig. 2(b)), which is inadmissible with respect to the water economy and ecology. However, a level of water abstraction close to the current one can be retained (Fig. 2(c)) with specific water consumption reduced by a factor of 1.5-1.6 and moderate rates of economic development.
Reduction in specific water consumption based on known technological solutions, primarily those intended to avoid non-productive water losses, will allow a substantial decrease in the major water consumption indices. Moreover, with one of the scenarios of economic development and the maximum possible introduction of new technology, this will allow the anthropogenic load on water resources to be lower than or approximately equal to current levels, with a significantly higher standard of living attained.
CONCLUSION
The proposed ensemble approach to long-term scenario projection of runoff changes in large river basins, related to socio-economic transformation and global climate warming, allows for the estimation of the range of runoff changes in the Volga and Don basins that can be expected in the first three decades of the 21st century.
Under the most favourable scenario of economic development with the current specific water consumption retained, water abstraction can increase by as much as three times compared to its current level and reach a critical level, which will have an adverse effect on the water management system and the environment. However, the current water abstraction levels can be retained with specific water consumption reduced by a factor of 1.5 and with moderate rates of economic development. Under the global climate warming scenarios considered, the mean annual Volga runoff can increase, which, to a certain extent, offsets the negative impacts of water abstraction growth. Meanwhile, the same compensation does not occur in the Don River basin, where negative effects on the regional ecology are expected.
Fig. 1
Fig. 1 Water consumption indices in Russia as related to those in the Volga (a) and Don (b) basins (in km 3 /year) in 1990, 1995, 2000 and 2005.(1) the total water amount abstracted; (2) the total water amount used; (3) the water amount used to meet production needs; (4) the water amount used for domestic water supply; (5) the water amount used for irrigation; and (6) the total sewage amount discharged.
Fig. 2
Fig. 2 Observed and expected future (2025-2030) water abstraction in the Don and Volga basins, and the projected change in their mean annual runoff in the first three decades of the 21st century under the contrasting A2 and B1 scenarios of global climate warming (as a percentage of the mean annual runoff). t - total water withdrawal, c - consumptive water use
|
2017-12-15T07:28:21.688Z
|
2014-09-16T00:00:00.000
|
{
"year": 2014,
"sha1": "3022949119f9d63c861fe965f724681e31697fab",
"oa_license": "CCBY",
"oa_url": "https://piahs.copernicus.org/articles/364/210/2014/piahs-364-210-2014.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "3022949119f9d63c861fe965f724681e31697fab",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
}
|
84834389
|
pes2o/s2orc
|
v3-fos-license
|
On the minimizing movement with the 1-Wasserstein distance
We consider a class of doubly nonlinear constrained evolution equations which may be viewed as a nonlinear extension of the growing sandpile model of [15]. We prove existence of weak solutions for quite irregular sources by a semi-implicit scheme in the spirit of the seminal works of [13] and [14] but with the 1-Wasserstein distance instead of the quadratic one. We also prove an L1-contraction result when the source is L1 and deduce uniqueness and stability in this case.
Introduction
Given a convex nonlinearity F, and Ω, an open bounded subset of R d, we are interested in (a suitable weak notion of solution for) the following evolution system: supplemented with the zero flux condition, i.e. the requirement that a∇F ′ (u) is tangential to ∂Ω, and constrained by At least formally, (1.1)-(1.2) can be viewed as the limit as q → ∞ of the doubly nonlinear evolution equation: where ∆ q is the q-Laplace operator, ∆ q v = div(|∇v| q−2 ∇v). In the linear case where F ′ (u) = u, this equation arises as a model for growing sandpiles introduced by Prigozhin [15] and very much studied since, see in particular [2], [11], [5], [6], [10], [9], [12] and the references therein. We shall address existence of weak solutions to (1.1)-(1.2) by a simple constructive Euler scheme reminiscent of the seminal works of Jordan-Kinderlehrer and Otto [13] and Kinderlehrer and Walkington [14], but with the 1-Wasserstein distance instead of the more traditional quadratic one. Thanks to this point of view, we will obtain weak solutions for irregular sources f, namely f ∈ L 1 ((0, T ), (C 0,α 0 (Ω)) ′ ). If the source is in fact L 1 in t and x, then the flow of (1.1)-(1.2) defines a contraction in L 1 which implies uniqueness, stability as well as full convergence of the Euler scheme.
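A plausible explicit form of the system just described, written out as a hedged reconstruction (inferred from the flux a∇F′(u), the gradient constraint tied to W1, and the sandpile case F′(u) = u; not necessarily the authors' exact statement), is:

% Hedged reconstruction of (1.1)-(1.2) and the q-Laplacian approximation; not verbatim.
\begin{align*}
&\partial_t u - \operatorname{div}\big(a\,\nabla F'(u)\big) = f
   \quad\text{in } (0,T)\times\Omega, \tag{1.1}\\
&a \ge 0, \qquad |\nabla F'(u)| \le 1, \qquad a\,\big(1-|\nabla F'(u)|\big) = 0, \tag{1.2}\\
&\text{with the doubly nonlinear approximation}\quad
   \partial_t u - \Delta_q F'(u) = f, \qquad
   \Delta_q v = \operatorname{div}\big(|\nabla v|^{q-2}\nabla v\big).
\end{align*}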
The paper is organized as follows. Section 2 is devoted to some preliminaries, the definition of weak solutions and a summary of our main results. Existence is proven via a variational scheme à la Jordan-Kinderlehrer and Otto [13] and Kinderlehrer and Walkington [14] in section 3. In section 4, an L 1 -contraction result that implies uniqueness and stability for an L 1 source f is proved. Finally, section 5 is devoted to some variants and concluding remarks.

2 The PDE and its weak formulation
Preliminaries
It is well-known that the constraints (1.2) are very much related to the 1-Wasserstein distance and the notion of Kantorovich potentials. In the following, we assume that Ω is an open bounded convex subset of R d and T > 0. We denote by Lip 1 the set of 1-Lipschitz functions on Ω, and by Lip := W 1,∞ (Ω) the set of Lipschitz functions on Ω. Given a distribution of order one, g ∈ Lip ′, such that g is balanced, i.e. g := g, 1 = 0, we denote by W 1 the dual semi-norm of g: which, when g is a signed measure g = g + − g − , with g ± probability measures on Ω, is the well-known 1-Wasserstein distance between g + and g − (see [17]). Define , θ is called a Kantorovich potential of g and we denote by K(g) the set of Kantorovich potentials of g, i.e.: For an arbitrary g ∈ Lip ′ 0 , it may be the case that K(g) is empty; nevertheless, K(g) ≠ ∅ as soon as g ∈ Lip 0 ∩ X ′ where X is a space of functions such that the embedding from Lip to X is compact (for instance X = C 0 , C 0,α with α ∈ [0, 1),...).
Using the Fenchel-Rockafellar duality theorem gives the following dual formula for W 1 (g): where the equilibrium condition −div(σ) = g has to be understood in the weak sense i.e. 2) It actually also follows from the Fenchel-Rockafellar duality theorem that (2.1) admits solutions (in (L ∞ ) ′ and not in L 1 in general) whatever g ∈ Lip ′ 0 is, such solutions are called optimal flows.A Kantorovich potential θ ∈ K(g) is related to an optimal flow σ in (2.1) by the extremality relation which, very informally, means that σ is concentrated on the set where |∇θ| equals 1 and is collinear to ∇θ.If by chance σ is L 1 , the previous relation expresses the fact that σ = a∇θ with a ≥ 0 as well as the complementary slackness condition a(1 − |∇θ|) = 0, note also that σ is in some weak sense tangential to ∂Ω because of (2.2).
For an arbitrary g ∈ Lip ′ not necessarily balanced we define With the previous considerations in mind it is natural to interpret the PDE (1.1) coupled with (1.2) as the inclusion whose implicit in time discretization, given a time-step τ > 0, reads as As we shall see later, these conditions appear as the Euler-Lagrange equations for the following Euler implicit scheme à la Jordan-Kinderlehrer-Otto (henceforth JKO) [13] to construct weak solutions but using W 1 instead of the more familiar 2-Wasserstein distance, W 2 (the idea to incorporate the source in an explicit way in the scheme was actually introduced by Kinderlehrer and Walkington [14]).Let τ > 0 be a time step, let us construct inductively a sequence u τ k by setting u τ 0 = u 0 and where (extending f by 0 outside [0, T ] if necessary) From now on, in addition to the assumption that Ω is convex and bounded, we suppose that there exists α 0 ∈ [0, 1) such that which in particular implies that and to make things as elementary as possible we take a power nonlinearity for F : and u 0 ∈ L m (Ω). (2.10) 1 note that this allows a rough dependence in x, not even a measure for instance f (t, .) It then follows directly from the fact that W 1 is lsc for the weak L m topology as well as the strict convexity of F (and the convexity of W 1 ) that the sequence u τ k of the W 1 -JKO scheme (2.5) is uniquely well-defined.We define then two curves corresponding to linear and piecewise constant interpolation: ) We also define the piecewise constant approximation of the source f : (2.12) Note that by construction ) dt so that with (2.8) and (2.10), we have
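For concreteness, a hedged sketch of the W1-JKO step described above (a reconstruction consistent with the text, where the source is incorporated explicitly in the Kinderlehrer-Walkington fashion; not necessarily the paper's exact formula (2.5)) reads:

% Hedged sketch of the W1-JKO step; not verbatim from the paper.
\[
  u^{\tau}_{k+1} \in \operatorname*{argmin}_{u \in L^{m}(\Omega)}
  \Big\{\, W_1\big(u,\; u^{\tau}_{k} + \tau f^{\tau}_{k}\big)
        + \tau \int_{\Omega} F\big(u(x)\big)\,\mathrm{d}x \,\Big\},
  \qquad
  f^{\tau}_{k} := \frac{1}{\tau}\int_{k\tau}^{(k+1)\tau} f(t,\cdot)\,\mathrm{d}t .
\]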
Weak solutions
The notion of weak solution of (1.1)-(1.2) we consider heavily relies on (2.3) and the following (slightly formal) observations.Recall that (2.3) means that F ′ (u) ∈ Lip 1 and for every ξ ∈ Lip 1 , one has Note that giving a pointwise in time sense to this condition would require that ∂ t u ∈ L 1 ((0, T ), Lip ′ ), which will not be guaranteed by the rather weak assumption (2.7).Defining for every k > 0, the truncation map T k : R → R by . and observing that for any These considerations lead to the following definition of weak solutions: and for every θ ∈ Lip 1 and every k > 0, in the sense of distributions.In other words, in the sense of distributions, which implies in particular that t → Ω F (u(t, x)) dx as well as t → u(t, .),θ (with θ ∈ Lip) are BV functions (but not necessarily absolutely continuous).
Remark 2.2.One can see here that the notion of solution we are using in Definition 2.1 is weaker than the standard one which consists in requiring that u ∈ L ∞ ((0, T ), Indeed, by using the same arguments of Lemma 4 of [7], one can prove that if u is such a solution it is also a solution in the sense of Definition 2.1.Indeed, if ∂ t u ∈ L 1 (0, T ; Lip ′ ) one can prove rigorously that (2.15) yields
Main results
Our main results concerning the existence and uniqueness of weak solutions can be summarized as follows.First, existence will be obtained (proof will be detailed in section 3) by convergence of the JKO-scheme: and a vanishing family of stepsizes τ n → 0 such that u τn converges strongly to u in L p ((0, T ), C 0 (Ω)) for every p ∈ [1, ∞) and u is weak solution of (1.1)-(1.2).
Uniqueness will be guaranteed by the following L 1 -contraction result (see section 4) which requires an L 1 assumption on the source: and let u and v be weak solutions associated respectively to the initial conditions u 0 and v 0 respectively, then
Euler-Lagrange equation for the discrete scheme
The fact that the Euler-Lagrange equation of the variational problem in (2.5) is very much linked to an implicit time discretization of (2.3) follows from: Proof.We proceed by duality.Consider the convex minimization problem inf It is easy to see that it admits a (unique by strict convexity of F * ) solution, indeed if z n is a minimizing sequence, it possesses a subsequence that converges strongly in C 0,α 0 (Ω) (and thus also in L m ′ with m ′ = m m−1 the conjugate exponent of m) to some z which obviously solves (3.3) since v ∈ (C 0,α 0 (Ω)) ′ .By Fenchel-Rockafellar Theorem, we also deduce that (3.3) (written as a convex minimization problem on L m ′ observing that Lip 1 is closed in L m ′ ) is dual to (3.1).Moreover, the solution z ∈ Lip 1 of (3.3) is related to the solution u of (3.1) by the extremality relation , then the corresponding solutions u i to (3.1) satisfy (3.4) Proof.It follows from Lemma 3.1, that for i = 1, 2, we have and since Dividing both terms by k, using the fact that At last, letting k → 0 and using Lebesgue's dominated convergence theorem and the fact that F ′ is increasing, we obtain (3.4).
As a consequence, we deduce that the discrete JKO scheme contracts the L 1 distance.Let us indeed consider the same JKO construction (2.5) as before but for two different initial conditions u 0 and v 0 , we denote by u τ k and v τ k the corresponding discrete in time sequences.We then have: Corollary 3.3.The discrete JKO scheme given by (2.5) satisfies the discrete flow equation (2.4) and contracts the L 1 distance (whatever the time step τ > 0 is).In other words, the sequences u τ k and v τ k constructed by the scheme (2.5) corresponding to the initial conditions u 0 and v 0 satisfy ) Now, in order to pass to the limit in (2.4) for the discrete JKO scheme, as τ → 0, we give in this paragraph the main a priori estimates on u τ and u τ .Lemma 3.1 first gives the estimate and since [F ′ ] −1 is C 0,β for the exponent β := min(1, 1 m−1 ), thanks to (2.13), we in fact have the Hölder bound: Using u τ k + τ f τ k as a competitor to u τ k+1 in (2.5), we first have: Thanks to (3.7) u τ k is bounded and thanks to (2.8) The mean-value theorem therefore enables to write which, together with (3.8), yields (3.10) Now, the right-hand side of (3.10) contains a telescopic sum and terms on which we have L 1 bounds thanks to (2.8).Hence, since F ≥ 0 and u 0 ∈ L m we get Next we observe that together with (3.11) and (2.8) we deduce The Euler-Lagrange equation of (2.5) from Lemma 3.1 reads Note that by the very construction of the interpolations u τ and u τ , (3.13) can be rewritten as for a.e..14)i.e.F ′ ( u τ ) ∈ Lip 1 and for every ξ ∈ Lip 1 and for a.e.time one has As already observed, given k > 0 and θ ∈ Lip 1 , ξ := F ′ ( u τ ) − T k (F ′ ( u τ ) − θ) belongs to Lip 1 , hence we have Now we observe that if t ∈ (kτ, (k + 1)τ ) by the strict convexity of F and the fact that With (3.16), this yields (3.17) Our aim now is of course to pass to the limit τ → 0 in (3.17).We first have: Proposition 3.4.There exist a vanishing sequence of time steps τ n → 0 as n → ∞ and u ∈ L ∞ ((0, T ), C 0 (Ω)), such that setting u n := u τn and u n := u τn one has: Proof.Thanks to (3.7), u τ is bounded in L 1 ((0, T ), C 0,β ) and ∂ t u τ is bounded in L 1 ((0, T ), Lip ′ ), since the embedding C 0,β (Ω) ֒→ C 0 (Ω) is compact and the embedding C 0 (Ω) ֒→ Lip(Ω) ′ is continuous (it is actually compact as well...), it follows from the Aubin-Lions-Simon Theorem (see [3], [16]) that {u τ } τ has a cluster point u in L 1 ((0, T ), C 0 (Ω)).For a suitable vanishing sequence of stepsizes we may thus assume that the corresponding sequence u n converges to u in L 1 ((0, T ), C 0 (Ω)) but also (up to a further extraction) that u n (t, .)converges to u(t, .) in C 0 (Ω) for a.e.t.Thanks to the uniform bound (3.7) with Lebesgue's dominated convergence theorem, we deduce (3.18).As for u τ , in addition to (3.7) and (3.12), we observe that where the last rightmost inequality follows from (3.12).Since u n obviously converges to u in L 1 ((0, T ), Lip ′ ), we deduce from the latter inequality that u n is relatively compact in L 1 ((0, T ), Lip ′ ), together with (3.7), the fact that the embedding C 0,β ֒→ C 0 is compact, that the embedding C 0 ֒→ Lip ′ is continuous and Lemma 9 in Simon [16], we can conclude that up to further extractions, u n converges to u in L 1 ((0, T ), C 0 (Ω)).Again, we may also assume as well that u n (t, .)converges to u(t, .) 
in C 0 (Ω) for a.e.t.This implies that F ′ ( u n (t, .))converges to F ′ (u(t, .)) in C 0 (Ω) for a.e.t which in particular implies (3.21).Thanks to (3.7), for every α ∈ [0, 1), F ′ ( u n (t, .)) is relatively compact in C 0,α and thus converges to F ′ (u(t, .)) in C 0,α for a.e.t, the L p ((0, T ), C 0,α (Ω)) convergence in (3.20) thus simply follows again from the uniform bound (3.7) and Lebesgue's dominated convergence theorem.
Proof of theorem 2.3
We are now ready to prove our main result which in particular implies existence of weak solution of (1.1)-(1.2) via convergence of the JKO scheme (2.5), namely Proposition 3.5.The limit function u from Proposition 3.4 is a weak solution of (1.1)-(1.2).
Proof.Let θ ∈ Lip 1 and φ ∈ C 1 c ([0, T ), R + ), multiplying (3.17) by φ and integrating by parts in time (observing that Ω u τ 0 T k (F ′ (s) − θ) ds is absolutely continuous) we first have: For the last term in this inequality, we remark that it can be rewritten as It follows from proposition 3.4 that Remarking then that φ τ −φ L ∞ → 0 as τ → 0, thanks to (2.7), and Lebesgue's dominated convergence theorem, we get Taking τ = τ n , letting n → ∞ and using proposition 3.4, we thus easily deduce that u is a weak solution of (1.1)-(1.2).
Remark 4.2.Exactly the same proof as above, gives the following stability result for weak solutions u 1 and u 2 associated to different (L 1 ((0, T ) × Ω)) sources, respectively f 1 and f 2 : and in particular
Variants and concluding remarks
We have proposed an elementary Euler scheme à la JKO with W 1 to deal with nonlinear evolution equations of the form (1.1)-(1.2) and addressed stability and uniqueness issues thanks to an L 1 -contraction argument.We presented the L 1 -contraction directly at the level of the PDE, but another approach, leading to the same conclusion, would have been to consider contraction at the discretized in time level (in a similar way as the estimate of Proposition 3.2) and conclude by the classical semi-group theory in Banach spaces of Crandall and Liggett [8].Indeed, thanks to Proposition 3.2, it is possible to handle the evolution problem (1.1)-(1.2) by using the classical semi-group theory in the Banach space L 1 (Ω), whenever the source term is L 1 in space.In particular, one sees that (4.2) is closely connected to the notion of integral solution in the sense of non linear semi-group theory in L 1 (Ω).If the source term is regular enough, we believe that it is possible to prove the existence of a weak solution in the standard sense (cf.remark 2.2).Thanks to remark 2.2, this solution coincides with ours and the uniqueness holds true if the source term remains in L 1 .Let us stress the fact that the W 1 -JKO scheme is constructive.We indeed believe that since the scheme consists in a sequence of relatively simple convex minimization problems, it is well suited for numerical purposes but we leave this aspect for future research.
An easy extension of the W 1 -JKO approach concerns the case of a reaction term in the right-hand side, i.e. then, thanks to (5.2), one can obtain similar estimates as in section 3 to deduce convergence of the scheme (5.3) as τ → 0 + to a solution of (5.1)-(1.2). If, in addition, f is Lipschitz, then it follows directly from (4.5) and Gronwall's Lemma that we also have uniqueness and stability in L 1 . To make things simple we have considered a power convex nonlinearity for F , but this is not really essential; what is important is that F ′ is a homeomorphism. An interesting limit case, out of the scope of the present analysis, is when F ′ is a general monotone graph, possibly set-valued or empty-valued, such as in the compression molding model of Aronsson and Evans [1].
We have also left unanswered two questions that seem natural to us. The first one is what happens if the source term f is only L 1 ((0, T ), Lip ′ ): can one expect convergence of the JKO scheme, and more generally, does there exist a weak solution to (1.1)-(1.2) in this case? The second one is the uniqueness of weak solutions to the Cauchy problem when the source is not L 1 ((0, T ) × Ω) but only L 1 ((0, T ), Lip ′ ) or L 1 ((0, T ), (C 0,α 0 (Ω)) ′ ); we actually suspect that uniqueness is false in such irregular cases but have not found any counterexample.
|
2019-01-08T12:21:48.142Z
|
2018-10-01T00:00:00.000
|
{
"year": 2018,
"sha1": "68e692c36dfd311027ce4ab4ae34970d7b121829",
"oa_license": "CCBYSA",
"oa_url": "https://basepub.dauphine.psl.eu/bitstream/123456789/17346/2/cocv170041.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "565aa950d4e62121e3d17de1880d068602d290aa",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
}
|
59574878
|
pes2o/s2orc
|
v3-fos-license
|
Verbal Groups of Telic Action in Albanian Language
In this article I introduce and analyze the syntactic behaviour (compatibility and restrictions) of achievement and accomplishment verbs in standard Albanian, according to Aktionsart. Aktionsart is a system of classification of verbs into verbal classes morphologically distinct from each other, in which different values of space, quality, etc. are added to the basic meaning of the verb. Accomplishments and achievements in Albanian have comparable actional meaning and syntactic behavior, such as to justify their inclusion in the class of telic verbs. A telic verb is one which presents an action or event as being completed in some manner. On the other hand, these two subclasses of telics are also characterized by a series of distinctive features that lead us to draw a certain distinction between them. An accomplishment verb expresses that something or someone has undergone a change of state as the result of the completion of an event. An achievement verb, on the other hand, expresses an instantaneous action that changes the state of the subject. By using the categories and procedures of textual linguistics I focus on the semantic and syntactic features of some groups of verbs.
(He/She) finished speaking / (He/She) stopped collecting all the things / (He/She) finished the chat.
Verbal groups of resultative action
The appropriate expression of resultative verbal action is realized through syntactic tools. Taken separately, these verbs do not belong to the class of durative verbs, but to another class, as will be shown later.
The groups of words of the type mbarova së foluri 'I finished speaking', zemra i pushoi së rrahuri 'his heart stopped beating', vajza përfundoi së shkruari 'the girl finished writing', etc. have a clearly phraseological character. In these groups of words the first element is a terminative verb (mainly mbaroj 'finish', pushoj 'stop', përfundoj 'conclude/finish'); however, it is the second element that plays the main role in the general meaning of the word group, while the first is used more as a verb with aspectual value capable of expressing the conclusion of the action expressed by the second term: -mbaroj 'finish' (pushoj 'stop', sos 'finish', përfundoj 'conclude/finish') (usually in the simple past, less often in the other tenses of the indicative, and even less in the present tense of the conjunctive) + a neuter noun in the ablative, expressing the ending of the action.
The elder doesn't cease swearing / He couldn't stop spending / The woman cried and couldn't stop swearing / You should know that I haven't stopped thinking about you.
In addition to the aforementioned groups of words with objective relationships and with terminative value, we also find those with causal relationships of the type u lodha së foluri 'I got tired of talking', expressing the value of action intensity - the first verb is used to express the intensity of the action expressed by the second member - such as u lodha së foluri 'I got tired of talking', plasa së qari 'I burst into tears', u ngjira së thirruri 'I grew hoarse from calling', in the sense of: fola aq shumë (sa u lodha) 'I talked so much (that I got tired)', qava aq shumë (sa plasa) 'I cried so much (that I burst)', thirra aq shumë (sa u ngjira) 'I called so much (that I grew hoarse)'. It must be emphasized here that, in addition to the intensity value of the action, the resultative value of these constructions is undeniable: -lodhem 'tire' (plas 'burst', këputem 'fatigue', mekem 'weaken', ngjirem 'grow hoarse') + a neuter noun in the ablative, which expresses the ending of the action. Tired of talking with him/her; When he got tired of crying ... / Tired of going each summer / The girl burst out laughing / The Italians burst out laughing / Therefore he burst out laughing / She broke out crying all day. Any other member of the sentence can be inserted among the elements of these groups of words. The close functional and semantic correlation between the two members of the groups of words of the type in question means that in the sentence they act as a single predicative member - the first member mainly plays the role of a semi-auxiliary verb. As a result, the complementary members, placed after the participial noun in the indefinite ablative, essentially do not belong only to the second member but to the whole group of words, and therefore the terminative value is expressed by the entire group of words and not by its individual elements.
In addition to the groups of words analyzed above, there are also other types in which the same verb is repeated in the simple past: -verb + conjunction sa 'how much' + verb [5] Punuan sa punuan, pastaj zunë të bisedojnë / Ai jetoi sa jetoi në shtëpinë tonë …/ Qeshi sa qeshi dhe iku.
Worked as worked, then began to talk / He lived as he lived in our house ... / Laughed as laughed and ran away.
Gave and gave and decided to give up. The examples in [5] indicate that after a certain continuation the action ends, while the cases in [6] and [7] indicate that in the end the intensive attempts are abandoned and a new action has started.
To express the conclusion of the action, phraseological groups with the coordination of synonymous verbs in the simple past, joined by the coordinating conjunction e, are also used: -verb + conjunction e (dhe) 'and' + verb [8] Rinia e saj shkoi e vajti.
The youth went and went.
The expression mori fund 'ended' presents the same value: [9] Mori fund përgjithmonë.
Ended forever.
Until now we have presented and analyzed verbal groups that indicate resultative verbal action. Verbs that take part in these constructions, as we have seen, generally occur in the simple past. In Albanian, it is true that the two forms, imperfect and aorist, provide the aspectual presentation of the action (in continuation, or in summary and completed, respectively), but the form of the simple past does not always specify precisely whether the conclusion of the action has also reached the goal, the ultimate goal. The telic verbs considered here are characterized by an end point, and the actions they indicate necessarily lead to a result, in contrast to non-telic verbs. The distinction between these two groups of verbs does not belong to the formal structure of Albanian and is not expressed by any particular form, so Albanian uses other means belonging to the domain of verbal action. Resultative verbal action in Albanian is expressed through verbal groups (just discussed) and verbal syntagms (durative verbs in the simple past + lexical-grammatical means, where the realization of their meaning is possible).
In relation to this last point, the verbs ha 'eat', punoj 'work', laj 'wash', lexoj 'read', etc. in the simple past alone give no indication as to whether we are dealing with resultative (telic) or continuative (non-telic) verbal action. These semantic-aspectual colourings do not arise from the lexical entry of the verb alone, but from the requirements of the whole predicate. 1
For example, depending on the lexical-grammatical features of the object in the sentence (or in the verbal syntagm), the verb may pass from telic to non-telic: -a given, determined object versus an undetermined quantity - and the presence of a complement in the predicate can also transform a non-telic (unergative) verb into a telic (unaccusative) one: According to the statements made so far, Albanian too provides evidence in favor of the conclusions of many contemporary linguists that the opposition telic/non-telic is primarily semantic-aspectual, highlighted not through the information carried by a single lexical entry but mainly through the requirements of the whole predicate, of which the meaning of the verb is only a part.
1 Arad's work (1995: 215-220) helps us to observe these phenomena in Albanian with regard to the projection of arguments, that is, how the arguments of a predicate are integrated into a syntactic structure. The author defends the view that the lexical entry of verbs such as to run, to eat, etc. alone does not fully qualify them as unaccusative or unergative; what determines whether they are unaccusative or unergative is the syntactic structure in which these verbs occur. Her observations, as well as those of other researchers (Van Valin (1990), Dowty (1991)), point out that the unaccusative/unergative distinction is related to semantic and aspectual quality: unaccusativity is accompanied by a non-agentive, telic feature, while unergativity is accompanied by an agentive, non-telic one.
[7] Pashë ç'pashë dhe ika / Ai bëri ç'bëri dhe iku. I saw what I saw and left / He did what he did and left. The constructions of the type dha e dha 'gave and gave', pa ç'pa 'saw and saw', bëri ç'bëri 'did what he did' are equivalent to u përpoq e u përpoq (u përpoq sa u përpoq) dhe ... 'tried and tried (tried as much as he tried) and ...'
|
2018-12-21T12:49:12.835Z
|
2017-01-21T00:00:00.000
|
{
"year": 2017,
"sha1": "90472618233f7bd9b5d9930b6110615f5f7fa085",
"oa_license": "CCBY",
"oa_url": "http://journals.euser.org/index.php/ejms/article/view/1681/1667",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "90472618233f7bd9b5d9930b6110615f5f7fa085",
"s2fieldsofstudy": [
"Linguistics"
],
"extfieldsofstudy": [
"Psychology"
]
}
|
255013499
|
pes2o/s2orc
|
v3-fos-license
|
Does team diversity really matter? The connection between networks, access to financial resources, and performance in the context of university spin-offs
University spin-offs (USOs) are an important driver for innovation, along with economic and social development. Hence, understanding which factors help them perform successfully is crucial, especially regarding their peculiarities in a scientific environment. This study focuses on essential factors such as team composition and diversity in USOs in the biotech sector in 64 founding teams in Switzerland and Germany. By identifying the team composition, and going beyond the usual team characteristics, along with checking in parallel for network and financing effects, the paper adds empirical evidence to the ongoing debate if and how team diversity in USOs affects the performance of this special group of newly founded firms. We test our hypotheses with the partial least squares method (PLS). Our results from the mediation model show how the diversity of teams is related to networks and financial resources and affects the performance. In addition, our study reveals the direct and indirect effects of team diversity on success in USOs. This way we contribute to the ongoing discussion on performance investigating the sources of team effects more in detail.
Introduction
In the last two decades, research on team composition and the effectiveness of university spin-offs (USOs) in the entrepreneurship and management literature has increased (e.g., Cohen and Bailey (1997), Mathieu et al. (2008), Klotz et al. (2014), Hahn et al. (2019), Civera et al. (2020), Civera et al. (2019b)), due to the fact that USOs in knowledge-and technology-based industries have become an important wealth-creating factor as vehicles of technology transfer (Shane (2004)): they are often considered the cornerstone of innovation, growth, and social welfare by commercializing research results (Vohora et al. (2004), Hogan and Zhou (2010), Bolzani et al. (2014)) and by altering existing sectors or establishing new ones (Breznitz et al. (2008)). Pushed by policy support measures, a growing venture capital (VC) industry, and an increasing interest of researchers themselves (Mustar et al. (2008), Lam (2010), Venkataraman (2004), Horta et al. (2016) from a push perspective), there has been a substantial rise in the creation of USOs in the USA, Europe, and other industrialized countries, representing the majority of new ventures, as is characteristic in biotechnology (Bonardo et al. (2011)).
To understand better the drivers of performance of USOs, we have to define them: Compared to regular start-ups, USOs are founded by academics and researchers who transfer technology or technology-based ideas and inventions developed within a university to the private sector with the aim of transforming scientific findings into marketable processes and products (Helm and Mauroner (2007), Steffensen et al. (2000), Walter et al. (2011)).
However, does team diversity really matter in this context? The question of whether team diversity, that is, team heterogeneity or homogeneity, affects performance in USOs is still unanswered, and a clear effect cannot be found (Mathisen and Rasmussen (2019); Nikiforou et al. (2018), Chowdhury (2005)). Thus, there is an ongoing and still unresolved debate on whether heterogeneous teams are more successful than their homogeneous counterparts, or vice versa (e.g., Klotz et al. (2014)). Even though some fruitful empirical insights exist concerning team diversity in general (Clarysse et al. (2007a), Grandi and Grimaldi (2003)) and in USOs in particular (Ben-Hafaïedh et al. (2018)), the question about the most effective and promising team composition in USOs remains unanswered due to a paucity of research regarding team composition (Markman et al. (2008), Ferretti et al. 2018) and performance (Czarnitzki et al. (2014), Meoli et al. (2018)). Current research confirms that USOs often do not outperform innovative start-ups (Visintin and Pittino (2014), Civera et al. (2019a, b), Siegel and Wright (2007), Wright et al. (2007)). To gain a better understanding of performance, a growing number of studies analyze either organizational issues (institutional aspects like incubators), environmental settings (e.g., financing and social capital) (Audretsch et al. (2016), Ferretti et al. (2019)), or the characteristics of the founders (Ben-Hafaïedh et al. (2018)), such as human or social capital, team size, and team composition (Ferretti et al. (2018), Huynh et al. (2017), Huynh (2016)), but they could not find consistent results (see the overview of Mathisen and Rasmussen (2019)).
With this study, we want to link our research questions with the gaps of the current research and follow the advice of Mathisen andRasmussen (2019: 1909), explaining that: "The USO context is also well suited to study how new venture teams are able to connect with other actors that can provide access to the resources necessary to start, develop and grow a new venture. Promising theoretical perspectives include identity processes (Powell and Baker 2017) and social networks (Leyden et al. 2014), because these can go beyond the surface characteristics and structures of teams and investigate the sources of team dynamics. Because USOs typically relay on many different actors in their development, research that shed light on the relationship between USO teams and their support networks or ecosystem, would be of high practical relevance." Thus, finding an answer to this open research gap in the performance factors of USOs, especially the team composition, appears to be fairly challenging. Therefore, it could be important to work with data restrictions to analyze specific context factors in different kinds of start-ups, to get closer and more thoughtful insights in this unique empirical context, and to explore also questions of broader theoretical interest (i.e., in regard to general team composition and management (Fini et al. (2019)). This paper follows this approach by focusing on USOs as a very peculiar kind of newly started businesses, because they are located between a scientific and commercial context and have to deal with specific challenges (Visintin and Pittino (2014), Knockaert et al. (2009), Hahn et al. (2019)). To do so, "…will also improve the understanding of USOs performance by bringing in a broader set of theoretical perspectives…" (Mathisen andRasmussen (2019: 1917). Thus, we work with data from USOs in the life science industry, namely, the biotechnology sector, in Germany and Switzerland. This endeavor might help develop reasonable measures for team diversity variable(s) and find other models, such as the rarely used mediation models, that might lead to more satisfactory results (e.g., Ensley and Hmieleski (2005)) in this specific, highly innovative, and challenging context (i.e., Meoli et al. (2018)).
Until now, performance factors are often tested separately and in different ways, but rarely in interactions with one another (Ilgen et al. (2005)). Moreover, the team composition is commonly focused only on single aspects concerning diversity (either human or social capital tested with one variable or by age (Mathisen and Rasmussen (2019)); hence, a high probability for measurement errors and biased results exists (Carpenter et al. (2004)). These studies have asserted that a heterogeneous team is mostly conducive to the performance of young and developing USOs (Ben-Hafaïedh et al. (2018), Huynh et al. (2017), Huynh (2016)) and that the specific type of diversity matters, such as the proportion of academics and nonacademics in a spin-off. Therefore, less or controlled diversity might enhance performance (Ferretti et al. (2019), Ferretti et al. (2018), in parts also Knockaert et al. (2011), Visintin and Pittino (2014), Hahn et al. (2019)). Team composition is rarely tested in combination with other typical success factors of USOs embedded in the theory approach of the resource-based view, for example, by either the characteristics of the firm (resources including founders, strategies, capabilities, characteristics, initial competence endowments, sufficient or diverse human, social, and technological knowledge) , Colombo and Piva (2012), Cho and Sohn (2017), Hayter et al. (2017)) or other external factors (relationship with parent organizations, external supports) (Hossinger et al. (2019), Shane (2004)).
Therefore, this paper focuses on the effects of team diversity on performance in USOs, going beyond the general literature on team composition by combining this important factor of the involvement of multiple founders (e.g., Hahn et al. (2019)) with other imprinting factors, such as the founding team's network (Florin et al. (2003)) and access to financial resources (Hayter (2013)), trying to contribute new insights to the debate on USO performance (Hossinger et al. (2019) offer a more general overview), following the advice of Nikiforou et al. (2018) that "…also financing networks are very relevant largely discussed topics in the literature on USO teams…" Wrapping this up, this paper stresses the following aspects: (1) team composition as a determinant of USO performance; (2) the puzzling relationship between team diversity and performance in USOs from both a theoretical and an empirical perspective (why is diversity good, and why is it bad, for USOs?); (3) contrasting the findings and discussing biased measures of diversity; and finally (4) the need to introduce an overall measure of diversity and to consider mediators that explain the mechanisms through which diversity leads to performance, in order to reconcile the puzzles of the diversity-performance relationship. To do so, this paper aims to capture a broad approach to diversity issues in teams and to understand the direct and indirect effects of this diversity on USO performance. Moreover, this study generates new insights into the interrelation of USO team composition and other success factors, such as social capital, networks, and access to finance, with performance. Moreover, we enlarge this research by focusing on biotech USOs, thereby delivering specific contextual insight regarding one field in two different national environments (Switzerland, Germany). The idea that team diversity matters in USOs because it helps to access a variety of resources is another contribution to the literature with some important managerial and practical implications (Diánez-González and Camelo-Ordaz (2019)). One typical problem of USOs, in fact, is the inability to transition from the university to the business world; this feature makes them relevant for questions related to mainstream management (transfer) research (Fini et al. 2019).
Our data allow us to shed light on the interrelation among team diversity, success factors such as social capital and access to finance, and performance for USOs using a suitable research method (PLS). The results show that team diversity is essential for the firm's network, enhances the possibility of procuring finance, and generates an indirect, significantly positive influence on performance. Additionally, team diversity has a positive direct impact on access to financial resources, which in turn leads to higher firm performance. Therefore, our results for USOs in biotechnology suggest choosing a more heterogeneous team composition, whether initiated by the founders themselves or by universities and other public supporters who help to set up diverse teams over time, and considering other success factors in parallel. Based on the results of this study, team diversity seems to overcome the typical difficulty of accessing resources and gaining credibility which inherently characterizes USOs and hampers their success (Rasmussen et al. 2011).
The rest of this paper is organized as follows: In the next section, the theoretical background and effects of team diversity on firm outcomes, especially performance in USOs, are discussed. Then, we outline and develop our hypotheses along with the prior discussion thoroughly, followed by presenting our sample, data, and the chosen empirical method. Finally, we discuss our results and reflect them in relation to the current literature and offer some implications for future research, policy makers, and managers of universities.
Upper echelon theory and team effectiveness frameworks
We follow Roberts (1991), Hossinger et al. (2019) and Hahn et al. (2019), who suggest that the team of founders plays a critical role in shaping USO performance (Ferretti et al. (2018), Hesse and Sternberg (2017)) due to their specific organizational conditions, such as combining science and commercialization (Knockaert et al. (2011), Mathisen and Rasmussen (2019)). Most team diversity research is based upon upper echelon theory (Hambrick and Mason (1984), Hambrick (2007)), theorizing that the management strategy and firm success or performance primarily depend on the composition, characteristics, and demographics of the top management team, and that this effect is even stronger for smaller new companies compared with big ones (Greiner (1998), Ensley et al. (2006)). This is due to a primary lack of organizational structures in new venture firms, which allows greater latitude for the entrepreneurial team and therefore a stronger influence on firm performance.
The analysis of direct effects and critical mediating mechanisms (indirect effects) together with the other potential success factors of USO performance is needed to uncover the relationship between team diversity and team outcomes. The upper echelon research on team diversity puts an emphasis on the direct effects of team composition on performance instead of indirect effects (Ilgen et al. (2005)). The analysis of direct effects alone is not satisfactory enough to open the black box between team inputs and performance (Klotz et al. (2014), Carpenter et al. (2004)). Meanwhile, in organizational behavior research, the relationship between team diversity and team outcomes is explained by the input-process-outcome (IPO) framework (McGrath (1964)) and the input-mediator-outcome (IMO) framework (Ilgen et al. (2005)). These team effectiveness frameworks provide the foundation for entrepreneurship researchers to develop their studies about the relationship between teams and outcomes and are more capable of explaining this relationship. It must be clarified that these frameworks exclusively use mediation models to analyze the effects of team composition. The IMO framework, which constitutes the advanced IPO framework, provides that outcomes (O) are the result of inputs (I) and mediators (M) (for a detailed explanation, see Ilgen et al. (2005)). Following Klotz et al. (2014), these inputs consist of prior experience, social capital, personality, and general ability. The mediators are team processes (transition processes, interpersonal processes, action processes) and emergent states (collective cognition, cohesion, team confidence, psychological safety, and affective tone), while the outcomes could be sales growth, profitability, number of employees, innovativeness, satisfaction, and well-being. Our approach in this paper is to follow the requirement of a mediation model to explore the relationship between team diversity and performance in USOs, whereby the mediators used are the most common critical success factors instead of the emergent states and team processes highlighted in the IMO and IPO models. Therefore, the upper echelon approach and the IPO and IMO frameworks represent the theoretical foundation in two ways: to identify prior research results regarding team diversity effects in general and for USOs in particular, and to develop our hypotheses and empirical testing (see the following section).
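As a concrete illustration of how such an indirect effect can be tested, the sketch below estimates a simple mediation (diversity to mediator to performance) with ordinary least squares and a bootstrap confidence interval. It uses only NumPy with synthetic placeholder data; it is not the PLS estimation employed in this paper, and the variable names are assumptions for illustration.

# Bootstrapped indirect effect for a simple mediation model
# (diversity -> mediator -> performance). Illustrative sketch with synthetic
# placeholder data; the paper itself uses PLS on its survey data.
import numpy as np

rng = np.random.default_rng(0)
n = 64
diversity = rng.normal(size=n)
mediator = 0.5 * diversity + rng.normal(size=n)               # e.g. network access
performance = 0.4 * mediator + 0.1 * diversity + rng.normal(size=n)

def ols_slopes(y, X):
    """Coefficients of y ~ constant + X (intercept dropped from the result)."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

def indirect_effect(d, m, p):
    a = ols_slopes(m, d[:, None])[0]                          # diversity -> mediator
    b = ols_slopes(p, np.column_stack([m, d]))[0]             # mediator -> performance | diversity
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                               # resample cases with replacement
    boot.append(indirect_effect(diversity[idx], mediator[idx], performance[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(diversity, mediator, performance):.3f}, "
      f"95% CI [{lo:.3f}, {hi:.3f}]")

A confidence interval excluding zero would indicate a significant indirect (mediated) effect, which is the logic applied to the network and financing mediators in the model tested here.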
Teams, team diversity, and outcome
When dealing with USOs, a common finding in the literature is that they are mostly founded by teams (Roberts (1991), Knockaert et al. (2011), D'Este et al. (2012), Hayter (2013, Visintin and Pittino (2014), Ciuchta et al. (2016), Huynh et al. (2017), Ferretti et al. (2018). Before diving into the discussion and analysis of prior studies analyzing the impact of team composition on USO performance, we provide a definition of a team in an organizational context. According to Mathieu et al. (2008) and Kozlowski and Bell 2003: Teams are "collectives who exist to perform organizationally relevant tasks, share one or more common goals, interact socially, exhibit task interdependencies, maintain and manage boundaries, and are embedded in an organizational context that set boundaries, constrains the team, and influences exchanges with other units in the broader entity." (Kozlowski and Bell (2003), p. 6). Additionally Ensley et al. (1998) stated that entrepreneurial team members must have (1) established the firm, (2) a financial interest, and (3) an influence on strategic choices. Ucbasaran et al. (2003) corroborated these aspects of founding teams. In our analysis, we follow Kozlowski and Bell (2003) and Ensley et al. (1998) and concentrate on teams and team members fulfilling these requirements.
To start, the effects of team diversity on firm performance in general are the topic of a controversial debate (Webber and Donahue (2001)). This is not surprising, given the two diametrically opposed theories of Byrne (1971) and Horwitz (2005). The latter emphasized the superiority of heterogeneous teams, while Byrne (1971) argued that homogeneous teams perform better because, according to the similarity-attraction paradigm, homogeneity pushes team cohesion, motivation, and interaction among team members. On the contrary, on the basis of the theory of cognitive resource diversity, Cox and Blake (1991) and Horwitz (2005) demonstrated that heterogeneous teams are more powerful than homogeneous ones because they are more innovative and creative and able to solve problems more easily due to the strength of their diversity. However, in the end, most studies have claimed that USOs with a founding team do outperform companies started by a single founder (Roberts (1991), Ensley and Hmieleski (2005)). Some articles have confirmed these findings, emphasizing the importance of a homogeneous and balanced composition and structure of a team, while others have pointed to the diverse expertise of heterogeneous teams in USOs as the best way to achieve success in start-ups in general and USOs in particular, because of their specific challenges (Mathisen and Rasmussen (2019), Knockaert et al. (2011)).
University spin-offs and diversity of teams: positive and negative effects
A growing body of literature focusing on the effects of team diversity on performance for USOs (e.g., Hahn et al. (2019)) has three main directions: the founder or inventor, the team, as well as the skills and networks (Mathisen and Rasmussen (2019), Diánez-González and Camelo-Ordaz (2019)). In this paper, we focus on the effects of team composition, because this literature is still somewhat scarce and "one-dimensional." Earlier studies have focused on only one or two dimensions of diversity or have measured diversity only by team size and its effect on USO performance. Team diversity is commonly measured by human capital in terms of team members' previous industrial or management experience, with industrial experience serving as a key predictor for firm performance (Diánez-González and Camelo-Ordaz (2016), Delmar and Shane (2006)). Several studies have confirmed that the composition of a team significantly improves USO performance when complementary human and social capital, such as business management expertise or market and technological knowledge, is present in the founding team, explaining that team heterogeneity is important for success without controlling for other factors, such as team size or depth of diversity (Toole and Czarnitzki (2009), Gimmon and Levie (2010), Wennberg et al. (2011), D'Este et al. (2012), Borges and Jacques Filion (2013), Criaco et al. (2014), Fernández-Pérez et al. (2014), Nielsen (2015), Ciuchta et al. (2016), Helm et al. (2018)). For instance, Diánez-González and Camelo-Ordaz (2016) validated that recruiting non-academic individuals into the management team of USOs could outweigh the missing experience of the academics; hence, they are in favor of the heterogeneous combination of these skills and manager types to enhance USO performance. The same holds true for the study of Huynh et al. (2017) showing that the capabilities of the founding team have a positive influence on USO performance. Kilduff et al. (2000) similarly found that, in the case of demographic diversity, heterogeneity in the age of founding team members has a positive influence on performance (units sold, market share, other performance indicators), as do differences in tenure (Jehn and Bezrukova (2004)). Furthermore, Eisenhardt and Schoonhoven (1990) affirmed that a growing team size leads to higher sales growth or productivity (Campion et al. (1993), Magjuka and Baldwin (1991)). McGee et al. (1995) and Eisenhardt and Schoonhoven (1990) showed that functional diversity (industry and work experience) affects performance positively, as does heterogeneity in the proportion of different job categories (Magjuka and Baldwin (1991)). Regarding personality traits, Mohammad and Angell (2003) and Neuman et al. (1999) showed that diversity in team extraversion and emotional stability affects performance positively.
Conversely, studies like Shane and Stuart (2002) have delivered the insight that the industry experience of a USO founding team has no effect on the survival of the USO as a success measure. Instead, they show that, when some team members have different and higher levels of industry experience, the time to market is shortened if the USOs survive the seed stage. Furthermore, entrepreneurial experience has no additional impact on the new venture success of USOs (e.g., Nerkar and Shane (2003)). For other measures, too, such as team diversity in age, religion, and family background, the results are not robust in delivering positive or negative performance effects (Roberts (1991)), but are rather inconsistent. Therefore, when considering the specific USO context even more, it has to be emphasized that the business idea is mostly created around a technological idea or very specific knowledge, often embedded or tacit in the head of one scientist or a team doing research on this issue (Markman et al. (2008), Clarysse et al. (2007b)). Thus, when starting a business, these team members are commonly needed to transfer the invention into an innovation and a market-ready prototype or product (e.g., Knockaert et al. (2011), Di Gregorio and Shane (2003), Zucker and Darby (1998)). Therefore, this literature typically finds that a homogeneous team might be more promising due to the overlap of knowledge and (technological) understanding (Knockaert et al. (2011)). Amason et al. (2006) found that overall the increase of team heterogeneity leads to a decrease in new venture performance, processes, and effectiveness, as does functional diversity (Carpenter (2002), Jehn and Bezrukova (2004), Pitcher and Smith (2001), Knight et al. (1999), Ancona and Caldwell (1992), Hambrick et al. (1996)). Diversity in extraversion as a personality trait inhibits team processes (Mohammad and Angell (2004)), and diversity in neuroticism is detrimental to performance (Halfhill et al. (2005)). Specifically, with regard to demographic aspects, such as race/ethnicity, gender, age, tenure, and even education, Jackson et al. (2003), Kirkman et al. (2001), Leonard et al. (2004), Li and Hambrick (2005), Mohammad and Angell (2004), Simons et al. (1999), Timmerman (2000), Townsend and Scott (2001), and Watson et al. (1998) verified that these diversity measures diminish performance as well and make processes more complicated. Webber and Donahue (2001) as well as Campion et al. (1993) found no effect of demographic diversity or skill heterogeneity on performance. Thus, we can observe fairly inconsistent results (see again Mathisen and Rasmussen (2019)).
Other research has verified that, in this case, the practical expertise is missing to commercialize the invention successfully. Therefore, a mix of both types of expertise would be better in a more heterogeneous team (Hahn et al. (2019), Knockaert et al. (2011), Visintin and Pittino (2014)). Some of these studies show that the balance or rate of diversity has to be controlled for to minimize the problems in heterogeneous teams (Hahn et al. (2019), Knockaert et al. (2011), Visintin and Pittino (2014)). A further solution can be a specific mix of characteristics (Ben-Hafaïedh et al. (2018)). If not, the already discussed negative effects of team heterogeneity will prevail and lead to problems and negative performance in the case of USOs (Ferretti et al. (2018)).
Hence, Knockaert et al. (2011) found in their in-depth qualitative analysis of nine cases of USOs that these spin-offs were mostly founded around the research team and thus were very homogeneous in their technological background and, to some extent, their scientific fields, but at the same time heterogeneous in terms of the age of the members along the hierarchies in the team (experienced researchers and PhDs or PostDocs). This is crucial for the initial success of developing the idea to a marketable level. At the same time, they show that, for further development, bringing in other individuals with more commercial experience is crucial; that is, a heterogeneous team is necessary but in a balanced way so as not to cause misunderstandings. They emphasized that tacit knowledge among the team members is the most important aspect; thus, the homogeneity and mutual understanding of these team members appear to be more crucial. Visintin and Pittino (2014), who analyzed 103 Italian USOs, similarly observed that a team with both academic and non-academic backgrounds fosters the performance of USOs positively (sales and employment growth) but only when the duality of experiences is tempered by other characteristics such as sharing a common background (same university or field) and a smaller team size (Visintin and Pittino (2014)). Ferretti et al. (2018) also showed quite distinctive and differentiated results in their panel study on 138 Italian USOs from 1999 until 2009. They found some positive effects of academic and non-academic heterogeneity in teams on performance (sales). The ratio of academic to non-academic team members in a USO is an important success factor. Furthermore, Ben-Hafaïedh et al. (2018) showed with their analysis of 165 USOs in Italy from 2000 to 2007 that, for the subgroups of academic and non-academic founders of USOs, the mix or the homogeneity matters for the different performance measures (innovation or sales growth). Thus, for innovation, having a purely academic team appears to make more sense, while for sales growth, the balanced mix and size of the subgroups have a positive impact. The authors find this effect to be moderated by the involvement of either university or commercial stakeholders. Summing up, prior research results indicate that the relationship between team composition and outcomes is unclear. The effects of team composition depend on the input variables, the embedding context (Jackson et al. (2003)), time (how long team members stay together) (Harrison et al. (1998)), and organizational culture (Brickson (2000), Ely and Thomas (2001)). The diverse research results lead to the conclusion that the effect of team diversity on performance is difficult to grasp, and the question arises whether uncovering the relationship between team diversity and performance on a direct level is possible at all, either for general start-ups or for the context of USOs (for an in-depth overview, see Mathisen and Rasmussen (2019)). Moreover, most of these studies take place in an Italian context, focus on USOs from all types of disciplines, and rarely combine team composition with other potential success factors as recommended for further research (see again Mathisen and Rasmussen (2019) or Nikiforou et al. (2018)).
On the basis of this controversial debate about the effectiveness of homogeneous or heterogeneous teams in USOs and their inconsistent results on performance, we hypothesize the following, measuring performance as growth in sales and employees following standard measurement practices in USO research (McKelvie and Wiklund (2010), Mathisen and Rasmussen (2019)): Hypothesis 1. Team diversity in USOs has no direct impact on firm performance.
Team diversity, networks, and performance
We observe a research gap for the analysis of team diversity and its interrelated effects with critical success factors for USOs (Mathisen and Rasmussen (2019)). Some studies on USO performance control for university or commercial stakeholders, for the size of the team, or for technology transfer offices or other institutional settings (e.g., Ben-Hafaïedh et al. (2018)). On the basis of the mentioned literature and the following data, we posit that the access to financial resources and, specifically, team members' networks are the most critical success factors. As we will explain in the following, we assume that these success factors are interrelated with the team composition, specifically the diversity of USO teams. Due to the network success hypothesis and social capital theory (Granovetter (1973), Brüderl and Preisendörfer (1998)) and the fact that informal and formal networks serve to embrace entrepreneurial opportunities (Baron and Tang (2009), Baron (2006), Ozgen and Baron (2007), Florin et al. (2003)), we posit that firm and specifically team member networks are among the most important success factors for new venture firms and USOs. Shane and Stuart (2002) postulated that the direct and indirect contacts of the founding team with venture capitalists in their social network reduce the likelihood of failure. Furthermore, Grandi and Grimaldi (2003) confirmed that the frequency of interaction with externals before founding the firm has an impact on the new venture's network and interaction frequency, which boosts firm performance. Another reason why firm networks serve as a major success factor is that the effect of social capital could be more important than teamwork capabilities (Brinckmann and Hoegl (2011)) and could enhance performance (Vissa and Chacar (2009), Balkundi and Harrison (2006), Walter et al. (2006)). Mosey and Wright (2007) addressed the notion that differences in the existing social capital and networks of academic entrepreneurs help overcome barriers to new venture development. Academics who have business ownership experience are more adept at building relationships with experienced managers and potential equity investors (Mosey and Wright (2007)), which might be helpful and supportive in gaining better performance.
Other studies have confirmed that USO innovativeness and performance are also positively associated with the networks of academic founders in a team with different backgrounds (2015). Thus, team diversity leads to a more diversified and greater network (e.g., Hossinger et al. (2019), Reagan et al. (2004), Burt (1992), Granovetter (1973)). Additionally, networks to other individuals may also enhance the entrepreneurial orientation and performance of USOs (Knockaert et al. (2011), Hayter (2013), Diánez-González and Camelo-Ordaz (2016), Prencipe (2016)). A higher degree of different external networks of team members that are less overlapping should provide more unique information inflows (e.g., Granovetter (1973), Reagan et al. (2004)) and lead to a larger pool of external advisers and more innovation (e.g., Hambrick (1994), Hansen (1999), Alexiev et al. (2010)), which in turn is conducive to a stronger performance of USOs. Vissa and Chacar (2009), Balkundi and Harrison (2006), and Walter et al. (2006) argued that a greater and more diversified network should permit more business activities and therefore enhance USO performance. Moreover, Huynh et al. (2017) and Huynh (2016) found only indirect (positive) effects of USOs' network on performance. Regarding the relevance of networks and social capital, we posit that team diversity has a strong impact on firm networks, and in turn the firm's network has an impact on the access to resources and firm performance (see Hypotheses 2 and 3): Hypothesis 2. A heterogeneous team composition has a positive impact on the USOs' network. Hypothesis 3. USOs' diverse team network has a positive impact on performance.
Team diversity, access to finance, and performance
A positive impact of networks on the access to financial resources was found by Jarillo (1989), Birley (1986), and Starr and MacMillan (1990). In a more recent critical review of networks in the entrepreneurship literature, Hoang and Antoncic (2003) showed that a developed network could be an advantage for spin-offs or new venture firms in obtaining access to financial resources. Furthermore, Brüderl and Preisendörfer (1998) and Zhao and Aram (1995) confirmed that network ties could enhance the access to financial resources. Lindstrom and Olofsson (2001) also validated that USOs suffer from greater difficulties in obtaining finance than start-ups from other origins. Therefore, several researchers have, especially for USOs, searched for supportive factors for obtaining finance and found that the team founders' human capital (commercial experience, technical knowledge, and academic status) (e.g., Huynh (2016), focusing on the impact of the industrial, managerial, and entrepreneurial experiences of founding teams on the financing of USOs) and social capital, such as the number and density or broadness of networks (Nahapiet and Ghoshal (1998), Huynh (2016)), could increase the chances of getting externally financed (e.g., Gimmon and Levie (2010)). Similarly, Shane (2004) and Vohora et al. (2004) described the quality of a USO team's network as an external resource having a strong impact on the financing process (seed, starting, and growth) (Lindstrom and Olofsson (2001)). Effective financing supports USO founders in bringing the idea to market, and thus, this financing has a positive effect on performance (Powers and McDougall (2005)) and growth (Rosman and O'Neill (1993)), as Wright et al. (2006) and Shane (2004) also show for USOs. With regard to the resource-based view (Wernerfelt (1984)), the financial resources of new venture firms or USOs constitute a critical success factor. This leads to the next two hypotheses: that the access to financial resources can be enhanced by the diverse networks of USO teams and that effective financial resources enhance the performance of USOs: Hypothesis 4. The extent of diversity in a team's network has a positive impact on the financial resources of USOs. Hypothesis 5. Financial resources have a positive impact on USO performance.
According to pecking-order theory (Myers and Majluf (1984)), venture capitalists tend to invest in USOs after the seed stage, whereas business angels or universities tend to invest during the seed stage of USOs. Hence, the financing of USOs with venture capital is considered to be the most important funding source. Thus, several studies have analyzed how USOs are evaluated by investors and financing institutions and which evaluation criteria must be fulfilled to obtain financing. One of the most important evaluation criteria concerns the entrepreneurial team (e.g., Silva (2004)). The most frequently mentioned team characteristics are industry experience, leadership experience, managerial skills, and engineering/technological skills that attract venture capital (Franke et al. (2008)). Human capital can serve as a signaling effect; therefore, heterogeneous teams are preferred because of their functional diversity (Franke et al. (2008)). This again is found in other studies such as Huynh (2016), where the capabilities of the USO founding team lead to financing in different stages. The same holds true for the studies of Clark (2008) and Muzyka et al. (1996), where investors require sufficient business skills as the main criterion to finance a USO. Thus, the diversity of a USO team's skills is highly valued by financing institutions (Shane (2004)). These findings lead to the following hypothesis: Hypothesis 6. A heterogeneous USO team composition has a positive impact on the access to financial resources.
Interrelation of team diversity, networks, access to finance, and performance

The previous hypotheses have focused on the direct effects of team diversity, networks, and access to finance as important success factors for the performance of USOs. The indirect paths of our research approach must also be analyzed to understand how USO teams can exploit networks and access to finance to enhance the impact of team diversity on performance. Hence, the mediating effects of networks and access to finance on team diversity are included in the analysis. This has rarely been undertaken in USO research; exceptions include, e.g., Huynh (2016) and Huynh et al. (2017) for networks and team capability. It is important not to ignore possible mediating mechanisms that could explain the impact of team diversity on performance in more depth and shed light on why the direct diversity impact generates inconsistent results. We build a mediation model that is able to investigate direct and indirect effects. We suppose that the direct effect of team composition on firm performance is mediated by the firms' network and financial resources. Why should there exist indirect effects of networks and access to finance that serve to clarify the nature of the relationship between team diversity and performance? We posit that a combined analysis of success factors and their interrelated (indirect) effects can clarify the relationship between team diversity and performance better than the analysis of single direct effects. This can be explained by the fact that USO performance depends on a large number of factors, so that entrepreneurial resources and personal characteristics in a diverse team could be of secondary importance (Stringfellow and Shaw (2009)) if we measure them as direct effects (Huynh (2016), Huynh et al. (2017)). We therefore focus on the indirect effects of the success factors as well, but the selection of these variables must be carefully considered. Shane (2004) and Knockaert et al. (2011) verified that USOs are very specific in terms of their business idea, background, and development, because they are created around research inventions, new solutions, and ideas solving a problem. Thus, at the beginning, having a team of founders with scientific and technological knowledge is crucial to drive this idea into a marketable innovation. At this point of development of a USO, the common understanding of the involved team members as a kind of homogeneity is important (Knockaert et al. (2011)). At the same time, researchers in this field have found that it is important to identify how founding teams in USOs co-evolve with the stages of firm development and that this kind of change might have an impact on USO performance or survival (Clarysse and Moray (2004)), often driven by context or other success factors like financing (e.g., Huynh (2016), Huynh et al. (2017), Vohora et al. (2004), Wright et al. (2006)).
The same holds true regarding the network of USOs, as, e.g., Huynh (2016) and Huynh et al. (2017) found. Their studies highlighted how the social capital and networks of a USO team develop and change over time due to more market contacts and different kinds of involvement of university institutions (e.g., TTOs) and governmental support programs. The composition and diversity of a USO team, and thus the direct effect, might influence these contacts and network relations by bringing in non-academics or academics with commercial and market experience (Clarysse and Moray (2004), Vohora et al. (2004), Vanaelst et al. (2006)). This leads to our Hypotheses 7 (a*b) and 8 (c*d).
Hypothesis 7. The direct effect of team diversity on firm performance is mediated by the firm's network. Hypothesis 8. The direct effect of team diversity on firm performance is mediated by the firm's financial resources.
Summarizing the discussion and bringing together the hypotheses, the mediation model is developed (Fig. 1). Following the former results, discussion, and analysis, we create a model including four latent variables: Team Diversity, Network, Finance, and Performance, which are measured with 31 items (for a detailed explanation, see the next section). The paths between the four latent variables represent the hypotheses in our model. We assume that the variables Network and (access to) Finance are mediators for the effect of Team Diversity on Performance. These hypothesized causal chains, in which team diversity affects financial resources and networks that, in turn, affect performance, are derived from theoretical considerations explained above. Figure 1 shows the model. The paths in the figure are labeled to distinguish easily between direct and indirect effects.
Sample
We obtained the addresses of all USOs in Switzerland and Germany from the address pools of the agencies for biotech companies in both countries. We verified these addresses online and then sent out an online survey in German/Swiss-German and English to contact all existing biotech companies at that time without any sampling or selection. A standardized questionnaire was used and sent to 900 USOs in 2008 (return rate 15%) and subsequently in 2012 and 2013 to keep in touch with the respondents online/via the web. The respondent was always one of the founders or top management team members of the USO. The survey includes 60 questions. Our empirical study consists of 131 USOs in the German and Swiss biotechnology sector, of which 64 were founded by teams and are used for the analysis. Our exclusive focus on biotech USOs has the advantage of avoiding or diminishing the effects of field differences in USO team composition, networks, financing, and performance (e.g., Knockaert et al. (2011)). A total of 78% of the companies are from Germany and 22% from Switzerland. Although a clear distinction of the business activities of the companies in the biotech business is not always possible, we categorized them according to their main business content. A total of 30% of the respondents produce pharmaceuticals; 25% work in genetic engineering, 22% in laboratory testing/innovation, and 14% in medical technology; and 4% produce chemicals and 3% biotech-related software.
The information from 31 items is used to estimate the path coefficients. Each latent variable in the structural model (Fig. 1) is measured by a block of items (measurement models) asked for in our questionnaire. To measure Team Diversity, we use typical items discussed in the theoretical background section. These 10 items capture functional and demographic diversity and personal traits, such as study programs and degrees, doctorates/PhDs, other titles, soft skills (e.g., leadership experience), industry experience, character aspects, contacts, age, nationality, and size of the team, because Visintin and Pittino (2014), Knockaert et al. (2011), Scholten et al. (2015), Criaco et al. (2014), and Gimeno et al. (1997) have already used these measures. The latent variable Finance is measured by asking for the usual financing issues for new ventures and USOs in particular, taking into account the different kinds of financial support USOs can obtain (Shane (2004), Beckman et al. (2007), Zimmerman (2008), Franke et al. (2008), Huynh (2016), Clarysse and Moray (2004)). To measure the firm's network and the social capital of the team members adequately, the latent variable Network is measured by 12 items that consider formal and informal contacts, whereby we focus on strong ties. To reflect the special USO context with new ventures from the biotechnology sector, we design the items correspondingly, which means using network contacts USOs have in common (Shane (2004), Vohora et al. (2004), Ferretti et al. (2019)). On the basis of the assumption from upper echelon theory that firm performance is directly influenced by team effectiveness (e.g., Amason et al. (2006), Brinckmann and Hoegl (2011), Sine et al. (2006)), the Performance variable is measured by usual items from the management and USO performance literature (Unger et al. (2011), Klotz et al. (2014), Visintin and Pittino (2014), Ben-Hafaïedh et al. (2018)). Our study uses the common measures of growth rates for sales or employment (McKelvie and Wiklund (2010), Mathisen and Rasmussen (2019)). Except for the items measuring the performance variable, we use 5-point Likert-type scales for all items, ranging from totally agree to totally disagree, or, for the team diversity construct, ranging from totally homogeneous to totally heterogeneous. Table 1 lists the items, and the Appendix shows the descriptive statistics of the items.
Partial least squares model
Following Carpenter et al. (2004), Ferrier (2001), and Kor (2003), we use structural equation modeling, especially the partial least squares method (PLS) (Wold (1966), Wold (1974)), to test our hypotheses. Carpenter et al. (2004) argued that, if the theoretical construct is top management team diversity, more sophisticated methodologies, such as structural equation modeling, should be used.
"The advantage of such an approach is that measurement error becomes less of a factor and the odds of generating spurious results from single item demographic variables is significantly reduced." (Carpenter et al. (2004), p. 772) Furthermore, we use the PLS method because it has proven capable of handling small-and medium-sized samples (Chin and Newsted (1999), Chin (1998)) where a sample size of 20 observations could be appropriate (Henseler et al. (2009)). As a heuristic rule, Chin (1998) recommended multiplying the highest number of the measured items of one of the constructs in the model with five to obtain the minimum observation requirement for the data. Following this rule, we need at least 10*5 = 50 observations in the data; hence, our analysis with 64 teams can be confirmed as satisfactory concerning sample size. Other reasons why we choose PLS is the absence of distribution assumptions for the data (e.g., Lohmöller (1989)), and testing mediation directly in the model is possible.
Our model in general shows the interaction among the Team Diversity, Finance, Network, and Performance of USOs. These are the latent variables (see Fig. 1) representing the structural model. Network, Finance, and Performance are endogenous variables; Team Diversity is exogenous because this construct is based on team variables that consist of sociodemographic characteristics (age, doctorates, industrial experience, nationality, other titles, study programs, and degrees) given by the socialization and education processes of team members. These sociodemographic variables are given and cannot be influenced by the other model variables. We also include soft factors (character, contacts, and soft skills) and the number of team members. The team composition takes place before the team searches for financing (Finance construct) or firm contacts and partners (Network construct), so that the soft factors and the number of team members are also exogenous. The operationalization of these latent variables can be done by reflective or formative measurement models. Given that Petter et al. (2007) showed that 30% of the measurement models in information systems research are faulty, the use of formative and/or reflective measurement models should be evaluated carefully (for a detailed analysis, see Bollen and Lennox (1991), MacCallum and Browne (1993), Edwards and Bagozzi (2000), Jarvis et al. (2003)). We decide to measure Team Diversity and Finance formatively and Network and Performance reflectively.
As an example, we consider the variable Team Diversity in detail. This variable is operationalized by 10 items that measure the diversity of the observed teams in the data. Instead of an indexing approach that often leads to biased results, the Team Diversity construct in our PLS model shows how the team diversity items influence team diversity, specifically a homogeneous or a heterogeneous team composition. We based our questions in the survey (a) on a thorough literature analysis of diversity variables already in use (Visintin and Pittino (2014), Knockaert et al. (2011), Scholten et al. (2015), Criaco et al. (2014), and Gimeno et al. (1997)) and (b) on testing the meaning of the questions beforehand through interviews and pretests. In contrast to reflective measurement models, the formative indicators cause variance in the construct and can be individually evaluated on the basis of their contribution to the construct (latent variable) by analyzing their path weights and their loadings (Cenfetelli and Bassellier (2009)). The novelty of using this approach for measuring diversity effects is that the PLS model makes it possible to obtain information on the different diversity items during the estimation, minimizing the problem of biased results due to data measured at an aggregated level. Additionally, the effect and the absolute and relative importance of each diversity item can be analyzed. Similar approaches can be found in Talke et al. (2010) and Naranjo-Gil et al. (2008).
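To make the distinction between relative importance (path weights) and absolute importance (loadings) of formative indicators concrete, the following sketch computes both for a formatively specified construct on simulated data. The construct score is simply an externally given composite here, so this only illustrates the interpretation logic (weights from a multiple regression, loadings as zero-order correlations) and not the actual PLS estimation used in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 64
items = rng.normal(size=(n, 4))                  # four hypothetical diversity items
construct = items @ np.array([0.6, 0.4, 0.1, -0.3]) + rng.normal(scale=0.5, size=n)

z = lambda a: (a - a.mean(axis=0)) / a.std(axis=0, ddof=1)
X, y = z(items), z(construct)

# Relative importance: weights from a multiple regression of the construct on all items.
weights, *_ = np.linalg.lstsq(X, y, rcond=None)

# Absolute importance: zero-order correlations (loadings) of each item with the construct.
loadings = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])

for j, (w, l) in enumerate(zip(weights, loadings)):
    print(f"item {j + 1}: weight = {w:+.2f}, loading = {l:+.2f}")
```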
Empirical findings
Fig. 1 Structural model and hypotheses: the paths are labeled with small letters (a, b, c, d, e, and f) representing the direct effects captured by Hypotheses 1 to 6; the indirect effects can be analyzed via Hypotheses 7 and 8, or a*b and c*d, representing the mediation effects.

The empirical results for the model estimated with the PLS method are obtained in a two-step analysis. First, we analyze the results for the formatively measured team diversity construct to obtain a more sophisticated view of the effects of the different team diversity items on the construct and the model. This means we are able to observe (a) the relative importance and (b) the absolute importance of the diversity items for the construct (e.g., Cenfetelli and Bassellier (2009)). Second, we examine the path coefficients between the latent variables to examine the validity of our hypotheses. According to Lohmöller (1989), they must be greater than 0.1 to constitute statistical evidence. Due to the lack of distributional assumptions in PLS models (e.g., Vinzi et al. (2010), Chin and Newsted (1999)), the statistical significance of the measurement model weights and path coefficients is tested with a bootstrapping procedure. Table 2 shows the results for the team diversity construct. Six of the ten items of the team diversity construct are statistically significant at the 10% significance level. The items have positive and negative weights (standardized regression coefficients). A positive path weight indicates a positive impact on team diversity, while a negative path weight implies a negative impact on team diversity. Hence, a positive weight increases the diversity of a team, whereas a negative weight creates none of the heterogeneous effects mentioned in the team composition debate above. These results for the formatively measured team diversity construct show that study programs and degrees, industrial experience, and nationality have a statistically positive impact on team diversity. By contrast, the items doctorates, age, and team members (size) generate a negative impact on team diversity, meaning that they detract from the heterogeneous effect. These negative effects must be discussed carefully. In a formative measurement model, as in the case of team diversity, the measured items are equivalent to predictors in a multiple regression.
Team diversity construct results
The team diversity construct results show how the different diversity items influence diversity in our model for USOs in the biotechnology sector. In addition to the statistically significant items for the diversity construct, the nonsignificant items are also interesting. We cannot observe a significant contribution to the diversity construct by the items other titles, soft skills, contacts and network, and character. The diversity of these items therefore has no relevance for our model, but it might when tested in a different context or with a different method.
Structural model results
The PLS estimation process aims to maximize the correlation between the construct variables (Team Diversity, Network, Finance, and Performance) where the construct values are framed by their formatively or reflectively measured items. Figure 2 illustrates the path coefficients and t-statistics for the structural model following the bootstrapping process, and the Appendix depicts the entire results including the measurement models.
As suggested by the findings in the literature, the impact of team diversity on firm performance is nearly zero. The path coefficient f takes on the value of − 0.058 (t = 0.377). Thus, we cannot observe a direct effect of team diversity, specifically of a heterogeneous team composition, on performance. The direct effect from Team Diversity to Network (path a, Hypothesis 2) is statistically significant with a positive path coefficient (0.421, t = 3.129). Therefore, our assumption that team diversity affects firm networks positively can be confirmed. We observe a positive impact of Team Diversity on Finance (path c); hence, Hypothesis 6 (0.457, t = 2.784) can be confirmed. The access to financial resources is therefore influenced by team diversity within our data.
The impact of the firm's network in our model is represented by Hypotheses 3 and 4. There is a statistically significant positive impact of the firm's network on the access to financial resources (0.438, t = 2.586). Thus, Hypothesis 4 can be confirmed. A greater network enhances the probability of obtaining access to financial resources. The direct effect of the network on firm performance captured by Hypothesis 3 cannot be confirmed (− 0.110, t = 0.919). This result is fairly surprising with regard to the network success hypothesis and social capital theory. We observe a positive impact of financial resources on firm performance (0.434, t = 2.081). Hypothesis 5 (path d) can be confirmed. The access to financial resources leads to higher firm performance.
The results reveal two indirect effects. Instead of the two assumed mediation effects a*b and c*d, we only observe c*d as a statistically significant mediation. Thus, Hypothesis 7 must be rejected, and Hypothesis 8 can be confirmed. The second mediation we observe concerns the relationship between the firm's network and performance, which is mediated by the access to financial resources (e*d).
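The indirect (mediation) effects discussed here are products of the corresponding path coefficients, and their significance is again judged by bootstrapping. A hedged sketch of that product-of-coefficients logic follows; the bootstrap distributions are simulated with made-up location and spread values and do not reproduce the estimates of our model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical bootstrap distributions of two paths (illustrative values only):
# c: Team Diversity -> Finance, d: Finance -> Performance.
boot_c = rng.normal(loc=0.45, scale=0.16, size=5000)
boot_d = rng.normal(loc=0.43, scale=0.21, size=5000)

indirect = boot_c * boot_d                      # product of coefficients per resample
estimate = indirect.mean()
ci_low, ci_high = np.percentile(indirect, [2.5, 97.5])

significant = not (ci_low <= 0.0 <= ci_high)    # CI excluding zero -> mediation supported
print(f"indirect effect c*d = {estimate:.3f}, "
      f"95% CI = [{ci_low:.3f}, {ci_high:.3f}], significant: {significant}")
```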
Model evaluation
To check the validity of our approach, we evaluate the structural model and the measurement models. The quality of the structural model can be described by the parameters R², f², and Q². The R² statistic is well known from OLS regression and is calculated with the endogenous and exogenous variables as dependent and independent variables. Chin (1998) identified R² ≥ 0.67 as a substantial, R² ≥ 0.33 as a moderate, and R² ≥ 0.19 as a weak result. To analyze the substantial impact of an exogenous variable on an endogenous variable, the effect intensity f² is used. According to Cohen (1988), f² > 0.35 describes a large intensity, f² > 0.15 a medium intensity, and f² ≥ 0.02 a small intensity. Stone-Geisser's Q² is determined by a blindfolding process (Chin (1998)) and evaluates the forecast relevance of the dependent variables in a structural model (Chin (1998), Tenenhaus et al. (2005)). It should be greater than 0 (Fornell and Cha (1994)). Figure 3 exhibits the R², f², and Q² values. With regard to the recommendations of Chin (1998), the R² values for Network (R² = 0.177) and Performance (R² = 0.117) can be considered weak results. These variables cannot have a greater R² value because the exogenous variables apparently are not able to explain the majority of the total variance of the endogenous variables. The Network variance cannot be explained entirely by Team Diversity. By the same token, the Performance variable cannot be explained perfectly by the firm's network, team diversity, and finance. R² = 0.569 for Finance can be stated as moderate. Generally, a small R² value does not necessarily imply faulty model assumptions. Where the research field of success factors is concerned, small R² values can be evaluated as substantial as well (e.g., Bauer (2002)). The Q² > 0 criterion is fulfilled for each variable. The strongest effects with respect to f² are observed for the impact of Team Diversity on Network (path a, f² = 0.215) and Finance (path c, f² = 0.332) and for the impact of Network on Finance (path e, f² = 0.341). A small effect intensity is observed for the impact of Finance on Performance (path d, f² = 0.091). Consistent with the PLS coefficients, the impact of Team Diversity on Performance (path f, f² = 0.007) and of Network on Performance (path b, f² = 0.010) carry the lowest influence in the model. The structural model quality criteria confirm that the structural model is valid, although a slight weakness due to the two weak R² values in the model is inevitable.
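As a reminder of how the effect intensity is obtained, f² compares the explained variance of an endogenous variable with and without the exogenous variable in question. A small helper with the threshold labels from Cohen (1988) is sketched below; the R² inputs are placeholders, not values from our model.

```python
def effect_size_f2(r2_included: float, r2_excluded: float) -> float:
    """Cohen's f^2 = (R2_included - R2_excluded) / (1 - R2_included)."""
    return (r2_included - r2_excluded) / (1.0 - r2_included)

def label_f2(f2: float) -> str:
    if f2 > 0.35:
        return "large"
    if f2 > 0.15:
        return "medium"
    if f2 >= 0.02:
        return "small"
    return "negligible"

# Placeholder example: R2 of an endogenous construct with and without one predictor.
f2 = effect_size_f2(r2_included=0.57, r2_excluded=0.40)
print(f"f2 = {f2:.3f} ({label_f2(f2)})")
```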
Regarding the measurement models, there are two methods with which to determine the latent variables, namely a reflective and a formative one. For reflectively measured latent variables, we control the average variance extracted (AVE) (Fornell and Larcker (1981)) and the composite reliability (Chin (1998)). According to Chin (1998), the composite reliability should be greater than 0.6 and the AVE greater than 0.5. Furthermore, the factor loadings of the reflectively measured variables should be greater than 0.707 if they are to make an explanatory contribution to the latent variable (e.g., Johnson et al. (2006)). Formatively measured latent variables have to be tested for multicollinearity. We thus analyze the correlations between the measured variables and the variance inflation factor (VIF) of the team diversity and finance items. Henseler et al. (2009) considered VIF values greater than 10 as critical, whereas Diamantopoulos et al. (2008) found multicollinearity problems for VIF values greater than 5. The VIF values for the items of the two formatively measured constructs in our model do not exceed 2.9; hence, we do not see difficulties with multicollinearity. The Appendix (Table 4) shows the results. Table 3 shows the AVE, composite reliability, and factor loadings for the reflectively measured constructs. The factor loadings of the items for the Network construct do not always achieve the minimum requirement (0.707), and thus, the AVE criterion is not met (AVE = 0.237). The PLS model allows the omission of items in reflectively measured constructs if their loadings are not high enough, to increase the validity of the construct. Therefore, we omitted all variables with loadings smaller than 0.707. In sum, our model is valid (AVE = 0.681). The variables that are ultimately used in the analyses are marked with a star.
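The reflective quality criteria can be computed directly from the standardized loadings, and the VIFs for the formative items from auxiliary regressions of each item on the remaining items. A minimal sketch with illustrative loadings and simulated items (not the values from Tables 3 and 4):

```python
import numpy as np

def ave(loadings):
    """Average variance extracted: mean of squared standardized loadings."""
    lam = np.asarray(loadings)
    return float(np.mean(lam ** 2))

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    lam = np.asarray(loadings)
    return float(lam.sum() ** 2 / (lam.sum() ** 2 + np.sum(1.0 - lam ** 2)))

def vif(items):
    """Variance inflation factors: 1 / (1 - R^2) from regressing each item on the rest."""
    X = np.asarray(items, dtype=float)
    X = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    out = []
    for j in range(X.shape[1]):
        others = np.delete(X, j, axis=1)
        beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        resid = X[:, j] - others @ beta
        r2 = 1.0 - resid.var(ddof=0) / X[:, j].var(ddof=0)
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

# Illustrative reflective block: loadings above the 0.707 threshold.
lam = [0.82, 0.76, 0.88, 0.71]
print(f"AVE = {ave(lam):.3f} (>0.5?), CR = {composite_reliability(lam):.3f} (>0.6?)")

# Illustrative formative block for the VIF check (thresholds 5 or 10).
rng = np.random.default_rng(3)
items = rng.normal(size=(64, 4))
print("VIF =", np.round(vif(items), 2))
```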
Robustness checks
To test for the existence of nonlinear relationships, we run the regression equation specification error test (RESET) by Ramsey (1969). The results indicate no significant nonlinear effects. To test for endogeneity, we adapt the procedure proposed by Hult et al. (2018), incorporating Gaussian copulas into a PLS-SEM framework. The bootstrapping results show no significant Gaussian copulas. We consequently conclude that there is no endogeneity issue in the data. To check for unobserved heterogeneity, we use finite mixture PLS (Hahn et al. (2002)), which tests whether subgroups exist that would lead to substantially different model estimates. The one-segment solution reveals the highest AIC value. A two-segment solution also shows a low AIC value, with relative segment sizes of 0.95 for segment one and 0.05 for segment two. Following Sarstedt et al. (2011), this result leads to the conclusion that unobserved heterogeneity does not significantly affect the data because the 5% segment has no management relevance due to its small size.
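For readers unfamiliar with the RESET logic: the test adds powers of the fitted values to the original regression and checks with an F-test whether they add explanatory power; a significant statistic would point to a neglected nonlinearity. A minimal hand-rolled sketch on simulated data follows (the variables are placeholders, not the survey items, and the implementation is a generic version of Ramsey's test rather than the exact routine we used).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 64
x = rng.normal(size=(n, 2))
y = x @ np.array([0.5, 0.3]) + rng.normal(size=n)   # linear DGP, so RESET should not reject

def ols_fit(X, y):
    """Return fitted values and residuals of an OLS regression with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return X1 @ beta, resid

fitted, resid_r = ols_fit(x, y)                                        # restricted model
_, resid_u = ols_fit(np.column_stack([x, fitted**2, fitted**3]), y)    # + powers of fit

rss_r, rss_u = resid_r @ resid_r, resid_u @ resid_u
q = 2                                                # number of added regressors
df_u = n - (x.shape[1] + 1 + q)
f_stat = ((rss_r - rss_u) / q) / (rss_u / df_u)
p_value = 1.0 - stats.f.cdf(f_stat, q, df_u)
print(f"RESET F = {f_stat:.2f}, p = {p_value:.3f}")
```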
Discussion and limitations
The results in this study examine the impact of team diversity on performance and important success factors, in response to the plea by Mathisen and Rasmussen (2019) and Nikiforou et al. (2018) for a greater focus on team diversity and its effect on firm performance. We obtain interesting and novel insights that are helpful for better understanding the effects of diversity in teams.
In the following, we highlight (1) our detailed results and how they relate to the previous literature, (2) the specific results and our contribution to research on academic entrepreneurship and entrepreneurial teams, and finally (3) limitations (especially those related to the research design) and (4) implications for theory and practice.
Direct effects
Prior literature findings are ambiguous concerning the direct effect of team diversity on the firm performance of USOs (e.g., Knockaert et al. (2011), Visintin and Pittino (2014), Huynh et al. (2017), as well as Ensley and Hmieleski (2005), Amason et al. (2006), Webber and Donahue (2001)). Our study emphasizes that the investigation of a direct effect is insufficient to analyze the effect of team diversity on USO performance. Thus, more sophisticated models with regard to measurements or methods are necessary to open the black box of team diversity effects (Hahn et al. (2019), Carpenter et al. (2004), Mathieu et al. (2008), Klotz et al. (2014)), and we provide the requisite empirical evidence for this debate.
The current study finds a statistically significant direct positive effect of team diversity on the access to financial resources for USOs. This is in line with other studies, where the team composition is considered to be an important signal for potential investors to trust the team to bring an invention to the market and run a USO successfully. When team diversity is built on a balanced mix of academics and non-academics, investors (either VCs, business angels, or universities) are willing to finance the USO or to invest higher and sufficient amounts of money (Beckman et al. (2007), Zimmerman (2008)). We further find a direct positive impact of team diversity on the network. This relationship is not surprising with regard to network theory (e.g., Burt (1992), Granovetter (1973)) and is in line with USO research as well. Rasmussen et al. (2011), Rasmussen et al. (2015), and Scholten et al. (2015) revealed in their studies that a USO team possessing a mix of skills, including entrepreneurial experience and innovative capabilities, will positively affect the networks of the USO team. Similarly, Mosey and Wright (2007) showed that the diversity of human capital skills in a team enhances the ability of a team to build social capital and networks, especially when there is prior experience of owning a firm and commercial experience in the team. This could lead to easier network building with business managers and financial investors.
The positive and highly statistically significant impact of the firm's network on the access to financial resources shows that network ties are highly relevant for obtaining finance. This result confirms previous findings concerning the interaction between networks and financial resources (Hoang and Antoncic (2003), Zhao and Aram (1995)). Thus, our results are in line with the outcomes of prior researchers, which also underscores the important effect of the founding team's networks on financing via the reputation (capabilities) of the team and the team members' networks (Heuven and Groen (2012), Rasmussen and Sørheim (2012), Shane and Cable (2002), Shane and Stuart (2002)). Our results partially contradict the results of Huynh (2016), who did not find a direct but rather an indirect effect of networks on financing. This might be because the USOs we analyze are in a later stage than those in the study of Huynh (2016). Thus, our USOs might have a chance to obtain different forms of financing. Finally, the indexing of financing forms might explain this difference as well.
Another direct effect is reflected by the relationship between network and performance. We posited a positive relationship because a stronger network should increase productivity and thus performance (Reagans and Zuckerman (2001)). Instead, our results reveal no significant impact of networks on performance. The effects of the network variable show that USOs in the high-tech sector should have a greatly diversified network, which has other positive effects on essential resources like financing. However, network diversity does not influence performance directly in our study. This is in line with the results of Huynh et al. (2017), who do not find a direct effect of the network of USO teams on performance but rather an indirect effect. This result limits the network success hypothesis for USOs in the biotechnology sector in some way. Unsurprisingly, the impact of the financial resources of USOs on performance is significantly positive and confirms the importance of the financial resources of firms (Wernerfelt (1984)). Thus, our study confirms former results and research: the more diverse financial resources a USO can gain, or the more effective the fundraising is, the better the performance of the USO will be (Powers and McDougall (2005)). If this is not the case, USOs will tend to exit from the market at any stage (Rosman and O'Neill (1993)).
Indirect effects
The upper echelon approach (Hambrick and Mason (1984)) states that team effectiveness leads directly to firm outcomes. Our results are in line with the organizational behavior and entrepreneurship literature suggesting a more complex relationship between team diversity and outcomes (e.g., Knockaert et al. (2011)). Hence, it seems reasonable to test team diversity effects in mediation models (e.g., Huynh (2016), Huynh et al. (2017), or as recommended by Mathisen and Rasmussen (2019)). We do not find direct team diversity effects on USO performance. Nonetheless, does this really mean that team diversity has no impact at all on USO performance? To answer this question, we test and identify two mediation effects in our model, creating two sources of distinctiveness and originality for our paper. One of these is the mediation of the relationship between team diversity and performance by the access to financial resources. USOs are specific in their business idea and development because these are related to a unique technological or research invention (Shane (2004), Knockaert et al. (2011)). In general, the teams of USOs start with academic founders incorporating this specific knowledge to develop USOs to enter the prototyping or market stage. Thus, the involved team members show a kind of homogeneity (Knockaert et al. (2011)). Prior research has confirmed that founding teams in USOs often evolve in parallel with the stages of USO development because of financing issues or other context and success factors, e.g., universities providing access to staff, triggering this (Clarysse and Moray (2004), Huynh et al. (2017), Huynh (2016), Vohora et al. (2004), Wright et al. (2006)). As a consequence, the structure of the USO team often changes and is adjusted to procure fundraising or financial support, implying that the direct effect of team diversity might be affected by the access to financing. Thus, the fine-tuning or balancing of the USO team has a strong relationship with financial issues (Vanaelst et al. (2006)).
The second mediation concerns the relationship between network and USO performance, mediated by access to finance. The analysis of the indirect effects shows that team diversity has a positive indirect impact on USO performance and that the relevance of firm networks and social capital must be emphasized. Our results can be linked to the work of Brinckmann and Hoegl (2011), Vissa and Chacar (2009), Balkundi and Harrison (2006), and Walter et al. (2006), who consider firm networks and social capital as two of the most important success factors. In our model, networks are highly relevant for the access to financial resources and, through it, for firm performance. The results show that team diversity is indirectly positively related to firm performance, and we therefore conclude that a heterogeneous team composition is favorable for USOs, because of the link between network ties and the access to financial resources.
The same holds true regarding the network of USOs, as Ferretti et al. (2019), Ben-Hafaïedh et al. (2018), and Huynh (2016) found. They show how the social capital and networks of a USO team develop and change over time due to more market contact or different kinds of involvement of university institutions (e.g., TTOs) and governmental support programs. These network relations might influence the team composition and diversity of a USO team, and thus the direct effect, by changing or enlarging the team composition, bringing in non-academics or academics with commercial or market experience (Clarysse and Moray (2004), Vohora et al. (2004), Vanaelst et al. (2006)). Huynh et al. (2017) found that, in the USO development stage, the networks of a founding team indirectly affect performance. Universities commonly provide the chance to "staff" a USO team with new members with an industry background (Clarysse and Moray (2004)), fine-tuning and balancing the USO team composition (Vanaelst et al. (2006)), which enhances later performance.
Team diversity construct results
A main contribution of this paper is the multifaceted measurement of diversity using the PLS method: this makes it possible to analyze the importance of the diversity items simultaneously. The diversity of teams measured by the items study programs and degrees, doctorates, industrial experience, age, nationality, and the quantity of team members has a significant impact in the model, whereas the items other titles, soft skills, contacts and network, and character have no significant impact. This delivers new insights for better understanding the effects of diversity in teams.
To discuss this in more depth, do the negative significant values of the items age and doctorates mean that an increase of heterogeneity in these items leads to lower team diversity? At first glance, this appears to be a paradox, but a more detailed analysis provides a compelling resolution. Team members with different ages are more similar to one another than team members with equal ages. In other words, the probability of diverse human and social capital is higher if team members have a homogeneous age. This can be explained by the specific USO context and the formation of a team: as Knockaert et al. (2011) and Visintin and Pittino (2014) and other studies (Mathisen and Rasmussen (2019), Hossinger et al. (2019)) have found, the starting team of a USO is often built around the technology or invention and typically created in a research group or project (Clarysse et al. (2007b), Markman et al. (2008), Heirman and Clarysse (2004)). In these groups, the scientific and technological background is more or less the same, due to the research aspect. At the same time, these teams generally consist of different hierarchy levels in the scientific context, e.g., research group leaders and professors/chair holders, PostDocs, PhDs, or technical assistants; thus, the technological background of team members might be close, but age levels might be different. The explanation for the diversification of doctorates is almost the same: in these research teams, all people are interested in the same technology or research focus, especially in biotechnology; often professors or PhDs from different departments or fields from academia are involved, e.g., biology, medicine, or physics. Thus, the more team positions are blocked from the beginning with these different PhD academics, the less diversity might exist regarding non-academic experience or hierarchy levels. Another explanation for these negative path weights in formatively measured constructs could be suppressor effects (Cohen and Cohen (1983)). In this case, one or more of the predictor variables explains the variance in other predictor variables and thus reduces or reverses the path weight of these predictors with the construct variable even if there are no great problems of multicollinearity (Cenfetelli and Bassellier (2009)). We see no indications of a suppressor effect; hence, we analyze the absolute importance of an indicator for its construct with the help of the zero-order correlation of the item with the construct (loadings in Table 2). The absolute importance of an indicator helps us identify how the item correlates with the construct value. The correlation for doctorates is nearly zero; this item is thus of little absolute importance for the construct. The relative importance, measured by the negative path weight, arises when doctorates is estimated in the multiple regression controlling for all other predictors in the measurement model.
We also contribute to the discussion on entrepreneurial teams with our results for the formatively measured team diversity construct. We find that study programs and degrees, industrial experience, and nationality have a statistically positive impact on team diversity. The industrial experience result underlines the finding of other studies that a balanced combination and mix of the formerly homogeneous team of academics needs a non-academic component to develop a USO and enhance performance (e.g., Knockaert et al. (2011), D'Este et al. (2012), Hayter (2013), Visintin and Pittino (2014), Ciuchta et al. (2016), Huynh et al. (2017), Ferretti et al. (2018)). The result of an increase in nationality diversity delivers helpful insights, too, for generating effectively mixed USO teams, because this diversification might open new kinds of social capital and enhance the chances of developing an internationalization path for USOs (Burer et al. (2013), Civera et al. (2019a), Ferretti et al. (2018), Ferretti et al. (2019)). Moreover, bringing together international researchers might enlarge diversity regarding research or management culture, methods of dealing with questions, and ways of approaching a problem (Civera et al. (2019a), Ben-Hafaïedh et al. (2018)). Thus, this delivers a real effect on diversity. The positive contribution of study programs and degrees to diversity delivers a new insight, too: even when team members hold a PhD in the same field, their study programs and degrees can differ. Thus, the spread in study fields, as in the case of nationality, can increase diversity in the sense that the methods of approaching a problem, points of view, and research tools differ (Ferretti et al. (2019)). Therefore, this also contributes to increasing the heterogeneity of teams.
Finally, the negative path weight of the quantity of team members also appears to be surprising at first glance. Does this mean that a higher number of team members leads to lower team diversity? We explain this effect in two ways. (1) Imagine that a team with two team members has completely different soft skills. One of them is a professional in LaTeX, and the other knows Microsoft Word well. The two founders intend to expand their team with two more colleagues. Suppose that the two new team members are both professionals in Microsoft Word, like one of the original team members. In this case, the relative team diversity concerning soft skills decreases as the number of team members increases. (2) Moreover, imagine how teams are built: in general, individuals prefer other individuals with whom they have something in common (Zacharakis (2010), Jungwirth and Moog (2004)). This means that increasing the team size does not at all mean increasing the team diversity at the same rate and with the same effect. Thus, we can imagine that there might exist a number of team members that is most effective, or diminishing returns to team size regarding the advantages of big versus small teams (e.g., Backes-Gellner et al. (2006)).
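This dilution can be made concrete with a standard heterogeneity measure. The small sketch below uses Blau's index (1 minus the sum of squared category shares), which is not the measurement used in our survey but illustrates how adding members who resemble existing ones lowers relative diversity even as the team grows.

```python
from collections import Counter

def blau_index(categories):
    """Blau's heterogeneity index: 1 - sum of squared category proportions."""
    n = len(categories)
    counts = Counter(categories)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

# Two founders with completely different soft skills.
team_of_two = ["LaTeX", "Word"]
# Two new members join who share one founder's skill profile.
team_of_four = ["LaTeX", "Word", "Word", "Word"]

print(blau_index(team_of_two))    # 0.5   -> maximal diversity for two categories
print(blau_index(team_of_four))   # 0.375 -> diversity drops although the team grew
```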
Keeping in mind that we observe the relation between team diversity, networks, finance, and the performance of USOs in the biotech sector, the results for the team diversity construct confirm that the diversity of "soft characteristics" does not play a significant role. Those characteristics are difficult for investors to observe and could be secondary for building networks and maintaining ties. It seems that, for USO diversity, the "hard characteristics" of the team have a significantly stronger influence on performance as well as on the indirect effects. These insights are new.
We observe the same for the quantity of team members. Teams with a high number of team members are more homogeneous than work groups with fewer team members. These results indicate that age heterogeneity and the quantity of team members should be used carefully as proxies for team diversity in single-item or simple index measures. We must distinguish here between age and team size as single proxies for success (e.g., Kilduff et al. (2000), Eisenhardt and Schoonhoven (1990)) and age and team size as items that are components of a team diversity construct. The question that arises here is whether investors or network partners evaluate this situation in the same way. In other words, do investors or network partners perceive team diversity as a whole construct, or do they concentrate on selected diversity items to evaluate the diversity of the team? We cannot answer this question at this point, but we find that team size and age diversity could stand for a homogeneous team composition and therefore emphasize the use of more complex statistical methods to analyze diversity effects. The use of the PLS method and the analysis of the team diversity construct make it possible to detect relationships between different diversity items in an entire model and minimize the probability of biased results.
Contribution
With regard to the controversial prior findings on the impact of team diversity on USO performance (Mathisen and Rasmussen (2019)), the paper makes several key original contributions. A first new and exciting contribution to the literature, with some important managerial and practical implications (cf. Diánez-González and Camelo-Ordaz (2019)), is that team diversity matters in USOs because it helps to access a variety of resources. From a managerial perspective, team founders and members should therefore be more aware of this effect and make greater use of the strength of their networks.
Moreover, as a second contribution to academic entrepreneurship and entrepreneurial team research, our results provide insights that team diversity seems to overcome the typical difficulty of accessing resources and gaining credibility that inherently characterizes USOs and hampers their success (Rasmussen et al. (2011)). This helps to overcome a typical problem confronting USOs, namely the inability to transition from the university to the business world; the results make these outcomes relevant for questions related to mainstream management research (Fini et al. (2019)).
Third, the single effects of diversity aspects on the overall diversity of a USO team provide new insights for research and practitioners, because it is neither the size of a team nor the accumulation of many different team attributes, but rather the focus on some specific ones and their combination, that makes the difference for overall diversity and its direct and indirect effects. Thus, increasing team size has a negative impact, whereas more nationalities deliver a positive influence on overall diversity, thereby creating positive effects on networks and financing, and indirectly on performance. This means that bigger is not always better; a focused team composition can deliver advantages for USOs in their development and performance, especially with respect to industry and academic experience, mixed research field backgrounds, and nationality. For all the other aspects we tested, a USO team should think carefully about how to introduce more heterogeneity into the team.
Fourth, our unique data set of 64 USOs in the biotechnology area in Switzerland and Germany provides new insights regarding the direct and indirect effects of the overall diversity of USO teams. The indirect effects caused by team diversity are highly relevant for USO performance. Thus, we find that, mediated by financing and networks, the diversity of USO teams has a positive impact on performance. This means that, especially in developing USOs, shifting the team composition from homogeneity toward a more deliberately heterogeneous team contributes to positive USO performance.
A fifth contribution is that the selection of an adequate statistical method is crucial for gaining new insights into team diversity (Barrick et al. (1998)). The PLS method we chose makes it possible to observe a set of diversity items in one model while minimizing the probability of biased results (Carpenter et al. (2004)). It also allows us to examine the importance of different diversity items and to analyze mediation effects directly. Because the PLS algorithm maximizes the correlation of the construct variables in the model that are framed by their measured items, we are able to investigate how diversity interacts with networks, access to financial resources, and the performance of the firm. With our study we show how context-sensitive diversity research can use sophisticated statistical methods to open the black box of diversity research a bit further.
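As an illustration of the mediation logic (team diversity, via networks or financing, affecting performance) and of how bootstrap t values relate to the significance thresholds reported for Fig. 4, the sketch below estimates a single indirect effect on synthetic data. It is a deliberate simplification: ordinary least squares stands in for the PLS algorithm, only one mediator is modeled, and all data and coefficients are invented.

```python
# Simplified stand-in for the mediation logic (diversity -> network -> performance):
# OLS replaces the PLS weighting scheme, and a bootstrap gives a t-like statistic
# comparable to the thresholds quoted for Fig. 4. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 64  # sample size comparable to the 64 USOs in the study

diversity = rng.normal(size=n)
network = 0.5 * diversity + rng.normal(size=n)                        # path a
performance = 0.4 * network + 0.1 * diversity + rng.normal(size=n)    # paths b, c'

def paths(div, net, perf):
    a = np.polyfit(div, net, 1)[0]                        # diversity -> network
    X = np.column_stack([np.ones_like(div), net, div])
    b, c_prime = np.linalg.lstsq(X, perf, rcond=None)[0][1:]
    return a * b, c_prime                                 # indirect, direct effect

indirect, direct = paths(diversity, network, performance)

# Bootstrap the indirect effect to obtain a t-like statistic.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)
    boot.append(paths(diversity[idx], network[idx], performance[idx])[0])
t_value = indirect / np.std(boot, ddof=1)

print(f"indirect effect {indirect:.2f}, direct effect {direct:.2f}, t = {t_value:.2f}")
# t > 1.645, 1.960, 2.576, 3.291 correspond to p <= .10, .05, .01, .001 (cf. Fig. 4)
```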
In summary, by providing these new insights, our paper contributes to the team composition literature in an innovative way, discussing the single effects of the different items of diversity and disentangling the direct effects of team composition from indirect effects mediated by two other important success factors.
Limitations and future research
The present study has several limitations, which in turn suggest ideas for future research. We use data from USOs in the life science field and industry. In addition to the positive effects of focusing on this specific group, this special context could influence the research results and limit the generalizability of our findings on USO team diversity to other industry contexts. Thus, we propose either to replicate this kind of focused study in other university and academic fields, such as applied physics, mechatronics, sensor technology, or IT, to identify whether the same direct and indirect team diversity effects can be found there, or to replicate this kind of study on team diversity over a broader spectrum of USOs, controlling for the different academic fields.
Moreover, our study provides insights into a specific German-speaking university context, namely Switzerland and Germany. Given the different policies and management cultures of universities in other countries, our results cannot simply be transferred to those settings. However, an international comparison might deliver interesting results, controlling for university cultures and political settings.
Our data are cross-sectional; hence, an interesting extension would be to study team development and team diversity effects in USOs over time, from the seed phase through market entry and growth to an exit such as a trade sale or a stock market listing. The interesting questions are how and why teams change in their diversity aspects, what the drivers and antecedents of such changes are, and what their effects might be.
A methodological limitation refers to the need to impute missing values, because all survey participants were contacted personally and some declined to answer certain questions. Due to the small sample size in the special context of USOs in the biotech industry, excluding cases with missing values was not feasible. Another limitation concerns the concentration on the most crucial success factors; a more complex model including other important success factors of firms could lead to more convincing results.
Finally, we focus on traditional performance measures. However, following Beck et al. (2019), it would be interesting to measure and acknowledge the impact of diversity on other outcome or performance variables, such as the non-pecuniary benefits researchers obtain from undertaking USO activities, as a so-called form of outbound pecuniary innovation. Not all USO activity is knowledge commercialization for profit; it may also aim at engaging with different stakeholders for the public good or at fulfilling individual or team values. Future research should check for the impact of diversity on these outcomes (goals) or analyze different value settings as part of analyzing the impact of diversity on performance.
Implications
Providing new results on the different aspects that trigger the diversity effects of USOs, together with the insight on indirect effects, underlines the importance of the team composition of USOs, both in the early stage and in later development phases. These results have important implications for the management of USOs themselves and for institutions working with and supporting USOs, such as universities and policy makers. For financing institutions, too, the results offer some interesting and helpful suggestions.
Universities, which can serve as the "breeding" institutions of USOs, have an important role to play in supporting USOs in finding their way to the market and achieving success (Meoli and Vismara (2016)). Awareness of the importance of a diverse team in a specific composition might help bring together the more homogeneous founding teams with "fitting" potential new team members to generate a more heterogeneous team. This could be done either by enlarging the network of USOs or by offering training and workshops that enable USO teams to understand that they must build a more heterogeneous but balanced team. Thus, universities should try to set up an even more supportive environment for USOs and team-up possibilities (Meoli et al. (2019)). For instance, universities could start a general PhD or PostDoc policy enabling new faculty and junior, assistant, and associate professors to take part in courses on spin-off creation and intellectual property. Such a program would bring together researchers from different fields and enable them to become acquainted with each other. Business planning courses or competitions, or teaching in joint master programs, might support this by bringing researchers together and letting them get to know one another, which is commonly the first step in team building. In the same vein, support by TTOs or university management in bringing in industry-experienced managers or USO managers might help in searching for and finding new potential USO team members (e.g., Muscio and Ramaciotti (2019)).

For USO teams themselves, this study offers a starting point to reflect on the current team situation regarding diversity and on ways in which the team composition could be changed to facilitate USO development (e.g., Shane (2004)). We provide critical insights for discussing the characteristics and the mix of the team in order to understand whether the current team diversity is sufficient and, if not, how it could be changed. For instance, for a USO aiming to become a born-global or early-internationalizing company, the positive effect of diversity on networking and on getting financed is now evident; such a team might therefore search for a more international team member who fits into the team, regardless of field knowledge or parallel industry experience (e.g., Zacharakis (2010), Jungwirth and Moog (2004)). Thus, the team members of USOs become more aware of the need to develop the team dynamically. On top of this, the study suggests to team members how to make better use of the existing network, e.g., when searching for a new member or other resources.
Finally, for VCs, the implications are quite similar. As former studies have shown (Zacharakis (2010), Jungwirth and Moog (2004)), VCs like to invest in and support USOs or other firms whose founders have similar skills or backgrounds; in such cases of homogeneity, positive effects and trust building prevail. However, VCs should know that bringing in a new team member, for example one with industry experience, requires that the new member fit well with the existing team (e.g., same skills, same university, or same research field) while still adding an important component of diversity, such as industry experience. Thus, VCs might learn which aspects of team diversity are most effective and how to identify them, enabling them at a later stage to help create a more effective USO team (Zacharakis (2010), Jungwirth and Moog (2004)).
For public entities wanting to increase the number of USOs in general, or of successful USOs, this study shows the necessity of supporting universities and their institutions, such as TTOs, accelerators, or incubators, in bringing USOs together with potentially fitting new team members. Financial support should therefore cover the generation of the courses and events mentioned above. Moreover, given the importance of USOs for revitalizing technological fields and parts of the economy, our study provides some practical social and policy implications. To make it easier for USOs to bring in new, diverse team members with different experiences, policies could allow universities and other potential shareholders to invest money and obtain shares and, in doing so, bring in important team members. Moreover, labor law restrictions for university researchers and their contracts could be made more flexible to make it easier and more appealing for researchers to stay in USOs and engage more strongly (in terms of working time), thus helping to make USOs more successful because research teams can remain involved more easily.
Beyond the limitations discussed, we contribute new insights regarding team diversity and USO performance to the ongoing discussion on team effects and hope to emphasize the need for sophisticated mediation models, applied in different academic field contexts or over time, to open the black box between team diversity inputs and USO performance.
Funding Open Access funding enabled and organized by Projekt DEAL.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Fig. 4 Results of the structural model and measurement models. Each arrow contains (a) the path weight and (b) the t value. The limits for statistical significance are: t > 1.645 = p ≤ 0.1, t > 1.960 = p ≤ 0.05, t > 2.576 = p ≤ 0.01, t > 3.291 = p ≤ 0.001.
|
2022-12-24T14:46:24.288Z
|
2020-11-04T00:00:00.000
|
{
"year": 2020,
"sha1": "8a64a536cb0aaffd47ebf467e520399c6ad2ffe9",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11187-020-00412-1.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "8a64a536cb0aaffd47ebf467e520399c6ad2ffe9",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
}
|
17790545
|
pes2o/s2orc
|
v3-fos-license
|
Activation of hypoxia signaling induces phenotypic transformation of glioma cells: implications for bevacizumab antiangiogenic therapy.
Glioblastoma (GBM) is the most common and deadly primary brain tumor in adults. Bevacizumab, a humanized monoclonal antibody against vascular endothelial growth factor (VEGF), can attenuate tumor-associated edema and improve patient symptoms but, based on magnetic resonance imaging, is associated with non-enhancing tumor progression and possibly gliosarcoma differentiation. To gain insight into these findings, we investigated the role of hypoxia and epithelial-mesenchymal transition (EMT)-associated proteins in GBM. Tumor markers of hypoxia and EMT were upregulated in bevacizumab-treated tumors from GBM patients compared to untreated counterparts. Exposure of glioma cells to 1% oxygen tension increased cell proliferation and expression of EMT-associated proteins and enhanced cell migration in vitro. These phenotypic changes were significantly attenuated by pharmacologic knockdown of hypoxia-inducible factor 1α (HIF1α) or HIF2α, indicating that HIFs represent a therapeutic target for mesenchymal GBM cells. These findings provide insights into potential development of novel therapeutic targeting of angiogenesis-specific pathways in GBM.
INTRODUCTION
Glioblastoma (GBM) is the most common adult primary nervous system tumor. Despite advances in surgical resection, radiation and chemotherapy, GBM remains one of the most deadly human neoplasms. GBM patients have a median survival of 12 to 15 months and new therapies are desperately needed [1]. Bevacizumab, a humanized monoclonal antibody against vascular endothelial growth factor (VEGF), has been shown to improve progression-free survival in patients with recurrent glioblastoma [2][3][4]. As one of the most highly vascular cancers, GBMs express high levels of VEGF, particularly in areas of necrosis and hypoxia [5,6]. The increased levels of VEGF expression and vascular density in GBM make angiogenesis an attractive therapeutic target. Clinical trials have demonstrated that bevacizumab is a therapeutic option for recurrent GBM patients who have failed previous radiation and chemotherapy [3,7].
Angiogenesis inhibitors, including bevacizumab, produce demonstrable transient clinical and radiological benefits for patients with a variety of cancer types including GBM [8]. However, in 40 to 60% of cases, initial responses are followed by dramatic progression of disease [2,9]. Consequently, overall survival has not been significantly improved with antiangiogenic therapy, which is also associated with an increased rate of transformation to secondary gliosarcoma [2-4, 9, 10]. Recent data indicate that resistance to bevacizumab antiangiogenic therapy can be due to evasive (upregulation of alternative pro-angiogenic pathways) or intrinsic (genomic constitution) changes within the neoplasm [11]. These findings potentially make combinatorial strategies, specifically integration of both anti-angiogenic therapy and anti-resistance mechanisms, particularly attractive for managing GBM.
Critical to a deeper understanding of the pathobiology of therapeutic resistance and progression will be insights into the effects of anti-angiogenic therapy in GBM. To better understand the mechanisms that underlie tumor cell invasiveness and progression of disease during/following anti-angiogenic therapy, we examined the phenotypic changes of GBM cells in the setting of induced hypoxia. Specifically, bevacizumab-induced inhibition of VEGF can trigger intratumoral hypoxia and initiate compensatory survival pathways, namely upregulation of hypoxia-inducible factors (HIFs) [12]. Data indicate that HIF stabilization enhances tumor cell invasion, cell growth and cell survival and thus serves a critical role in modulating tumor aggression [13][14][15][16][17][18][19][20][21][22]. This may underlie the clinical and radiographic findings associated with anti-angiogenic therapy in GBM patients.
Based on the emerging clinical and imaging findings in recurrent GBM patients treated with bevacizumab, we hypothesized that the lack of improved overall survival in these patients is modulated through the activation of HIF-mediated survival pathways. To test this hypothesis, we analyzed expression levels of HIF downstream effectors and epithelial-to-mesenchymal transition (EMT) markers and performed microfluidic invasion assays of GBM cells under normoxic and hypoxic conditions. Moreover, glioma cell phenotype and migration were analyzed following HIF inhibition and gain-of-function to investigate the role of HIFs in tumor cell aggressiveness/progression. Finally, these findings were correlated with comprehensive immunohistochemical (IHC) analysis of recurrent GBM patients treated with bevacizumab via comparative analysis of tumor tissue before and after treatment.
Hypoxia and mesenchymal transition in human GBM after anti-angiogenic therapy
Bevacizumab treatment of recurrent GBM is commonly associated with a decrease in intratumoral enhancement and peri-tumoral edema. The reduction in edema results in alleviation of tumor-associated symptoms (Fig. 1a). However, these effects are transient, and the tumor eventually becomes refractory to therapy, demonstrates increased infiltration of surrounding brain, and is associated with transformation to gliosarcoma [10]. To test the hypothesis that anti-angiogenic therapy can induce an EMT-like process through hypoxia in GBM, we analyzed tumor tissues from three recurrent GBM patients for markers of hypoxia and EMT before and after bevacizumab treatment. Tumor histology from Patient 1 was most consistent with GBM before bevacizumab therapy but showed histologic changes consistent with transformation to gliosarcoma after treatment (Fig. 1b). Tumor tissues revealed markedly elevated expression of HIF1α and the EMT markers Slug and Snail (diffuse pattern), suggesting that the hypoxic microenvironment activated an EMT-like process post-bevacizumab therapy.
Brains from Patients 2 and 3 were examined postmortem. While both patients received radiation and temozolomide chemotherapy, Patient 3 also received bevacizumab (Fig. 1c). Compared to the tumor from Patient 2, the bevacizumab-treated tumor (Patient 3) exhibited a marked increase in cellularity, cell proliferation and spindle-shaped mesenchymal morphology. Furthermore, the bevacizumab-treated tumor contained significantly more tumor cells that stained for the EMT markers matrix metalloproteinase 2 (MMP2), Zinc-finger E box-binding homeobox 1 (Zeb1), Zeb2, Snail, Slug and Twist. Sections taken 5 cm away from the tumor mass revealed occasional single infiltrating tumor cells positive for Ki67 and each of the EMT markers, but the scarce number of infiltrating cells precluded any between-tumor comparisons.
Hypoxia enhances GBM cell proliferation
Given that antiangiogenic therapy can be associated with induction of a hypoxic phenotype and development of gliosarcoma, we hypothesized that exposure to hypoxia would enhance proliferation and mesenchymal change in GBM cell lines. U87 and U251 human GBM cells and C6 rat glioma cells were cultured in 21%, 1% or 0.2% oxygen for 24 or 48 hours. Expression of the HIF target genes coding for erythropoietin (EPO), vascular endothelial growth factor A (VEGFA), endothelin 1 (EDN1) and glucose transporter 1 (GLUT1) was markedly increased in both a time- and oxygen concentration-dependent manner, demonstrating robust activation of hypoxia signaling (Fig. 2, Fig. S1). Conversely, expression of HIF targets was reduced by knockdown of HIF1α or HIF2α using small interfering RNA (siRNA) or by pharmacologic inhibition, demonstrating a robust blockade of HIF transcriptional activity.
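The excerpt does not state how transcript levels were quantified; if, as is common, they were measured by quantitative RT-PCR, relative expression is typically summarized with the delta-delta-Ct method. The sketch below is a generic illustration with made-up Ct values, not data or code from this study.

```python
# Generic delta-delta-Ct calculation (hypothetical values, not study data):
# fold change of a target transcript in treated vs control samples,
# normalized to a reference (housekeeping) gene.
def fold_change(ct_target_treated: float, ct_ref_treated: float,
                ct_target_control: float, ct_ref_control: float) -> float:
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical example: a HIF target gene under 1% O2 vs 21% O2.
print(fold_change(ct_target_treated=22.0, ct_ref_treated=18.0,
                  ct_target_control=25.0, ct_ref_control=18.0))  # -> 8.0-fold
```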
To assess the effect of hypoxia on glioma cell proliferation, cell proliferation assays on glioma cells exposed to various oxygen concentrations for 24 or 48 hours were performed (Fig. 3). Under 1% oxygen tension, hypoxia reliably doubled the proliferative capacity of all glioma cell lines tested. This effect was reduced by pharmacological inhibition of HIF1α and HIF2α.
Notably, inhibition of HIF2α more potently inhibited cell proliferation at 24 hours, but this effect was diminished by 48 hours. However, incubation at a very low oxygen concentration (0.2%) had the opposite effect, with cell proliferation decreasing by roughly 50% at both time points. Consistent with a regulatory role for HIFs in GBM cell proliferation under hypoxic conditions, treatment with a HIF inhibitor partially rescued cell proliferation. Glioma cells exposed to 0.2% oxygen for 48 hours and incubated with the HIF2α inhibitor demonstrated very similar proliferative capacity to cells incubated under normoxic conditions. Consequently, the effect of hypoxia on glioma cell proliferation is both time- and concentration-dependent.
Figure 1: MR imaging and immunohistochemistry of glioblastoma before and after bevacizumab therapy. (a) MR findings in glioblastoma before and after bevacizumab therapy. T1 post-contrast and fluid-attenuated inversion recovery (FLAIR) axial MR images before, after one cycle, two cycles and three cycles of bevacizumab therapy. An initial reduction in the enhancing portions of the residual tumor involving the left frontal lobe and corpus callosum is seen, and a decrease in overall cerebral edema is observed. However, these radiographic improvements are temporary, and later imaging demonstrates increased enhancement and edema. (b) Immunohistochemical staining of HIF1α and the EMT inducers Slug and Snail in a glioblastoma surgical specimen before (left) and after (right) bevacizumab therapy. Following antiangiogenic therapy, an increase in HIF1α expression is seen concomitant with increased expression of Slug and Snail. Magnification 400 ×. (c) Hematoxylin and eosin (H&E) and immunohistochemical stains of multiple EMT inducers in glioblastomas treated with temozolomide and radiation therapy with and without adjuvant bevacizumab. Compared to the tumor treated with radiation and temozolomide, the bed of the bevacizumab-treated tumor exhibits markedly increased cellularity, Ki67-positivity and a significant increase in the number of spindle-shaped mesenchymal cells. The tumor bed also demonstrated an increase in the number of cells positive for the EMT inducers Zeb1, Zeb2, Slug, Snail, Twist and the EMT marker matrix metalloproteinase 2 (MMP2). Stains performed on sections five centimeters from the tumor bed revealed periodic single infiltrating cells that were often positive for each of the EMT inducers, but no discernible difference was seen between the tumors. Magnification 400 ×.
Hypoxia induces mesenchymal change in glioma cells
To test the hypothesis that hypoxia induces mesenchymal change in glioblastoma, we evaluated the expression of the mesenchymal marker vimentin using immunofluorescence under various oxygen concentrations at 24 or 48 hours (Fig. 4a and 4b). Low levels of vimentin were detected under normoxic conditions in each of the cell lines evaluated, indicating some degree of mesenchymal change at baseline. At 24 hours, there was no appreciable change in vimentin expression under 1% oxygen, and a slight increase was detected under 0.2% oxygen in U251 cells only. However, hypoxia consistently upregulated vimentin at 48 hours, indicating an oxygen concentration- and time-dependent acceleration of mesenchymal change under hypoxic conditions. This effect was reduced by treatment with a HIF inhibitor and did not appear to differ between HIF1α and HIF2α inhibition.
Mesenchymal transition is governed by a defined set of transcription factors including Slug, Snail and Twist that repress genes coding for cell adhesion proteins, increase expression of MMPs and increase cell motility. To assess the effect of hypoxia on the expression of these transcription factors, we measured mRNA levels of Slug, Snail, Twist, MMP2 and MMP9 under various oxygen concentrations for 24 or 48 hours (Fig. 4c and S2). Exposure to either 0.2% or 1% oxygen saturation consistently upregulated genes associated with EMT in a time- and concentration-dependent fashion, demonstrating robust activation of the EMT program under hypoxia. Upregulation of EMT inducers and MMPs was consistently reduced following siRNA-mediated knockdown of HIF1α or HIF2α and after treatment with a HIF1α or HIF2α inhibitor.
To determine the time course of EMT-associated protein expression under hypoxia, immunoblots against Twist, Slug, MMP2 and MMP9 were assessed (Fig. 4d and S3). Each investigated target was expressed at high levels at baseline, indicating EMT inducers are present in glioma cell lines under normal conditions. Despite transcriptional upregulation of each target at less than 24 hours of hypoxia, there was no consistent change in Twist, MMP2 or MMP9 expression at 24 hours. However, at 48 hours under hypoxia, each EMT-associated protein was upregulated, and expression was blocked by treatment with a HIF inhibitor. These findings indicate that hypoxia enhances the expression of proteins that induce EMT and expression of these proteins is subject to inhibition by pharmacologic blockade of HIFs.
Hypoxia enhances the migratory capacity of glioma cells
Mesenchymal transformation confers cells with enhanced migratory ability and is consistent with findings of diffuse spread of non-enhancing GBM on magnetic resonance (MR) imaging. To investigate whether hypoxia-induced mesenchymal transformation of glioma cells results in a migratory phenotype, we evaluated U87 GBM cell migration using a microfluidic chip invasion assay under 21% or 1% oxygen saturation (Fig. 5a). For this assay, cells were plated on a two-dimensional main channel, and invasive cells were allowed to migrate into a three-dimensional collagen matrix that more accurately simulated the microenvironment of the cell. Immunofluorescence was then performed to determine the expression of HIF1α, HIF2α and vimentin in migrated cells. Hypoxia significantly enhanced tumor cell migration, and invasive cells exhibited stronger vimentin expression than stationary cells (Fig. 5b). Although siRNA knockdown of HIF1/2α reduced glioma cell migration to levels seen under normoxia, the remaining migratory cells did not express the mesenchymal marker vimentin. Pharmacologic HIF blockade reduced cell migration even further, and this effect was more pronounced following HIF2α inhibition.

Figure 5 (caption fragment): … assay. Cells are placed in a chamber containing either 21% or 1% oxygen and allowed to migrate along a three-dimensional collagen-based surface. (b) U87 glioma cell migration following exposure to 21% (top row) or 1% oxygen saturation for 24 hours. Immunofluorescent stains against HIF1α, HIF2α or vimentin are shown in red and DAPI is blue. Cells in the bottom five rows were treated with a HIF1α inhibitor, HIF2α inhibitor, non-targeting siRNA (nt-siRNA), HIF1α-siRNA or HIF2α-siRNA, as indicated. Hypoxia increased the distance and number of vimentin-positive cells that migrated onto the collagen surface. Total migration was significantly reduced after treatment with a HIF inhibitor and attenuated following HIF knockdown by siRNA. Treatment with a HIF2α inhibitor or siRNA knockdown of HIF1α or HIF2α significantly reduced the number of migrating vimentin-positive cells. Scale bar = 100 μm.

Figure 6 (caption fragment): … glioma cells were transfected with GFP alone or GFP with a wild-type hemagglutinin-tagged (HA)-HIF1α gain-of-function construct. HIF1α gain-of-function cells were either transfected with HA-HIF1α alone or with nt-siRNA or HIF1α-siRNA. Cell cultures were then placed in the microfluidic chip assay, exposed to 21% (a) or 1% (b) oxygen saturation and imaged at 0, 6, 12, 18 and 24 hours, as indicated. The same set of experiments was performed using HA-HIF2α and HIF2α-siRNA in (c) and (d). Exposure to either hypoxia or transfection with HIF1α and HIF2α gain-of-function constructs enhanced cell migration, and migration was significantly reduced by siRNA knockdown of HIF1α and HIF2α, respectively. Scale bar = 100 μm. (e) Quantification of microfluidic chamber invasion assays shown in panels a-d. The y-axis reflects the number of invaded cells at each distance at 24 h (standard deviation [S.D.]).
Hypoxia is associated with changes in gene expression and metabolism that collectively contribute to its effect on cell phenotype. To determine the relative contribution of HIF1α and HIF2α to glioma cell migration, U87 GBM cells were transfected with hemagglutinin-tagged HIF α-subunit constructs with or without siRNA to a HIF α-subunit and then placed under 1% or 21% oxygen saturation for 24 hours (Fig. 6). Under normoxic conditions, cells that overexpressed HIFα migrated further than cells transfected with green fluorescent protein (GFP) alone. There was no discernible difference between HIF1α- and HIF2α-overexpressing cells. Migration was reduced following HIF1α knockdown and was nearly abolished following HIF2α knockdown. These findings indicate that both HIF1α and HIF2α are individually sufficient to enhance GBM cell migration and that HIF2α, in particular, is necessary for hypoxia-induced cell migration.
To determine whether HIF overexpression is sufficient to induce migration in GBM stem cell populations, we performed a migration assay under 21% or 1% oxygen on neurospheres derived from U87 glioblastoma cells that selectively overexpressed HIF1α or HIF2α, and then determined vimentin expression by immunofluorescence (Fig. 7). Overexpression of either HIF α-subunit was sufficient to enhance migration away from the neurosphere, indicating that HIFs are sufficient to stimulate migration out of GBM stem-cell populations.
DISCUSSION
Hypoxia is a characteristic event in the progression of GBM. Rapidly proliferating tumor cells outstrip their blood supply leading to intratumoral necrosis and induction of the hypoxia signaling pathways. Bevacizumab is an anti-angiogenic therapeutic strategy that can further exacerbate hypoxia in solid tumors. Clinical data indicate that bevacizumab treatment can result in reduced tumor permeability (reduced enhancement on MR imaging) through VEGF-mediated mechanisms [23]. The reduction in vessel permeability results in reduced peritumoral edema and symptom improvement [24,25]. Emerging data indicate that bevacizumab therapy is associated with non-enhancing (MR imaging) tumor progression and increased risk of gliosarcoma differentiation [10].
The results of the current study demonstrate that hypoxia activates mesenchymal transition and enhances cell motility in GBM in a HIF-dependent manner, and this process can be attenuated by pharmacological blockade of HIFα. Moreover, these data suggest that while HIF1α may preferentially enhance cell migration, HIF2α is necessary for this event to occur.
The term "epithelial-mesenchymal transition" was originally used to describe the process by which a cell adopts a fibroblast-like morphology and migratory phenotype in the process of normal embryonic development [26,27]. Many of these features are recapitulated in cancer, which led to the observation that EMT is a critical process to cancer progression [26,27]. Mesenchymal transition (and its converse, mesenchymalepithelial transition) is now a well-described event in epithelial tumors and is increasingly recognized as a key event in non-epithelial cancers such as GBM [26][27][28][29][30][31][32][33][34]. Mesenchymal transition confers multiple oncogenic properties to cancer cells. In addition to enhancing their invasive capacity, EMT endows differentiated cancer cells with the capacity to become stem cells and provides a mechanism for their continued production [26,27,34,35]. EMT also plays a key role in chemoresistance: Zeb1, for example, controls the susceptibility of GBM to temozolomide by disinhibiting the transcription factor c-MYB, a transcriptional activator of the DNA repair enzyme O-6-Methylguanine DNA Methyltransferase (MGMT) [32].
In the context of new therapeutic approaches to GBM, mesenchymal transition is a key event in resistance to anti-angiogenic therapy [10,36]. Recent studies have demonstrated that VEGF antagonists, although providing a temporary reduction in enhancing tumor volume and peritumoral edema, ultimately fail and can induce a mesenchymal phenotype that is more invasive and resistant to therapy than the original tumor [10]. Mechanisms of resistance to VEGF antagonists are multiple. VEGF blockade disinhibits the HGF receptor c-MET, resulting in constitutive activation of tyrosine kinase-ras signaling in GBM [36]. VEGF-resistant cancer cells also secrete significantly more IL-17, resulting in the recruitment of immature immune cells to the tumor microenvironment [37]. VEGF inhibition directly induces tissue hypoxia by relative vasoconstriction, disinhibition of endothelial cell apoptosis and subsequent vessel collapse [38][39][40]. Intratumoral hypoxia has long been recognized as a sign of successful antiangiogenic therapy in solid tumors but ultimately limits its therapeutic potential by enhancing the proliferative, invasive and stem-like properties of tumor cells [13].
Hypoxia is a potent inducer of EMT. Hypoxia-inducible factors activate transcription of multiple EMT-related proteins, leading to a loss of cell-cell adhesion, increased cell motility and evasion of cell cycle arrest [41,42]. This process is crucially dependent on HIF stabilization, as EMT in VHL-null renal cell carcinoma occurs in the absence of true hypoxia [41]. Under normoxic conditions, the E3 ubiquitin ligase von Hippel-Lindau protein (pVHL) ubiquitinates hydroxylated HIF α-subunits, targeting them for degradation by the proteasome [43]. Under hypoxic conditions, or conditions where HIF α-subunits are otherwise stabilized, the inducible HIF α-subunits translocate to the nucleus where they form transcriptionally active heterodimers with constitutively expressed HIF β-subunits [44]. Activation of the hypoxia signaling pathway is associated with increased activation of the EMT program, increased cell motility and adoption of stem-like traits and is thus an attractive therapeutic target for solid tumors [27,35,41,42,[45][46][47][48][49].
Our data demonstrate that hypoxia is a potent enhancer of mesenchymal transition and cell motility in GBM. These events can be significantly attenuated by treatment with a pharmacologic inhibitor of HIFs. Given that anti-angiogenic therapy can be associated with an increase in intratumoral hypoxia, these findings provide a mechanism for the observed non-enhancing progression of tumor and the increase in frequency of secondary gliosarcoma transformation following bevacizumab therapy. These data offer the possibility that adjuvant therapy with a HIF inhibitor can delay GBM progression during anti-angiogenic therapy.
Human tissue collection
Surgical specimens from Patient 1 were obtained from the Department of Neurosurgery at Huashan Hospital of Fudan University, Shanghai, China. Informed consent was obtained according to the Institutional Review Board at Huashan Hospital. Brain autopsies of Patients 2 and 3 were performed at The Ohio State University, and tissue was obtained through The Ohio State University Department of Pathology. Informed consent was obtained from both patients and approved by the Institutional Review Board of the Office of Responsible Research Practices at The Ohio State University Wexner Medical Center. All specimens were fixed in 10% formalin and embedded in paraffin. 5 µm-thick sections were obtained from each paraffin block and used for hematoxylin and eosin (H&E) and immunohistochemical staining.
HIF knockdown and gain-of-function conditions
Small-interfering RNA and oligonucleotides against human HIF-1α, human HIF-2α, rat HIF-1α and rat HIF-2α and hemagglutinin-tagged gain-of-function constructs were purchased from Sigma-Aldrich (St. Louis, MO). Sequences are listed in Table S1.
Microfluidic chip construction
The microfluidic chip consisted of a bilayer of polydimethylsiloxane (PDMS) and a glass substrate. All chips were fabricated using an SU8-2035 negative photoresist (Micro Chem Corp., Newton, MA) master from which the PDMS layer was molded. Negative chip molds were spun onto a glass wafer and patterned photolithographically in duplicate (100 µm thickness for the lower layer and 190 µm thickness for the upper layer), producing unique microstructures with different heights. PDMS base and curing agent (Sylgard Silicone elastomer 184, Dow Corning Corp., Washington, D.C.) were mixed thoroughly 10:1 by mass, degassed under vacuum and poured onto the master. The mold curing process was conducted for 1 h at 80 °C. After cooling, the PDMS layer was gently peeled off the master and trimmed to size. Inlet and outlet holes were created by punching through the PDMS with a razor-sharp punch. The PDMS mold was decontaminated with oxygen plasma for 15 s, bonded to a glass slide and sterilized with UV light for 30 min. Rat Tail High-Concentration Type-I Collagen solution (BD Biosciences, Franklin Lakes, NJ) was compounded according to an alternate gelation procedure, aseptically added into the collagen chamber of the chip and allowed to gel at 37 °C for 30 min.
Microfluidic chip invasion assay
Cells were seeded into the main channel of the microfluidic chip at a density of 5 × 10⁴ cells/cm² in medium containing 10% FBS. The chip was then turned on its side for 5 min to allow the cells to adhere to the surface of the gel. Each chip was then incubated for 24 h either at 37 °C, 21% O₂, 5% CO₂ in a humidified incubator (normoxic condition) or at 37 °C in an electric microscope stage incubation chamber (Okolab, Ottaviano, Italy) flushed with a gas mixture of 1% O₂, 5% CO₂, 94% N₂ or 0.2% O₂, 5% CO₂, 94% N₂ (hypoxic conditions). Cells were allowed to invade for 24 h, and images were recorded every 6 h with an Olympus IX71 fluorescence microscope equipped with a CCD digital camera (Olympus, Tokyo, Japan). After 24 h, each chip was stained with primary antibody and prepared for immunofluorescent imaging as described above.
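The quantification panels of the invasion assays report the number of invaded cells at each distance with a standard deviation. The sketch below illustrates that kind of summary only: it assumes per-replicate lists of invasion distances have already been extracted from the images (the image-analysis step is not shown), and the bin widths, distances and replicate values are hypothetical.

```python
# Hedged sketch of invasion-assay quantification: count invaded cells per
# distance bin and report mean +/- SD across replicates. All values are
# placeholders, not measurements from this study.
import numpy as np

bin_edges = np.arange(0, 501, 100)  # 0-500 um in 100 um bins (illustrative)

def counts_per_bin(distances_um: np.ndarray) -> np.ndarray:
    counts, _ = np.histogram(distances_um, bins=bin_edges)
    return counts

# Hypothetical replicate measurements for one condition (e.g., 1% O2, 24 h).
replicates = [
    np.array([35, 80, 120, 150, 210, 260, 340]),
    np.array([20, 95, 110, 175, 230, 300]),
    np.array([50, 70, 130, 160, 240, 280, 310, 420]),
]

stack = np.vstack([counts_per_bin(r) for r in replicates])
mean_counts = stack.mean(axis=0)
sd_counts = stack.std(axis=0, ddof=1)

for lo, hi, m, s in zip(bin_edges[:-1], bin_edges[1:], mean_counts, sd_counts):
    print(f"{lo:3d}-{hi:3d} um: {m:.1f} +/- {s:.1f} cells")
```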
CONFLICTS OF INTEREST STATEMENT
The authors declare no conflicts of interest.
Editorial note
This paper has been accepted based in part on peer-review conducted by another journal and the authors' response and revisions, as well as expedited peer-review in Oncotarget.
|
2017-12-22T08:48:36.286Z
|
2015-03-14T00:00:00.000
|
{
"year": 2015,
"sha1": "8bebb1f65103f7f4e4e8fc2c57c3ca74fa7b5050",
"oa_license": "CCBY",
"oa_url": "http://www.oncotarget.com/index.php?journal=oncotarget&op=download&page=article&path[]=3592&path[]=7288",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8bebb1f65103f7f4e4e8fc2c57c3ca74fa7b5050",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
217552613
|
pes2o/s2orc
|
v3-fos-license
|
A Corpus-based Analysis of the Constructions Have/Take/Get a Bath and Have/Take/Get a Rest in British English
Light verb constructions have been studied in different languages and from different perspectives by a number of scholars. The present research focuses on the constructions with the light verbs have, get, and take followed by the deverbal nouns a bath and a rest and attempts to answer two questions: to what extent are the light verbs have, take, and get interchangeable when followed by the same deverbal nouns, and what influences their choice? The study aims at giving a qualitative analysis of the structures in question in British English on the basis of corpus data. All the data for analysis are collected from the BNC corpus. The constructions have/take/get a bath and have/take/get a rest are investigated in terms of grammatical and lexical features, which include morphological forms, immediate collocations of deverbal nouns, combinability patterns, as well as distribution across registers. The analysis shows that the light verbs have, take, and get can be interchangeable in some situations, as they have similar morphological forms, combinability patterns, and collocations with adjectives, but show some differences in expressing modality and in their distribution across registers.
Introduction
Light verb[1] constructions (LVCs) (e.g., have a swim, take a walk, get a look at) have received much interest in modern linguistics for posing many challenges in semantics (Wierzbicka, 1982; Plante, 2014), syntax (Grefenstette, Teufel, 1995; Kearns, 2002; Butt, 2003; Bannard, 2007), and corpus-based analysis (Stevenson et al., 2004; Bannard, 2007; Tu, Roth, 2011; Vincze, 2012; Rácz et al., 2014). Light verb constructions can be described as constructions that consist of a verbal and a nominal component[2], where the meaning of the construction is derived from the noun whereas the meaning of the verb is bleached (cf. Plante, 2014, 82; Vincze, 2012, 238; Kearns, 2002, 1). Though the meaning of light verb constructions is almost equivalent to the meaning of the verbal complement, the light verb also contributes to the meaning of the construction, as there are constraints on which light verb can occur with which complement. In addition, light verbs give a certain semantic aspect to the construction (cf. Wierzbicka, 1982, 791).
The verbal component of the light verb construction is expressed by a light verb; however, there are different views on how the verbal complement is expressed. Wierzbicka (1982) and Stevenson, Fazly, North (2004), among others, claim that the verbal complement in constructions of the type have a swim, take a walk, get a look at is manifested by an infinitival stem. Following this view, Wierzbicka distinguishes three different types of light verb constructions (cf. Wierzbicka, 1982, 755, 756):
1. NP human + have + N (deverbal noun) (e.g., have a quarrel): the pattern refers to continuous purposeful reciprocal actions;
2. NP human + have + N (action noun) (e.g., have a visit): the action in constructions of this type is attributed to someone other than the subject;
3. NP human + have + a V-Inf (e.g., have a swim): such constructions imply a subjective and experiential perspective.
The present study analyzes constructions of the third type; however, the view proposed by Jespersen (1965, 117) and supported by Grefenstette, Teufel (1995, 98), Höche (2009, 233), Plante (2014, 82), Tu, Roth (2011), and others, who describe the verbal complement as a deverbal noun[3], is adopted.
Syntactically, light verb constructions differ from similar structures[4] in that they cannot be passivized (e.g., *A groan was given by the man on the right.), do not allow wh-extraction (e.g., *Which groan did John give?), and cannot be pronominalized (e.g., *The deceased gave a groan at around midnight, and gave another one just after two.) (Kearns, 2002, 2, 3).
Light verb constructions are productive[5] and favoured (cf. Stevenson et al., 2004; Plante, 2014, 82; Halliday, Matthiessen, 2004, 193); however, they cause problems for language learners. On the one hand, their productivity is restricted by the light verb, which selects only a certain type of deverbal nouns. On the other hand, the same deverbal noun can be used as a complement of a few light verbs (cf. Grefenstette, Teufel, 1995, 98; Stevenson et al., 2004).
The present research deals with the constructions with the light verbs have, take and get, which may select the same deverbal nouns. The similar nature of the three light verbs[6] raises the following questions: (1) To what extent are the light verbs have, take, and get interchangeable when followed by the same deverbal nouns? (2) What influences the choice of the three verbs in light verb constructions? In this article I will confine myself to these two questions. The questions are partly answered by Wierzbicka (1982) and Höche (2009), who demonstrate the differences between the light verbs have and take on the basis of semantic analysis[7]. However, language learners sometimes find it difficult to grasp semantic differences. Thus this paper attempts to view similarities and differences between different light verbs in terms of grammatical and lexical features. The aim of the research is to perform a qualitative analysis[8] of the constructions with the light verbs have, take, and get followed by the deverbal nouns a bath and a rest in British English on the basis of corpus data. For this purpose, the light verb constructions are analysed in terms of morphological forms, immediate collocations of deverbal nouns, combinability patterns as well as distribution across registers.
[4] Light verbs can be contrasted with vague action verbs (VAVs) (cf. give a groan vs. give a demonstration) (cf. Kearns, 2002, 1).
[5] In some languages such as Persian, Urdu, and Japanese, LVCs are very productive, whereas in languages such as French, Italian, Spanish and English, they are semi-productive (cf. Stevenson et al., 2004).
[6] According to Trudgill et al. (2002), the verb have may have "the dynamic senses such as 'receive', 'take', 'experience'" (Trudgill et al., 2002, 3). Wierzbicka (1982) claims that "Have a V belongs to a family of constructions which includes at least two other members: take a V and give a V" (Wierzbicka, 1982, 794).
[7] According to Wierzbicka (1982) and Höche (2009), the light verbs have and take differ semantically in that have refers to an action of limited duration which is aimless, requires no physical effort and is not necessarily complete, whereas take expresses a unitary action which has no limit in time and requires physical effort; the agent is an active initiator and experiences a beneficial effect of the action (cf. Wierzbicka, 1982, 794, 795; Höche, 2009, 246). However, the data collected for the research cast some doubts on these statements. The question needs further study, but semantic analysis is outside the scope of the present research.
[8] The qualitative analysis is supplemented with simple descriptive statistics.
The research is descriptive, comparative, and corpus-based. The data for analysis are collected from the British National Corpus (BNC) (http://corpus.byu.edu/bnc/), which is a 100 million word representative electronic database of spoken and written English. The data collected are further described, grouped and compared to show similarities and discrepancies in the use of the light verbs have, take, and get with the deverbal nouns a bath and a rest.
Related Work
Constructions with different light verbs have been investigated by a number of scholars. Wierzbicka (1982) studies the constructions with light verbs have and take from the semantic perspective with a particular focus on the former. She attempts to extract semantic rules and conceptualization patterns on the basis of semantic features of the light verbs. The research demonstrates that light verb constructions of the frame have a V "exhibit orderly and systematic behavior", and, though their structural descriptions are closely related, they do not follow one formula and must be ascribed to different subtypes (Wierzbicka, 1982, 788).
A semantic analysis of light verb constructions is also performed by Plante (2014), who describes the light verbs have and do as compared to take and give in terms of telicity. In addition, the study examines how the telicity of light verbs is influenced by deverbal nouns and additional arguments. On the basis of the collected data, Plante comes to the conclusion that telicity cannot be considered a general property of light verbs in English (cf. Plante, 2014, 90).

Trudgill et al. (2002) conduct a contrastive study of the constructions with the light verb have in British and American English. They explore the degree of dynamism of the light verb have in the two varieties on the basis of morphological and diachronic analyses. It is argued that in North American English the light verb have failed to acquire the full range of dynamic meanings due to large-scale language contact accompanied by language simplification tendencies (cf. Trudgill et al., 2002, 14).
Using the database from the BNC, Tu and Roth (2011) research the constructions with the light verbs have, take, give, do, get and make on the basis of specific local contexts and informative statistical measures. They focus on the interaction of contextual and statistical features and analyse the effectiveness of these features within the learning framework. It is claimed that the two groups of features demonstrate similar characteristics in the learning framework; however, in problematic cases contextual features show better performance (cf. Tu, Roth, 2011).

Stevenson et al. (2004) perform a corpus-based analysis of the constructions with the light verbs take, give, and make. Using corpus statistics, they investigate the productivity of LVCs and try to determine "how well particular light verbs and complements go together" (Stevenson et al., 2004). A special focus is laid on the productivity of the "LV a V" pattern. The conclusion is made that light verbs show systematic behavior in terms of their ability to combine with different complements (cf. Stevenson et al., 2004).
Corpus-based Analysis of Constructions Have/Take/Get a Bath/Rest
In this section the light verb constructions have/take/get a bath and have/take/get a rest are studied in terms of grammatical and lexical features using the data collected from the BNC corpus. Due to the limited size of the article, only two deverbal nouns are chosen for the investigation. The nouns a bath and a rest are selected on the basis of the parameters of raw frequency and Mutual Information (MI) score[9], which are summarized in Table 1. The parameters show a strong collocational probability and relatively high frequency.
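For readers unfamiliar with the MI score, the sketch below shows how it is commonly computed in corpus linguistics from co-occurrence and individual frequencies, MI = log2( (f(v, n) * N) / (f(v) * f(n)) ). The counts are placeholders, not the actual BNC figures behind Table 1, and corpus interfaces may apply minor variants of this formula (e.g., a span factor).

```python
# Illustrative MI-score calculation for a verb + noun collocation.
# All frequencies below are made up for demonstration purposes.
import math

def mi_score(f_pair: int, f_verb: int, f_noun: int, corpus_size: int) -> float:
    """MI = log2( observed joint frequency / expected joint frequency )."""
    return math.log2((f_pair * corpus_size) / (f_verb * f_noun))

# Hypothetical counts for a "have" + "bath" pair in a 100-million-word corpus.
print(round(mi_score(f_pair=500, f_verb=400_000, f_noun=6_000,
                     corpus_size=100_000_000), 2))  # roughly 4.4
```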
Grammatical Features of Constructions Have/Take/Get a Bath/Rest
The grammatical features chosen for the analysis of the light verb constructions have/take/get a bath and have/take/get a rest include morphological forms and combinability patterns. The study of grammatical features can demonstrate to what extent the light verbs have, take, and get are interchangeable.
The description of morphological forms includes forms of both light verbs and deverbal nouns. The summary of morphological forms of the light verbs in the constructions have/take/get a bath and have/take/get a rest in Table 2 demonstrates that these light verbs have similar forms. All three light verbs are used in both simple and continuous aspects, with the exception of the light verb get when it is followed by the deverbal noun a rest. However, when the deverbal noun a rest is modified, it can be found in the simple aspect as well. As can be seen in Table 2, non-finite forms of the light verbs in the constructions have/take/get a bath and have/take/get a rest are more frequent than finite forms, except for take a rest, where finite and non-finite forms are equally represented. To-infinitive forms of all three light verbs are used after modal auxiliaries (a) and in different types of infinitival clauses (b), with the exception of the construction get a bath. Bare infinitive forms are found with all three light verbs after modal auxiliaries (a), future tense auxiliaries (b), and in imperative sentences (c).
The analysis of infinitival forms of the three light verbs under investigation shows that there are significant differences in their use after modal auxiliaries and in imperative sentences. The light verbs have and get combine with a far greater variety of modals than the verb take. In this respect the construction take a rest differs especially, as it is found only after the modal auxiliary have to. The light verb take is also very rare in imperative sentences: only one example is found with both deverbals a bath and a rest, and the only instance of take a bath refers to an instruction. There seems to be a tendency to express the modality of the light verb take using verbs expressing suggestions or commands: (4) a. And another recommends the visitor to take a bath.
<…> now my assistant tells me to take a bath. b. <…> was advised by the Archbishop of Canterbury to take a rest from his official duties. <…> Everett, who had been ordered to take a rest at 98 trips, was testing aircraft <…>.
Non-finite -ing and -ed forms are rare with all three light verbs, but with the verb get in particular, though a few instances of -ing forms of get a rest and get a bath as well as -ed forms of get a bath can be found when the deverbal noun in the constructions is modified: (5) a. <…> try reading a magazine, having a bath -anything that you personally enjoy <…>. The study of morphological forms of deverbal nouns shows that the nouns a bath and a rest can have a plural form when they follow the light verbs have and take: (6) a. They argued while eating their meals, having baths and in their sleep too you could hear them shouting. Bremner has had more early baths than a miner on night-shift. b. I think these days people take baths and showers quite often <…>.
<…> insist on using the same boring toothbrush day and night, and like to take baths alone <…>.c.Some species, indeed, take rests at night and slumber on the sea floor.
The fact that deverbal nouns in light verb constructions can have a plural form, combined with the fact that such nouns take determiners and can be modified, can serve as sufficient evidence for describing the verbal complement of light verb constructions as a deverbal noun, since only nominals have a plural inflection, are used with determiners, and may be modified (refer to the Introduction).
The syntactic analysis summarized in Table 3 demonstrates that the constructions have/take/get a bath show similar syntactic behavior, as they can combine with another noun phrase joined by the conjunction and or or (a), another verb phrase joined by the conjunction and (and + VP) (b), adverbials of time (ADV T) (c) and place (ADV Pl.) (d), and a clause (S) (e), with the exception of get a bath, which is not found in combination with another noun phrase and the adverbial of place: (7)
Syntactically, the constructions have/take/get a rest behave similarly to have/take/get a bath. They can combine with another noun phrase joined by the conjunction and or or (a), another verb phrase joined by the conjunction and (b), adverbials of time (c) and place (d), a clause (e), and, in addition, with a prepositional object (Prep O) (f). Here again, the exception is the construction get a rest, as it is not used in combination with the adverbial of place. The construction take a rest cannot be followed by another NP joined by a conjunction.
(8) a. <…> turned over, to sit on while having a rest and a cup of tea.
Summing up, Table 4 demonstrates that the constructions have/take/get a bath and have/take/get a rest have more similarities than differences in their grammatical features. However, some discrepancies can be observed as well, and they need further explanation. The analysis of finite and non-finite forms clearly shows a few differences. First, the light verb in the construction get a rest has no continuous aspect; however, the same light verb get has a continuous form when it is followed by the deverbal noun a bath. Thus the explanation for the absence of the continuous with get a rest could be related to the nature of the noun rest which, differently from bath, i.e. the act of bathing, refers to a state. Second, the light verb get in the construction get a rest cannot be found in the -ed form either. This can be explained by the fact that the structure have got refers to possession, and get a rest is a dynamic construction. Third, when used in infinitival forms, the light verb take is very rarely used after modals and in imperative sentences, and its modality is usually expressed by verbs expressing suggestions or commands. This might be due to the fact that the infinitival forms of the light verb take are usually used in situations when the agent of the action is expressed by proper or common nouns.
In terms of combinability patterns, the constructions have/take/get a bath and have/take/get a rest are again more similar than different. All the constructions combine with another VP, adverbials of time, and a clause. Constructions have/take/get a rest are, in addition, used in combination with a prepositional object, and only of one type - introduced by the preposition from. There are slight differences in the other combinability patterns. First, no examples are found where get a bath and take a rest combine with another noun phrase. Second, get a bath and get a rest seem not to need the specification of place. However, in both cases the possibility of such combinations cannot be excluded.
All in all, the comparison of morphological forms and combinability patterns in particular shows that, despite slight differences, light verbs have, take, and get could be interchangeable, though there are more common features between have and get, whereas take differs from the two in the ways of expressing modality.
Lexical Features of Constructions Have/Take/Get a Bath/Rest
The analysis of lexical features of the constructions have/take/get a bath and have/take/get a rest includes the study of immediate adjective collocations of deverbal nouns and the distribution across registers. The comparison of lexical features of the structures in question can show what factors influence the choice of one or another light verb.
The summary of adjective collocations of the deverbal noun a bath in Table 5 shows that the noun a bath can be used with both descriptive and classifying adjectives. The variety of adjectives that collocate with a bath differs depending on the light verb. All 9 types of adjective collocations are found only with the verb have. The light verb take is not used when a bath collocates with quantity, evaluative, and quality adjectives; however, the adjectives that are found in constructions with take are similar to those used with the verb have. Finally, the verb get tends to be used when a bath collocates with evaluative adjectives, though only one example with such a collocation is found. The analysis shows that, in constructions with get, the deverbal noun is quite frequently used with possessive determiners.
Most adjectives that collocate with a bath are semantically related, as they show duration, speed, frequency, temperature, and time of bathing. Though both verbs have and take are used with the plural form of the deverbal noun a bath, quantity adjectives are found only in constructions with have. When a bath is used in the plural form, it can, in addition, collocate with such quantity adjectives as several and enough: (9) a. I have enough baths, but I don't feel clean.
b. <…> she could ride alone and was not fit to be near anyone until she'd had several baths <…>.
Table 6 shows the collocational possibilities of the noun a rest in constructions with the light verbs have, take, and get. Differently from the noun a bath, collocations with a rest are almost equally distributed among all three verbs. It is with the light verb get that the deverbal noun a rest collocates with all 5 types of adjectives. Only size adjectives are not found in the constructions with the light verbs have and take. However, the corpus gives only one example with a size adjective (<…> they got a big rest <…>.), which belongs to the spoken register and does not seem to be typical of constructions with the noun a rest. All other adjectives, i.e. quantity, evaluative, quality, duration, and time, are semantically related to the noun a rest.

In Table 7, we find the distribution of constructions have/take/get a bath and have/take/get a rest across registers. In the use of the light verbs with the noun a bath, distributional tendencies are very clear. The constructions with light verbs have and get are mostly used in the spoken register, though have a bath is also widespread in the fiction and miscellaneous registers and take a bath is popular in the miscellaneous register. The distribution of constructions have/take/get a bath among other registers does not vary much (from 1 to 7 examples). The situation with constructions have/take/get a rest is a little different. A similar distribution is observed only with the light verb have, i.e. the spoken and fiction registers are the most popular, though in this case the distribution between the two registers is more or less equal. Constructions with the light verb take are mostly used in the newspaper register, with the miscellaneous register in second place. There is no clear distributional preference for constructions with the light verb get, as it is equally distributed across the spoken, fiction, newspaper, and magazine registers. This is due to the low number of examples with the verb get.

To sum up, the description of lexical features of constructions have/take/get a bath and have/take/get a rest shows that in many respects they are similar in terms of collocations of deverbal nouns, but differ in their distribution across registers. The analysis of collocations with the deverbal nouns a bath and a rest demonstrates that the constructions with all three verbs show similar features, with the exception of the construction get a bath. There is only one collocation with the noun a bath in the latter construction, which may signify that the construction is still not fully incorporated into the language, as the total number of examples with it amounts only to 20, whereas the total number of examples with get a rest exceeds 60. The limited variety of morphological forms and combinability patterns with get a bath only confirms this fact. The constructions have a bath and take a bath differ in this respect, as in both of them the deverbal noun collocates with all semantically related adjectives. Similarly, the deverbal noun a rest is found in all types of adjective collocations; however, in this case collocations are equally distributed among all three light verb constructions.
The distribution of the constructions in question across registers shows different tendencies in the use of the three light verbs. The light verbs have and get tend to be used in the spoken register, whereas the verb take is more characteristic of the fiction register, especially if the examples with the deverbal nouns modified are taken into account. Thus, it could be claimed that the choice of the light verb is influenced by the register.
14 The total number of examples includes samples where the deverbal noun is modified.
Concluding Remarks
The analysis of the constructions have/take/get a bath and have/take/get a rest in terms of grammatical and lexical features demonstrates that the behavior of the light verbs have, take, and get is similar in many respects. They have much in common in terms of morphological forms and combinability patterns. The three light verbs are found in the simple aspect and may have to- and bare infinitive forms as well as -ing forms. The continuous aspect and -ed forms are characteristic of all the light verb constructions under investigation except for get a rest. The constructions have/take/get a bath and have/take/get a rest can combine with another VP, adverbials of time, and a clause. The constructions have/take/get a rest can, in addition, be used in combination with a prepositional object. The combinability patterns with another noun phrase are found only with the constructions have a bath, take a bath, have a rest, and take a rest, and the adverbial of place combines only with the constructions with the light verbs have and take.
Lexical features of the constructions in question show that the deverbal nouns a bath and a rest, when combined with the light verbs have, take, and get, collocate with similar semantic groups of adjectives. The exception is the construction get a bath, which tends to be used with possessive determiners rather than adjectives, as only one adjective collocation with this construction can be found. The distribution of the constructions have/take/get a bath and have/take/get a rest across registers demonstrates that the light verbs have and get are mostly used in the spoken register, whereas the verb take is more characteristic of the fiction register.
On the basis of the study of grammatical and lexical features, it could be stated that the light verbs have, take, and get, when followed by the deverbal nouns a bath and a rest, could be interchangeable; however, the verbs have and get have more in common, as the verb take has different ways of expressing modality and tends to be used in the fiction register. This view is based on the exploration of only two deverbal nouns and needs further analysis, which could include more nouns with the same verbs. Further research on light verb constructions could also focus on the study of the semantic context and a comparison of data in British and American English.
Table 1: Raw frequency and Mutual Information score of constructions have/take/get a bath and have/take/get a rest

10 <…> last night who went and got a bath and left every door in the house open <…>.

(1) a. I have a bath every day. I like a good sing-song when I'm having a bath <…>. Liz automatically had a rest after New York as it was the last race of the season. Mummy's having a rest.
b. Once we had a girl who never, but never, took a bath <…>. Luke is here, but he's tired and he's taking a bath. When Mr Rowse took a rest, the line stayed as it was, sometimes for hours. Edwards, Olver, Mullins and Skinner are all taking a rest.
c. And, then he's coming home, getting a bath, and we're getting ready and going. <…> stop using this eight o'clock so as you get a good rest.
Table 2: Forms of constructions have/take/get a bath and have/take/get a rest

<…> the students will have to take a rest as Glenn jets out to do battle for glory in the world championships. She has to get plenty of rest. <…> Nails thought he was going to get a rest, but she immediately sent him off in the other direction.
b. He went off to his room to have a bath and dress.
The absence of get a bath in infinitival clauses can be explained by a lower number of examples with this construction.
You will have had a bath before going along to the morning service of Chiropody <…>.
She had never imagined him doing the ordinary things of life: taking a bath, shaving, going shopping <…>. Steve's getting a sod with him getting bath every night. I know Miss Mates won't mind you having a rest before that long walk back. <…> as he sipped iced water, taking a brief rest. You can do this by eating well, getting enough rest and relaxing.
b. Mrs Popple had just taken a bath. I've had a rest, I'll take over from her. Too many players had taken a rest after the World Cup and there's no substitute for playing <…>. <…> stop here on Sunday and not come home so that he's got ample rest <…>.
Table 3: Combinability patterns of constructions have/take/get a bath and have/take/get a rest

<…> she said she'd had a rest in one of the shops over there on the way <…> enough rest and sleep.
b. Have a rest and leave the washing up till later. Perhaps take a noonday rest and sip some wine. <…> getting enough rest and cutting out the things that can harm your baby.
c. Have a rest today, take things easy tomorrow, that's all. Crawford took a rest after six exhausting years <…>. We can get a bit rest in the winter <…>.
d. I shall join you, for I intend to take a long rest in your arms <…>.
f. <…> you can have a rest from writing and reading. In 1989 he took a rest from running junior soccer teams <…>. <…> which enabled senior and long serving people to get a well-deserved rest from banking.
Table 4: Comparison of grammatical features of constructions have/take/get a bath and have/take/get a rest
Radiological anatomical consideration of conjoined nerve root with a case review
Nerve root anomalies are frequently underrecognized despite advances in imaging studies; they are also underappreciated and underreported when encountered surgically. The classification of conjoined nerve roots is based on whether the nerve root emerges at an abnormal level or from an anastomotic branch. In the present report, we describe a case with a conjoined nerve root that emerged at a more caudal level than normally observed and that was undiagnosed on preoperative imaging studies. We also discuss the atypical features seen on preoperative imaging studies. As observed in the present case, preoperative recognition and diagnosis of such anomalies offer the best opportunity of performing a successful procedure and preventing inadvertent damage to nerve roots intraoperatively.
Introduction

In general, the spinal nerve root exiting the spinal cord is surrounded by a root sleeve extension of dura mater [1]. It runs from the medial to lateral direction along the inferior surface of the corresponding pedicle, exiting through the intervertebral foramen. A conjoined nerve root is defined as two adjacent nerve roots that share a common dural envelope at some point during their course from the thecal sac [2]. The prevalence of lumbosacral nerve root anomalies in autopsies is 8.5-30%, which is considerably greater than the values (1.9-4%) reported in previous imaging studies [3]. When present, conjoined nerve roots emerge most commonly from L5-S1 [4]. One study reported that dorsal roots are more frequently affected by this anomaly than ventral roots, although the cause is not clear [5]. Bifurcation of conjoined nerve roots occurs close to the intervening pedicle, after which they emerge through their respective foramina [6]. Abnormal root anastomosis may result from the connection of a band of nerve fibers or a complete distal union in a common sheath [7]. The classification of conjoined nerve roots is based on whether the nerve root emerges at an abnormal cranial level or from an anastomotic branch.

Conjoined nerve roots are frequently undiagnosed prior to an operation and may cause considerable difficulty during spinal procedures involving nerve root mobilization [1]. Accordingly, several imaging and physical examination techniques have been proposed to diagnose and address anomalous roots [4]. Although there is some controversy concerning the reliability of these signs, recent findings suggest that magnetic resonance imaging (MRI) can accurately identify the presence of conjoined nerve roots [8,9]. Here, we present the case of a man with acute low back pain radiating down the right leg, which was undiagnosed on preoperative imaging studies.

Case Report

A man presented with acute low back pain and radiating pain in the right leg, with a positive straight-leg raising sign at 30°. He had a history of minor trauma suggestive of a lumbar sprain. His symptoms had gradually worsened, and he was unable to walk due to aggravation of pain in the back and leg upon standing or walking. On manual muscle testing, muscle weakness was not detected in the extremities. His patellar tendon reflexes were normal. On neurological examination, L5 nerve root compression was identified. On plain lumbar spine radiographs, spinal abnormalities, except for spondylolisthesis, were difficult to detect. Slight disc space narrowing at L4-5 was noted in the lateral view (Fig. 1). Computed tomography showed no definite bone abnormalities. MRI revealed disc herniation at both root exits at the L4-5 intervertebral disc level (Fig. 2). Based on these observations, we diagnosed the patient with spondylolisthesis and lumbar disc herniation at L4-5.
Intraoperatively, disc sequestration was noted, and the herniated disc was found to be incarcerated to the adjacent conjoined L5 nerve roots. Upon further investigation, we discovered that the L5 nerve root originated from the caudal level of the L5 pedicle and was conjoined with the S1 nerve root (Fig. 3). The disc herniation at L4-5 had migrated bilaterally and was found beneath the abnormal conjoined nerve roots. After removal of the disc herniation and unroofing of the nerve root, we did not observe any obvious mobility of the conjoined nerve root. After surgery, the patient's leg pain immediately disappeared and no muscle weakness was recorded. Upon retrospective review of the images, we observed several signs of conjoined nerve roots on the routine MRI images (Fig. 4): the "sagittal shoulder sign," a vertical structure connecting two consecutive nerve roots and the overlying herniated disc on the parasagittal MRI, which represents a combination of a protruded or extruded disc adjacent to a conjoined nerve root; the "corner sign," an asymmetric structure of the anterolateral corner of the dural sac, with one side being angulated compared with the other; the "fat crescent sign," the presence of extradural fat between the conjoined nerve root and the asymmetric dural sac; and the "parallel sign," an unusual course of the entire nerve root at the disc level, running parallel to the disc plane.
Discussion
A conjoined nerve root is defined as two adjacent nerve roots that share a common dural envelope at some point during their course from the thecal sac [2]. General information about conjoined nerve roots is summarized in Table 1. Several authors have proposed classifications for conjoined nerve roots, but the classification proposed by Postacchini et al. [3] is most commonly used. According to this classification, type I refers to one or more roots that emerge at an abnormal cranial level; type II refers to one root that emerges at a more caudal level than normal; type III refers to two or more nerve roots that emerge through closely adjacent openings of the dura; type IV refers to two nerve roots that emerge from the dural sac in a common nerve trunk; and type V refers to an anastomotic branch that connects two nerve roots in their extrathecal course. These anomalous roots may or may not leave the vertebral canal through their correct intervertebral foramina [1]. Type II is the most common type [3] and corresponds to the type illustrated by the present case: the L5 nerve root originated from the caudal level of the L5 pedicle and conjoined with the S1 nerve root.
MRI is the gold standard for differentiating conjoined nerve root anomalies from other space-occupying processes. Although a T2-weighted coronal MRI is the best method for tracking the course of a conjoined nerve root, such imaging studies are not usually conducted. This sequence displays images of the course of the roots under their corresponding pedicles and their sleeves, similar to a myelogram [8]. Kang et al. [9] reported that the "sagittal shoulder sign" occurred in 90.9% of surgically documented cases of conjoined lumbosacral nerve roots compromised by herniated discs. This sign was detected on both T1- and T2-weighted MRI sequences and is defined as "a vertical structure connecting two consecutive nerve roots and the overlying herniated disc on the parasagittal MRI, which represents the combination of a protruded or extruded disc adjacent to a conjoined nerve root" [9]. We also observed this sign described by Kang et al. [9] in the present case (Fig. 4).
Additionally, Song et al. [10] described three radiological signs on standard axial MRI at the level of the disc, all of which are more easily detected on T1-weighted images. The "corner sign" is described as an asymmetric structure of the anterolateral corner of the dural sac, with one side being angulated compared to the other. This sign may also be observed in other conditions such as epidural lipomatosis and spinal stenosis. The "fat crescent sign" refers to the presence of extradural fat between the conjoined nerve root and the asymmetric dural sac. A "parallel sign" denotes an unusual course of the entire nerve root at the disc level, running parallel to the disc plane. Such observations suggest that T1-weighted axial MRI images can be routinely reviewed to assess nerve root anomalies, owing to the more intense contrast between the dural sac and the intervening extradural fat. All the above signs described by Song et al. [10] were identified in the present case.

Despite the established description of these typical signs of conjoined nerve roots, MRI diagnosis is subject to several limitations. Imaging in the most useful planes is not routinely conducted in many studies, and false-positive findings are also possible with this method. However, preoperative diagnosis of a conjoined nerve root, or even a high index of suspicion of an anomaly, may greatly aid the spinal surgeon and decrease the chances of operative complications [1]. Conjoined nerve roots are frequently undiagnosed prior to an operation and may cause considerable difficulty during spinal procedures involving nerve root mobilization. Medical history and physical findings often vary and mostly resemble those of a herniated disc, with symptoms not necessarily presenting in a dermatomal distribution. Moreover, there are no pathognomonic signs associated with lumbar conjoined nerve roots. Conjoined nerve roots are generally fixed and are consequently difficult to retract during spinal surgeries. The roots could be irritated and inflamed during forceful retraction, which may increase tension on the nerve roots and cause postoperative neurologic deficit and neuropathic pain. A nerve root anomaly could be mistaken for a portion of a protruding or herniated disc and incised inadvertently, resulting in iatrogenic neural injury. Thus, great care should be taken before surgery to minimize perioperative complications in cases with conjoined nerve roots. Several radiographic signs of conjoined lumbar nerve roots have been described using standard MRI techniques. Coronal MRI, with T1- and T2-weighted images, should be performed if a conjoined nerve root anomaly is suspected.
Investor Relations, Ownership Concentration, and Company Profitability: Evidence from Chinese Listed Firms
The study tested the relationship between investor relations (IR), ownership concentration, and company profitability in the Chinese stock market, using data from 2014 to 2016. The empirical tests show that investor relations are positively correlated with the profitability of enterprises. Further tests show that when equity concentration, as a moderating variable, is too high, it weakens the contribution of IR to corporate profitability. Therefore, the conclusions of this paper hold lessons for listed companies in China. Listed companies should recognize the importance of investor relations and implement investor relations as an important strategic business decision, and managers should find a suitable level of equity concentration so as to achieve the company's sustainable development. Keywords: investor relations; ownership concentration; company profitability
I. INTRODUCTION
Investor relations are a result of mature capital markets and are now an integral function of listed companies all over the world. To manage the relationship between the company and investors, General Electric was the first firm to establish an investor relations (IR) department (J. Brennan & Tamarowski, 2000). IR was first put forward by the chairman of General Electric, Ralph Cordiner, in 1953 (J. Brennan & Tamarowski, 2000). After that, in 1969, the National Investor Relations Institute (NIRI) was set up; it now has about 4200 members. Investor relations contribute to the development of enterprises and also play a significant role in the functioning of the capital market. The level of investor relations is closely related to a firm's capital sources, capital cost, corporate value, and company profitability.
The evaluation of the profitability of listed companies has attracted much attention from the academic community in recent years. In assessing the profitability of listed companies, the evaluator should consider both financial and non-financial indicators, among which the level of investor relations occupies an essential position (Hoffmann & Fieseler, 2012). Scholars argue that investor relations help to stabilise the stock price of listed companies and make it reflect the real value of the company more accurately. Therefore, most scholars (Gul et al., 2010; J. Brennan & Tamarowski, 2000; Esterhuyse & Wingard, 2016) have studied the relationship between listed companies and investors to explore the role of investor relations in improving corporate performance. At the enterprise level, good investor relations can maintain stock price stability, effectively reduce the risk borne by investors, enhance investor confidence, broaden corporate financing channels, retain existing investors, and attract potential investors. They maintain a good relationship between companies and investors and enhance the company's image, so that enterprises can establish themselves in the highly competitive capital market and achieve their maximum value (Deng, 2008). From the perspective of capital market functioning, good investor relations management can support the formation of price signals and an optimal allocation of market resources, such as capital and human resources (Deng, 2008). This can reduce opacity, fraud, manipulation, and speculation in the capital markets. In the following years, many studies examined IR theory, most of them focusing on information disclosure and the influence on stock prices. There are fewer papers on the relationship between IR, ownership concentration, and company profitability.
Based on previous studies, this paper focuses on how IR influences company profitability, drawing on empirical evidence from China. At the same time, many studies show that ownership concentration has an influence on IR and company profitability. Ownership concentration has been an issue of interest in developed economies because it has a double-sided effect on investor relations and company profitability (Demsetz & Villalonga, 2001): it can bring benefits, but it may also bring agency problems. Therefore, I choose equity concentration as a moderating variable in the research model.
Based on the study, I hope to contribute as follows. Firstly, through the study of the relationship between investor relations, ownership concentration, and company profitability, this paper can enrich the investor relations literature. Secondly, this paper makes innovations in the construction of investor relations indicators.
A. Investor Relations and Company Profitability
At each stage of a company's development, from its beginnings to maturity, managers form expectations about the results of corporate strategic business activities in order to provide the right direction for the future development of the enterprise. The purpose is to enable enterprises to see their strengths and weaknesses and to see the favourable conditions or the difficulties in the external environment (Dul & Neumann, 2009), so that the company can adapt to an uncertain environment. If corporate strategic business objectives can be met or even exceeded, companies can obtain great benefits: they not only earn more money but also maximize their own profitability. Investor relations are an indispensable part of a business or organization's strategy, which sets the overall long-term direction, goals, tasks, policies, and resource allocation for a given period. Through active communication with investors across all sectors and segments of society, enterprises optimize their relationship with investors and thereby achieve maximum profitability (Esterhuyse & Wingard, 2016).
Investor relations are a part of the corporate governance structure that cannot be ignored. Corporate governance, through a series of laws and regulations governing the relationship between enterprises and stakeholders, provides a scientific division of responsibilities and a more rational, legitimate, and comprehensive programme for the enterprise. Ultimately, it protects the interests of the company's internal and external stakeholders (Abe de Jong et al., 2007). Investor relations adjust the relationship between the business and its shareholders and achieve its optimisation. From a case study of Royal Ahold, Abe de Jong et al. (2007) find that a sound corporate governance structure provides a strong guarantee for the improvement of corporate profitability. First of all, a sound corporate governance structure can significantly improve the profitability of enterprises; similarly, a more profitable company has more opportunities to optimise its governance structure. Second, when a company has a sound governance structure, it is able to realise each stakeholder's rights and interests, which improves people's trust in the enterprise. Shareholders will also put forward scientific and practical advice to management, and managers will develop better business strategies and management programmes, so that the management and governance structure of the enterprise are optimised and the company gains an advantage over other companies. Finally, investors can put forward better comments and suggestions on business management and planned activities, which allows the corporate governance structure to be genuinely optimised and governance efficiency to be maximised. Investor relations management and corporate governance can thus improve the profitability of enterprises; each occupies a unique position.
Regarding the relationship between IR and company profitability, Kirk and Vincent (2014) show that IR firms have higher market valuations, disclose more information, and have higher stock liquidity. They hypothesise a direct path from stock turnover to book-to-market value through increased stock liquidity. IR builds a credible information platform that increases the visibility of the company and improves the liquidity of the stock, thereby reducing the cost of stock transactions and increasing trading volume. This raises the stock price and decreases the cost of equity capital, which helps the company keep its stock at a high price level. Thus, as the level of investor relations increases, investors form a roughly correct inference about the development trend of the company, and major investors are more inclined to invest more money in the stocks of companies they are optimistic about, so that corporate profitability rises and the cost to the company of acquiring and using capital falls. Higgins (1992) also suggests that IR can improve the level and quality of information disclosure and reduce the information asymmetry between listed companies and investors. It enhances mutual trust between listed companies and investors, improves investor satisfaction and loyalty, and reduces the cost of equity capital. It also reduces the risk of listed companies being undervalued and thus maximises their value. On the other hand, according to Kelly et al. (2010), investor relations are important for company strategies and lead managers to significantly increase the time and energy they invest. In addition to financial aspects, the communication function of IR should be committed to meeting higher-level information needs such as corporate strategy information and communication activities.
Therefore, based on the research mentioned above, I propose Hypothesis 1: investor relations affect the profitability of the company, and a high level of investor relations can improve company profitability.
B. Ownership Concentration and Company Profitability
There are two popular views on the relationship between ownership concentration and a company's profitability. One holds that the two variables are related; the other holds that they are not.
Berle (1931) argues that there is a potential conflict between the interests of company managers who hold no shares in the company and dispersed minority shareholders. When a company's shares are dispersed, the company's performance is not optimised: the more diversified the ownership, the worse the company's operating performance and profitability. On the contrary, the company's business performance will improve if ownership is relatively concentrated. Shleifer and Vishny's (1986) model shows that a certain degree of ownership concentration is necessary; their point is that, other things being equal, a company's profitability is higher when large shareholders exist. Because major shareholders have the economic incentive and ability to restrain management from sacrificing the interests of shareholders in pursuit of their own interests, they can monitor managers' behaviour more effectively, help to enhance the effectiveness of the takeover market, and reduce agency costs. Also, the ratio of stock price to corporate earnings increases as ownership concentration increases.
Pedersen and Thomsen conducted a survey in Europe in 2000. They surveyed 435 large companies in 12 European countries and found a significant positive correlation between ownership concentration and the companies' return on equity (ROE). They then studied this conclusion in more depth and found that the relationship between ownership concentration and performance in these 435 large European companies is nonlinear, so ownership concentration has a negative influence on firm performance once it exceeds a certain point. In the follow-up empirical test in this paper, with reference to previous studies, I use return on assets (ROA) to measure the company's profitability.
On the other hand, concentrated ownership implies a conflict of interest between controlling shareholders and minority shareholders, and it is difficult to protect minority shareholders (La Porta et al., 1999). La Porta et al. (2002) argue that in emerging markets, weaker external regulatory mechanisms and less developed institutions may exacerbate the risk of major shareholders acting against the interests of minority shareholders. Wang and Shailer conducted empirical research in 2015 across different countries: Chile, Brazil, Turkey, China, Poland, Colombia, Hungary, Korea, Thailand, and Jordan. They divide ownership concentration into three levels: low, medium, and high. Compared with countries with lower concentration, firms with high and medium degrees of ownership concentration show relatively lower performance, and there is no evidence of nonlinear effects.
Against the background of China's economy, Yan (2006) finds that the performance of companies whose ultimate holder is non-state-owned is higher than that of companies whose ultimate holder is state-owned. Most existing listed companies in China originate from wholly or partly restructured state-owned enterprises. The ownership structure of these companies shows a high proportion of state-owned holdings, and the stock is too concentrated and cannot circulate. State-owned shares in a controlling position bring a long agency chain, incomplete information, insider control, and other problems. For example, if the principal holder of the shares is not clear and the rights over the "resources" are not very clear, the operator may use this property to maximize their own interests, which can easily give rise to strong insider control. In addition, she also suggests that a high degree of equity dispersion may not be the best choice for improving and optimizing the ownership structure; a more feasible way is for listed companies to form an ownership structure with relative controlling holdings, to maintain a moderate equity concentration, and to improve the corporate governance structure through the active participation of corporate shareholders. Based on Yan's research on the Chinese market, I add a "state holding" control variable in the later variable design to obtain a more accurate result.
According to previous studies, a company that maintains an appropriate ownership concentration structure can achieve better profitability, but the ownership structure should be neither too concentrated nor too dispersed.
C. Investor Relations, Ownership Concentration and Company Profitability
The literature review on investor relations, ownership concentration, and company profitability shows that they are related to one another. From the perspective of "law and finance" theory, ownership concentration is a substitute mechanism for the lack of legal protection of investors (Rubin, 2007). Equity is the right of the shareholder to obtain economic benefits from the company and to obtain the company's management rights based on the amount of capital invested (A. Gul et al., 2010). According to Berle (1931), the concentration of ownership will lead to an agency problem, and this agency conflict is mainly concentrated between large shareholders and small outside shareholders. High ownership concentration will therefore weaken the effect of IR. Few studies have examined the relationships among these variables. I study IR, ownership concentration, and company profitability, and in the process of verifying the hypothesis that IR has a positive impact on the company's profitability, I examine the moderating role played by ownership concentration.
Hypothesis 2: ownership concentration plays a moderating role in the process by which investor relations management affects the profitability of the company. Within a certain range, the profitability of a company with low equity concentration is higher than that of a company with high equity concentration.
1) Investor relations
The company website as a communication medium has specific advantages for companies and investors. For companies, it is cost-effective and flexible to publish information on a website; for investors, it is an easy, fast, and cheap way to obtain a reliable and up-to-date source of information. With the development of internet technology, more and more companies tend to use the corporate website as a communication channel for investor relations (Nel & Brummer, 2016). At present, almost all the listed companies in China have corporate websites with dedicated investor relations sections. Therefore, this paper uses a scoring system based on internet information to measure the level of a company's investor relations.
According to the UK Investor Relations Society (IRC) best practice guidelines (2013), the measurement of investor relations is divided into 11 categories: accessibility, navigation, timeliness, company information, financial information, relevant news, investment case, shareholder information, bondholder information, corporate governance, and corporate responsibility. Every category has different attributes. For example, accessibility means that the website of the company should be fully accessible to all investors; attributes under accessibility include website entry and different formats of investor relations material.
Following the guidelines, the scoring system used in this paper to measure the level of investor relations is divided into four aspects. The first is basic investor relations management information, which relates to accessibility and navigation. The second is strategic information, which relates to the investment case and financial information. The third is interactive communication information, which relates to shareholder and bondholder information. The last is other information, including company information, voluntary information disclosure, and social information. If the relevant information is disclosed on the website of a company, the attribute is assigned 1 point, otherwise 0. The scoring system has 25 attributes. To calculate the total IR score of a company, this paper refers to previous researchers (Aitken, Hooper & Pickering 1997; Zhao 2011) on the construction of an investor relations index model: all the scores are summed up and then averaged, i.e. IRI = (sum of Score_i for i = 1, ..., 25) / 25, where IRI is the investor relations index, Score_i is the actual score for each investor relations indicator, and 25 is the largest possible score. In this case, the IRI is a value between 0 and 1, with a maximum of 1 and a minimum of 0.
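As a concrete illustration of the index just defined, the short sketch below scores a hypothetical checklist and averages over the 25 attributes; the disclosure pattern is invented and does not correspond to any of the actual sample companies.

```python
def investor_relations_index(disclosed, total_attributes=25):
    # IRI = (sum of binary attribute scores) / (largest possible score).
    scores = [1 if item else 0 for item in disclosed]
    return sum(scores) / total_attributes

# Hypothetical result: 18 of the 25 website attributes are disclosed.
checklist = [True] * 18 + [False] * 7
print(investor_relations_index(checklist))  # 0.72
```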
2) Ownership concentration
Ownership concentration is a quantitative indicator that shows how concentrated or dispersed ownership is as a result of differences in the shareholding ratios of all shareholders. The proportion of corporate stock held by institutional investors and institutional entities continues to rise, and the main body of a large company's shareholders has been transformed from many individual shareholders into a small number of institutional shareholders. Since the top five major shareholders are institutional investors, the shareholding ratio of a company's top five shareholders can adequately represent the company's ownership concentration. Many papers adopt the top-5 shareholders' holding ratio as a proxy variable for ownership concentration (Gul et al., 2010; Gaur et al., 2015). This paper also uses the proportion held by the top 5 shareholders to measure ownership concentration in a company.
3) Company profitability
This paper uses the Return on Total Assets (ROA) to measure company profitability.
ROA is a ratio calculated by comparing net profit with average total assets over a period.
In other words, return on assets measures how effectively a company produces profits by managing its assets over a period (Herold et al., 2007). ROA is one of the most frequently used indicators of a company's profitability. A higher ROA indicates better use of corporate assets and better results in increasing income, saving costs, and other respects (Wijayanto, 2010).
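A small worked example of the ratio, using made-up figures rather than any of the sample firms:

```python
def return_on_assets(net_profit, assets_begin, assets_end):
    # ROA = net profit / average total assets over the period.
    average_assets = (assets_begin + assets_end) / 2
    return net_profit / average_assets

# Made-up figures (in millions of CNY), purely for illustration.
print(return_on_assets(net_profit=120, assets_begin=2_300, assets_end=2_500))  # 0.05, i.e. 5%
```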
4) Control variables
Some control variables also need to be included in the regression model. There are two types of control variable: one related to company profitability and the other related to ownership concentration. The first type includes company size, financial expense ratio, and asset-liability ratio. The second type includes the nature of the property (state ownership) and the management shareholding ratio.
C. Sample and Data Source
Based on the timeliness and availability of data, this paper selected listed companies from the Shanghai Stock Exchange and the Shenzhen Stock Exchange in China as the research samples. The observation period runs from January 1, 2014, to December 31, 2016. All the data come from the CSMAR database, one of the most widely used economic databases in China.
Some conditions and limitations should be considered when choosing suitable samples. The data were screened in the following order. Firstly, to ensure the continuity of the sample, I delete the sample companies labelled S (unfinished split-share reform), ST (ordinary risk warning), or *ST (company risk, generally two consecutive years of losses). Secondly, because of the particularity of the financial systems of financial and insurance companies, it is necessary to remove them to ensure data comparability.
In addition, when calculating and measuring the level of investor relations, the company's annual report and the company's website are used as the platforms through which listed companies carry out investor relations management activities. Finally, this paper includes 3371 A-share research samples.
IV. EMPIRICAL RESULT
In this section, the relationship between company profitability, investor relations, and ownership concentration for the listed firms is analysed using a multivariate regression model.

A. Descriptive Statistics

The descriptive statistics suggest that the investor relations mechanism in the capital market should be improved. They also show that the investor relations score of some companies is 0, which indicates that some of them have not built an investor relations system and do not even pay attention to this matter.

The relevant indicators of the companies' ownership structure are the nature of the property, the top-5 shareholders' holding ratio, and the management shareholding ratio. The mean of property nature is 0.62, which indicates that most of the sample companies are state-owned. For the top-5 shareholders' holding ratio, the maximum is 98.47% while the minimum is 0.81%, so there is a large difference in equity concentration between sample companies. The management shareholding ratio is very low: the maximum is only 0.81% and the mean is 0.02%, both close to zero. The proportion of management shareholding is too low, which indicates that the companies may not have adopted sufficient equity incentives (A. Gul et al., 2010).
B. Correlation Matrix
" Table IV" shows the correlation coefficient analysis of each major variable. As can be seen from this table, the correlation coefficients of the main variables are less than 0.35, indicating that there are no serious multiple collinearities between the variables of the regression model.
C. Regression Analysis
The empirical results support Hypothesis 1. In the regression analysis, the first column of Model 1 in the table shows that investor relations are positively correlated with the profitability of the company at the 1% significance level: ROA increases by 0.21% when the company's IR score increases by 0.1. This shows that a company in the capital market can earn more with a higher level of investor relations, and that the level of investor relations is reflected in the capital market. Therefore, listed companies can improve their profitability by improving the quantity and quality of information disclosure and by enhancing interaction with investors to attract them. From another point of view, investor relations improve the company's profitability because the company's managers can extract information from the feedback of securities brokers and investors about corporate value creation strategies (Higgins, 1992). Management can better understand how investors respond to specific company actions, which has a direct impact on the company's strategic decisions and stock prices. This also reflects the two-way communication of information in investor relations, which benefits both investors and companies.
Standard errors in parentheses; * p < 0.10, ** p < 0.05, *** p < 0.01.
The cross term IR * Stoh5 is added in Model 2 in "Table VI". In Model 2, IR and Stoh5 have an even more positive impact on the company's profitability: for each additional 0.1 of IR, ROA increases by 0.72%, and when Stoh5 increases by 1%, ROA increases by 0.071%. However, the cross term IR * Stoh5 has a negative influence on company profitability. This indicates that IR promotes ROA to a lesser degree when ownership concentration is higher.
Standard errors in parentheses; * p < 0.10, ** p < 0.05, *** p < 0.01.
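A sketch of the moderation test just described, assuming the panel has been pooled into one flat file of firm-year observations; the variable names, file name, and exact control set are placeholders rather than the authors' actual specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical flat file with one row per firm-year, 2014-2016.
df = pd.read_csv("sample_2014_2016.csv")

# Model 1: main effect of investor relations on profitability, with controls.
m1 = smf.ols("ROA ~ IRI + Size + FinExp + Leverage + State + MgmtShare", data=df).fit()

# Model 2: adds the interaction (moderation) term between IRI and top-5 shareholding.
m2 = smf.ols("ROA ~ IRI * Stoh5 + Size + FinExp + Leverage + State + MgmtShare", data=df).fit()

print(m1.summary())
print(m2.summary())  # a negative IRI:Stoh5 coefficient mirrors the weakening effect reported above
```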
Under concentrated ownership, major shareholders have the power and ability to influence the composition of management and to supervise the management decision-making process, which can effectively solve the agency problem of management that arises under dispersed ownership (Omran et al., 2008). But when the concentration of ownership in a company is too high, an agency problem appears between the controlling shareholder and small and medium-sized investors (Fama & Jensen, 1983). The company's management becomes ineffective, and there is no ability to form a proper balance between management and major shareholders. Large shareholders use their control of the company to pursue their own interests. According to Rubin (2007), large shareholders are better informed, and if information is held by a minority rather than by many people, the possibility that an insider will trade on such information is greater. They screen the information disclosed by the company and violate investor relations management norms. In short, from the perspective of the company's profitability, if ownership concentration is too high, major shareholders may limit the protective function of investor relations, which belongs to corporate governance, in order to obtain more expropriated benefits. This inhibits the role of investor relations in enhancing the company's profitability, and the result is lower profitability for listed companies.
V. CONCLUSION
This paper examines the relationship between investor relations, ownership concentration, and corporate profitability for Chinese listed companies. The total sample excludes companies with business risk or delisting risk, so the sample consists of companies with good business performance and stability, and the empirical results are therefore reliable. The research concludes that the investor relations of Chinese companies are positively correlated with the profitability of enterprises, which shows that raising the level of investor relations of listed companies will help to enhance their profitability. Investors make investment decisions through the information disclosure and interactive communication of listed companies. Through investor relations management activities, enterprises can let existing and potential investors understand their business situation and development prospects and thus obtain the recognition and trust of investors, which helps the company improve its value. Based on these results, listed companies and regulators need to understand the importance of investor relations. Specifically, the following aspects can be used to improve the level of investor relations of listed companies: investor relations management should be regarded as a strategic decision of the company; the company should develop rules and regulations on investor relations and hire investor relations management experts to train employees, creating a top-down atmosphere of learning about investor relations.
Resource Control and Secessionist Movements in Nigeria: Implications for National Unity and Development
The geographical entity called Nigeria came into existence on January 1, 1914, when the then Northern and Southern protectorates were merged. Since then, successive governments in the country have been trying to unite the diverse elements that make up the country, all to no avail. From the North and the South, there have been calls for the dismemberment of the country due to the failure of successive administrations to address the national questions. It is against this backdrop that this paper examines the issues confronting Nigeria's unity and suggests a way forward. The paper is anchored on elite and frustration-aggression theories and relies on secondary sources of data. The paper contends that injustice, high-handedness, and the marginalization of certain sections or regions of the country in the governance of the country account for resource control and secessionist movements in the country. The paper suggests justice and the inclusion of all sections of the country in the affairs of the state, among other measures.
INTRODUCTION
Nigeria came into existence on January 1, 1914, when the Northern and Southern protectorates were merged, and since then Nigeria has been a mere geographical expression because the British created it for administrative and economic convenience and not for internal coherence [1]. According to Lord Lugard, the amalgamation was aimed at unifying administrations and not peoples [2]. Besides, the various ethnic groups that were lumped together were never consulted on the merger [3]. This and the subsequent colonial policies that followed have been responsible for the acrimonious relationship that has existed among the various groups that make up the country.
In both the Northern and Southern parts of the country, there have been calls for secession by different groups championing the cause of their people. In the South-South, groups such as the Movement for the Survival of the Ogoni People (MOSOP), the Movement for the Emancipation of the Niger Delta (MEND), and the Niger Delta Avengers, among others, have campaigned for self-determination, justice, and resource control. In the South-East, there are the Movement for the Actualization of the Sovereign State of Biafra (MASSOB), the Biafra Zionist Federation (BZF), and the Indigenous People of Biafra (IPOB). The South-West parades groups such as the Oodua Peoples' Congress (OPC) and the Oodua Republic Front (ORF), while in the Northern part of the country groups such as the Middle Belt Federation (MBF) have also agitated for autonomy on the grounds of the "unfair provisions" of the 1999 Constitution [4].
The call for resource control and the dismemberment of the country has been attributed to marginalization [5]. In a like manner, [6] attributed the root causes of separatist agitation to political and economic marginalization as well as the government's hardline stance. By the same token, [7] traced the threats to national unity to poor national governance and leadership, over-centralization of power and resources, corruption, poverty, and lack of patriotism, among others. Similarly, [8] attributed secessionist activities to the inability of the government to foster a sense of common identity and national consciousness among the diverse groups that make up the country, bad governance, and the continuing promotion of inter-ethnic hatred and unhealthy rivalry. An eminent Nigerian historian attributed secessionist threats to the country's heterogeneous ethnic composition, varied administrative practices, controversial political and constitutional arrangements, cultural diversity, vast size, problems associated with Nigerian federalism, personality clashes between Nigerian leaders before and after independence, and the absence of a strong ideological magnet [9].
The agitation for resource control and the call for the dismemberment of the country have resulted not only in the loss of lives but have also created disharmony among the diverse groups that make up the country. It is against this backdrop that this paper examines resource control and secessionist movements in Nigeria and their implications for national unity and development. It addresses, among other things, resource control, its types and the rationale for it, secessionist movements and the reasons for the agitation for secession, and the implications of resource control and secessionist movements for national unity and development, and it suggests ways of addressing the problems.
The paper is organized into seven segments, of which this introduction is a part. The second section is the conceptual clarification and dwells on the concepts that are germane to the study. The third part analyses the theories on which the study is anchored. The next segment is an overview of secession threats in the country, while the fifth section takes a cursory look at secessionist movements in the south-east geopolitical zone of Nigeria. The sixth part examines the implications of resource control agitation and secessionist movement activities for national unity and development, and the last section presents the conclusion and recommendations.
Conceptual Clarification
It is imperative to clarify the concepts used in this discourse and these concepts are resource control, secession, secessionist movement, national unity, and development.
RESOURCE CONTROL: AN EXPLORATION
The term resource control has attracted different interpretations among scholars, politicians, activists, and policy analysts. As [10] rightly noted, "the quest for resource control by the people of the Niger Delta lies at the heart of the violence in the region" (p.42). For [11], resource control has been a recurring theme in the history of the Niger Delta of Nigeria, which the author attributed to the historical importance of the region. [12] defines resource control from four perspectives: those of politicians, militants, ordinary Deltans, and non-Deltans. For politicians from the Niger Delta region, resource control means personal enjoyment of the benefits of oil at the expense of most of the people they represent. The militants see it as a way of recovering, through armed struggle, the petroleum resources that have supposedly been taken by the country's power elite through political manipulation, while those outside the region view it as denying other parts of the federation the benefits of federalism by insisting on the control and enjoyment of a natural resource that should be the patrimony of all Nigerians. To the average Niger Deltan, it means environmental degradation, poverty, and hunger amid plenty. For [10], resource control refers to the "desire that the region is left to manage its natural resources, particularly its oil, and pay taxes and or royalties to the federal government" (p.42).
Following [13,10], resource control can be categorized into three types: absolute resource control, principal resource control, and increased derivation. These are discussed below.
Absolute Resource Control
This is a form of resource control in which all the resources of the region are owned and controlled by the people of the region. It is the kind of resource control envisaged in the Kaiama Declaration; paragraph 5 of the Declaration states that "every region should control its resources 100 percent, of which it will allocate funds for running the central government" (cited in [10], p. 42).
Similarly, [14] defined resource control as the total takeover of the resources situated in the oil-bearing states by the people of those states. For Ifedayo [cited in 15], resource control entails the access of communities and state governments to the natural resources situated within their frontiers and the liberty to develop and utilize these resources without interference from the central government.
Principal Resource Control
This is a type of resource control in which the oil-bearing communities play a key role or participate actively in the exploration, exploitation, marketing, and sale of the products [16,17,18]. For instance, [16] sees resource control as a "compelling desire to regain ownership, control, use and management of resources for the primary benefit of the first owner (the communities and people) on whose land the resources originate" [cited in 10, p.42].
For [18], resource control means "a direct and decisive role in the exploration for, the exploitation and disposal of, including sales of, the harvested resources." He identified three components of resource control as follows:

The power and right of a community or state to raise funds by way of tax on persons, matters, services, and materials within its territory. The exclusive right to the ownership and control of resources, both natural and created, within its territory. The right to customs duties on goods destined for its territory and excise duties on goods manufactured in its territory [18].
The seventeen state chief executives (governors) of the Southern part of Nigeria, in the communique issued at the end of their summit in Benin, Edo State, defined resource control "as the practice of true federalism and natural law in which the federating units express their rights to primarily control the natural resources within their borders and make an agreed contribution towards the maintenance of common services of the government at the center" [cited in 19, p. 1]. [20] define resource control as the right of the Niger Delta to take possession of and manage the revenue accruing from oil and other natural resources in line with the tenets of true federalism. Equally, [17] defines resource control as the control and management of resources by the state or local government where the resources are found, under the guidance of the central government, with an agreed percentage then paid to the central government.
Increased Revenue
Resource control from the perspective of increased revenue involves a rise in the present derivation percentage from 13% to 25%, as demanded by the elite of the region at the 2005 National Constitutional Reform Conference. [15] sees resource control as the way and manner in which government revenue is distributed among the different tiers of government, namely the federal, state, and local governments.
[21] defines resource control as "the substantive powers for the community to collect monetary and other benefits accruing from the exploitation and use of resources in its domain and deploy same to its developmental purposes." (p.46).
In the light of the above definitions, resource control entails the ownership, control, and management of a natural resource by a community, and the payment of an agreed percentage of the proceeds of that resource by the owners (community/state) to the central government for the discharge of the duties assigned to it by the constitution.
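To make the revenue-sharing arithmetic behind the "increased revenue" reading of resource control concrete, the short sketch below models a simplified allocation in which a derivation percentage of oil revenue is paid to the producing state before the remainder enters the federation account. The function name and the revenue figure are hypothetical illustrations, and the model deliberately ignores the other criteria used in Nigeria's actual allocation formula.

```python
def split_oil_revenue(oil_revenue, derivation_rate):
    """Toy model of the derivation principle.

    The producing state receives `derivation_rate` of the oil revenue
    generated within its borders; the rest goes into the federation
    account shared by all tiers of government. Figures are hypothetical,
    and other allocation criteria (population, landmass, etc.) are ignored.
    """
    state_share = derivation_rate * oil_revenue
    federation_account = oil_revenue - state_share
    return state_share, federation_account


# Compare the current 13% rate with the 25% rate demanded at the
# 2005 National Constitutional Reform Conference, using a purely
# hypothetical 100 billion naira of oil revenue.
revenue = 100e9
for rate in (0.13, 0.25):
    state, pool = split_oil_revenue(revenue, rate)
    print(f"derivation {rate:.0%}: state {state:,.0f}, federation account {pool:,.0f}")
```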
THE RATIONALE FOR RESOURCE CONTROL
Several reasons have been advanced for the agitation for resource control. They include, among others, environmental degradation, lack of infrastructure, poverty and unemployment, poor corporate social responsibility, and domination by the major ethnic groups.
Ako [10] attributes the demand for resource control to the perceived political and economic marginalization of the people of the region by the majority ethnic group leaders at the helm of affairs in Nigeria. Omoweh [cited in Dibua, 2005; 22] contends that the refusal of succeeding administrations in Nigeria to increase the level of participation of the oil-bearing communities in the management of their natural resources, as well as the environmental and social impacts of oil exploration, necessitated the demand for resource control. Corroborating this, [23] argues that government neglect of environmental management in the Niger Delta accounted for the demand for resource control and the violence in the country. For [24], the demand for resource control is meant to encourage the practice of fiscal federalism as the most effective means of liberating Nigerians from the results of authoritarianism and misrule.
It is important at this juncture to examine some of these factors stated above and how they contributed to agitation for resource control.
The environment of the oil-bearing communities in the Niger Delta has been degraded by oil spillage, gas flaring, and other activities of the transnational oil companies, such as oil exploration and exploitation. The inhabitants of the Niger Delta are concerned about the environmental degradation of their region because it is their source of livelihood: they depend on the land and rivers for subsistence, socio-cultural survival, food, and shelter.
Thus, a conflict of interest exists between the indigenes of the region and the Nigerian state concerning the environment [25]. Consequently, the people have no access to land on which they can farm, nor can they fish, because the rivers have been polluted and the fish destroyed in the process. This has created frustration and anger among the people, and they have not only demanded control of their resources but have also employed violent means to show their displeasure [26][27][28][29][30][31][32][33][34][35].
Another reason adduced for resource control is the lack of infrastructural facilities in the oil-bearing communities. The argument is that successive administrations in the country have neglected the region from which the bulk of the nation's resources is derived. The oil-bearing communities lack basic amenities such as roads, hospitals, electricity, and schools, and the inhabitants of the region accuse the central government of using their resources to develop other regions. [26] contended that the paucity of social amenities and the harsh socio-economic conditions fueled alienation among the people of the region and accounted for the agitation for resource control.
Furthermore, the nature of the Nigerian political system has contributed to the agitation for resource control. The Nigerian federal structure is centralized, with powers and resources vested at the center. Nigeria is also a mono-cultural economy, with the bulk of its revenue coming from oil, and the region from which the resources are derived feels short-changed by successive administrations because of the principles used in allocating revenue to the component units. The structural defects in the Nigerian federal system have been attributed to long years of military rule dominated by the majority ethnic groups, particularly the North, which used its position to advance the interests of the group and denied the rest of the federation, especially the region that produced the golden eggs, the fruit of its labor in terms of resources for its development [36].
Interestingly, several studies [37,38,39,40,41,42,43,44,45,46,47] have pointed to the defective federal system and the absence of equity in the disbursement of revenue among the federating units as the cause of the agitation for resource control in the Niger Delta. The people of the Niger Delta are dissatisfied with how the rents accruing from oil proceeds are distributed among the component units of the federation. This is one of their major grievances and is responsible for the armed conflict against the Nigerian state.
SECESSIONIST MOVEMENTS
Nigeria, like other parts of the world, has witnessed and is witnessing the proliferation of secessionist movements and persistent demands in certain regions of the country for the creation of an independent new state. Before defining the secessionist movement, however, it is imperative to explain the concept of secession. The term secession is a contested concept in the social sciences; indeed, there is no uniformity in its meaning among scholars. Secession has also been used interchangeably with self-determination, but the two are not the same: while self-determination can be realized within the borders of the existing state, for example through power-sharing arrangements, secession cannot be realized within the existing state [48]. Furthermore, self-determination emphasizes the right of a people to determine their destiny as regards their cultural, social, economic, and political development [49]. These rights, as [50] argued, cannot be applied internally by groups within an already independent state. However, [51] contended that the expression of self-determination does not often disintegrate a state; hence, some agitations could be about relative autonomy within the state while others might be for outright secession.
Like many social and political phenomena, secession has been a subject of inquiry by separate and often unrelated disciplines: legal studies, political science, and applied philosophy. This diversity of approaches has created a definitional problem [52]. However, scholars agree that secession involves the creation of a new state by the withdrawal of a territory and its population from an existing state. According to Caney (1998), cited in [53], secession refers to a territorial community breaking away from its former host state and founding its own separate state and sovereign political entity. In a like manner, [54] sees secession as "the creation of a state by the use or threat to use force without the consent of the former sovereign" (p.375). By the same token, Kohen (2006), cited in [55], sees secession as "the creation of a new independent entity through the separation of part of the territory and population of an existing State, without the consent of the latter, [also] to be incorporated as part of another State" (p.537). For Dahlitz (2003), cited in [55], secession arises whenever a significant proportion of the population of a given territory, being part of a state, expresses the wish by word or by deed to become a sovereign state in itself or to join with and become part of another sovereign state. [56] sees secession as a demand by an ethnic-nationalist group for either independence from, or significant regional autonomy within, a state.
From the foregoing, secession involves the breakaway or withdrawal of a territory from an existing state and the creation of a new one. Having defined secession, it is important at this juncture to examine the meaning of secessionist movements, which we do in the following paragraph.
According to [57], secessionist movements are groups seeking withdrawal from a larger political entity or country to become an independent state, separate from the country to which they formerly belonged. Put differently, secessionist movements are groups bent on having their own independent enclave, distinct from the one to which they hitherto belonged. Similarly, [58] sees a secessionist movement as "a conscious effort, attempt, and agitation by a group of persons of common primordial or constructed identity, interest, and destiny to pull out of an already existing sovereign state for an independent state of their own" (p.95). The methods adopted by these groups to achieve their objectives range from peaceful or non-violent approaches to violent ones. Examples of these groups in Nigeria are the Movement for the Actualization of the Sovereign State of Biafra (MASSOB), formed by Ralph Uwazuruike, the Biafra Zionist Federation (BZF), and the Indigenous People of Biafra (IPOB).
NATIONAL UNITY
The term national unity has been used interchangeably with national integration, nation-building, and national cohesion, and it has been defined differently by scholars. According to [59], national unity means a feeling of common purpose that binds peoples of diverse cultures, colors, and ethnic nationalities together as one. Similarly, Duverger, cited in [60], sees national unity as the process of unifying the various segments of a society to make it harmonious, based upon an order its members regard as equitably harmonious. Likewise, Morrison et al. (n.d.) define national unity as the process by which members of a social system develop linkages and location so that the boundaries of the system persist over time and the boundaries of sub-systems become less consequential in affecting behavior. In a like manner, Jacob and Teune, cited in [60], define national unity as a cordial relationship existing among members of a political community. It can also mean a state of mind or disposition that is cohesive and committed to acting to achieve mutual goals.
From the foregoing, national unity can be seen as a process whereby people of diverse beliefs coexist peacefully as one family under the national ethos and constitution.
NATIONAL DEVELOPMENT
The concept of development has been defined differently by scholars; put differently, development means different things to different people. For some, it means making life better for all. To others, development means economic growth (an increase in GDP). [61] equates progress and modernity with development. [62] defines development as a multi-dimensional process involving the totality of man in his political, economic, psychological, and social realities, among others. Likewise, [63] sees development as a multi-dimensional process involving the reorganization and reorientation of entire economic and social systems. By the same token, [64] sees development as involving steady and systematic change in the cultural, economic, and political spheres of society in a way that increases production, empowers people and their communities, protects the environment, strengthens institutions, improves the quality of life, and promotes good governance.
It is important at this juncture to define national development. [64] defined national development as the overall development, or collective socio-economic, political, and technological advancement, of a country or nation. [65], for his part, defined national development as the ability of a state to provide a source of living for the majority of its inhabitants, together with the elimination of poverty and the provision of adequate welfare, shelter, and clothing to its citizens. This means socio-economic growth, popular participation in politics, the overall restructuring and transformation of society, social justice, and positive changes in social relationships and intergovernmental relations.
From the foregoing definitions, national development is an all-around development that consists of the political, economic, technological, and social spheres of a nation.
THEORETICAL FRAMEWORK
This study is anchored on two theories, namely the elite and frustration-aggression theories. Elite theory was chosen because the elite decide how the political and socio-economic life of a nation is organized. Moreover, the elite shape the developmental direction of a country by the way they allocate resources: when resources are equitably distributed, development and peace prevail, but when resources are unjustly allocated, underdevelopment and violence prevail [66]. The frustration-aggression theory, for its part, enriches our understanding of the motives or driving forces behind the agitation for resource control and the secessionist activities in the country.
Elite Theory
In every society or organization, whether developed or developing, simple or complex, a class of people, selected or elected, occupies the topmost positions. This is due to their educational attainment, skills, position, and, in some cases, birth (royalty); this category of people is known as the elite.
The term elite refers to "a selected and small group of citizens and or organizations that control a large amount of power." It is also used to analyze the groups that either control societies or are situated at the top of them [67]. Elite theory stipulates that power is concentrated in the hands of a small group, the 'elite', in any given society [68]. This small group is called the 'Guardians' by Plato in his work Republic [68]. There are several versions of elite theory, ranging from those developed by Vilfredo Pareto, Gaetano Mosca, Robert Michels, C. Wright Mills, and Floyd Hunter to a host of others [69,70].
Pareto, in his insightful study of the elite, divided the elite into the governing and the non-governing elite and ascribed to the group a scholarly pre-eminence that differentiated it from the general populace. Similarly, Mosca, cited in [70], divides society into the ruling class and the non-ruling class. The ruling or political class comprises the elite and the sub-elite; the sub-elite in this setting refers to technocrats, managers, and civil servants, who stand above the masses in terms of access to opportunities from the state. The elite class, which consists of the governing and the non-governing elite, is highly organized compared to the masses and, as a result, cannot be challenged by them [70].
Michels's analysis is centered on bureaucracy rather than actual government undertakings. He contends that every social and political organization in society is run by a minority, which makes the decisions, and he attributes the oligarchic tendencies of organizations to their complex nature, the nature of human beings, and the phenomenon of leadership (cited in [70]). Other notable elite theorists include [71,72]. The major thrust of elite theory is as follows: in every society there is, and must always be, a minority which rules. According to [73], "it is organization which gives birth to the domination of the elected over the electors, of the mandataries over the mandators, of the delegates over the delegators. Who says organization says oligarchy" (p. 15). This indicates that oligarchy is a natural derivative of organization. In addition, Pareto argues that minority rule is inevitable in all societies, developed or developing, simple or complex. The minority that rules derives its initial power almost always from force, such as a monopoly of military power, but with time this power is transformed into domination through routinization. The minority ruling circle comprises all those who occupy powerful political positions.
Changes in the ruling class occur in several ways: through the recruitment of individuals from the lower strata of society into the ruling elite group; through the integration of a new group into the governing elite; or through a complete replacement by a "counter-elite" in a revolution. These changes in the composition of the elite group are known as the circulation of elites.
According to Pareto, people are ruled by elites, and throughout human history there has been a continuous replacement of one elite by another: new elites rise and old elites fall. In his words, "elite or aristocrats do not last. They live or take a position in a certain time. History is a graveyard of aristocracies" (cited in [69], p. 16). The importance or utility of elite theory for this study is that the elite are the managers who direct and allocate resources among competing groups in society. The failure of the elite to use these resources to improve the living conditions of all may cause people to revolt against those they perceive to be responsible for their predicament (unemployment, poverty, etc.).
Furthermore, the action or inaction of the elite regarding the management of oil resources, or their distribution among the component units of a federation like Nigeria and among units within the federation, can lead to violence. To paraphrase [74], the elite decide who gets what of the oil wealth, when, and how. Moreover, the control of this resource, as earlier pointed out, is a source of conflict among the different elite groups.
The theory also enhances our understanding of the intrigues and dynamics that characterize Nigerian body politics. Through elite theory, one can understand that both the governing and the non-governing elite have, through their policies and actions, manipulated the citizens to achieve their selfish goals, enriched themselves, and kept themselves relevant in Nigeria's political arena. The elite, especially the governing elite, have also used their positions of authoritative allocation of resources in society to cause disaffection among the people by pitching one group against the other.
According to [75], whatever Nigeria has or has not become is due primarily to the deeds and misdeeds of its leaders. This implies that the poor state of development of the country and the myriad other problems confronting it can be laid squarely at the doorstep of its leaders. In other words, the deficit in leadership, in terms of commitment, selflessness, and the political will to take the bull by the horns, has been responsible for the situation in which the country finds itself. [76] corroborates this assertion by saying that Nigeria's major problem is leadership. The Niger Delta region likewise faces a leadership problem: the elites of Niger Delta extraction are partly responsible for the problems confronting the region because of their misplaced priorities and their failure to prioritize the needs of their people; instead, they have compounded the problems of the region through corrupt practices such as the misappropriation of funds, which deprived the region of the resources needed for its development [77,78].
Nevertheless, the theory has been criticized by scholars. The notion of the elite revolves around power, yet this concept is not well defined by the classical elite theorists, which makes it possible to include in the ruling elite wielders of different sorts of power as well as those who wield no power [79]. Similarly, [80] contended that the elite theorists failed to develop a clear-cut concept of the elite and that most of their arguments were general and lacked concrete substance. [81] maintained that no single elite exercises overall influence on every aspect of decision-making. In his work Who Governs?, he examined three political issues in New Haven, Connecticut, namely party nominations for local elective offices, public education, and urban development, and found no single elite operating behind the scenes, but rather many lines of cleavage and politicians who were responsive to the desires of the citizenry.
The theory is also too simplistic because it fails to differentiate between different political systems and assumes that all political systems are the same: the genuine differences between democracy and authoritarianism are dismissed, and all are regarded as oligarchies. The argument that political elites are superior to the masses is simply an assertion; no objective criteria are provided by which we can measure the supposed superior quality of the elites [82].
Frustration-Aggression Theory
The frustration-aggression thesis states that aggression is a product of frustration. The theory analyses violence from the standpoint that when people are prevented from realizing their goals, they vent their anger on those they perceive as a hindrance to the realization of those goals.
The frustration-aggression theory is the brainchild of John Dollard (a psychologist) and his associates Doob, Miller, Mowrer, and Sears (cited in [83]) in their pioneering work on the subject, and of the later research led by [84]. The theory, as articulated by [85], states that "the occurrence of aggression always presupposes the existence of frustration and, contrariwise, that the existence of frustration always leads to some form of aggression" (p.338). However, [86] modified the second part of the statement, which holds that "the existence of frustration always leads to some form of aggression," to read: "frustration produces instigations to several different types of response, one of which is an instigation to some form of aggression." Dollard et al. (1939), cited in [87], see frustration as "an interference with the occurrence of an instigated goal-response at its proper time in the behavior sequence".
It is important to note that a hindrance does not in itself constitute frustration; it becomes frustrating only when one is striving to achieve the blocked goal. For Dollard et al. (1939), cited in [83], aggression means "any sequence of behavior, the goal response to which is the injury of the person toward whom it is directed". Aggression is, however, not likely to occur if aggressive behavior is repressed through strategies associated with punishment [87]. Eminent political scientists such as [88,89,87] have applied this theory to the study of political violence.
The theory examines violence from a psychological viewpoint and attributes it to the inhibition or blockage of goal attainment [90]. In trying to explain aggression, researchers point to the contrast between what individuals feel they need or should get and what they actually get, the need-get ratio [87], and to the contrast between expected and actual need fulfilment [88]. Where people's wants or desires are unmet, the inclination is for individuals to turn against those they consider responsible for frustrating their aspirations [91].
The crux of this theory is that aggression results from frustration: in circumstances where the actual yearnings of an individual are denied, whether directly or indirectly, by the way society is organized, the resulting disillusionment may lead such a person to express his or her displeasure through violence targeted at those considered responsible for the predicament [92]. The resurgence of secessionist movements in the south-east geopolitical zone of Nigeria can be attributed to the frustration of the people of the region arising from their marginalization by the Nigerian state.
The frustration-aggression theory enables us to comprehend the driving forces behind both the agitation for resource control and the resurgence of secessionist movements in the south-east of Nigeria, and why they have persisted despite the government's high-handed response to the activities of the movements in the region. Moreover, the theory enables us to understand that when people's yearnings are not met, this may produce frustration and aggressive behavior (violence). For instance, the Niger Deltans believe that they have been marginalized and short-changed by both the Nigerian state (through neglect of the region) and the multinational oil companies (through the environmental impact of oil exploration on the ecosystem), and thereby prevented from enjoying the oil wealth deposited in their region. In other words, the inhabitants of the oil-producing communities or states expect to derive benefits from this resource in terms of the development of the region: employment and the provision of social amenities such as hospitals, schools, roads, water, electricity, and the other good things that make life meaningful. Unfortunately, these social amenities are lacking and the people live in abject poverty.
Consequently, they became frustrated and blamed those they perceived to be responsible for their predicament. The youth in the Niger Delta, for instance, consider the multinational oil companies and the federal government, represented by the elite of the dominant ethnic groups, to be the stumbling block to the realization of their dream of benefiting from the abundant natural resources found in their region. Because of this, the multinational oil companies became targets of the youth, who used violent means to show their displeasure at being deprived of the benefits of owning the oil and gas resources found in their domains: they destroyed oil installations and facilities, kidnapped oil workers for ransom, and demanded autonomy and control of their resources. Frustration-aggression thus arises because the youth are unable to benefit from the oil wealth, which has been cornered by the elite and used to take care of themselves and their immediate families. The consequence is violence, which is also instigated by the elite.
Among the Igbos, complaints about the denial of the highest office in the land, the lack of infrastructure, poverty, and unemployment, among others, account for the frustration and aggressive behavior of the people towards the Nigerian state and their resolve to demand a separate state of Biafra.
THE THREAT OF SECESSION IN NIGERIA: AN OVERVIEW
Secessionist threats are not new in Nigerian politics [93]. They date back to the period before Nigeria's independence, and evidence abounds in the literature of the three defunct regions, through utterances by their political leaders, threatening to break away from the country.
According to Ojo (2004), the North threatened to secede in the wake of the counter-coup of July 1966, in which some officers of eastern extraction, including the then military head of state Major-General Aguiyi-Ironsi, and Col. Adekunle Fajuyi were killed. It was reported that the North designed a separate flag and composed a national anthem in a bid to proclaim 'The Republic of the North' [94].
The Eastern region also threatened secession in 1964 following the rigging of the 1964 General Election and 1965 Western regional elections. Before December 1964, the N.C.N.C, then led by M. I. Okpara, the Premier of the Eastern Region, openly threatened secession. During an interview on 24 December 1964, Okpara expressed the desire of the Eastern Region to secede from the Federation. Earlier, on 10 December 1964, President Azikiwe had in a dawn broadcast to the nation warned of the dangers of disintegration arising from the allegations made about the conduct of the 1964 federal election [9]. In the course of his nation-wide address, Azikiwe observed: I make this suggestion because it is better for us and our admirers abroad that we should disintegrate in peace and not in pieces. Should the politicians fail to heed this warning, then I will venture the prediction that the experience of the democratic [sic] Republic of the Congo will be child's play if it ever comes to our turn to play such a tragic role [9].
The secession threat that was actually carried out was that of the Eastern Region, when the late Col. Odumegwu Ojukwu declared the region an independent state, the Republic of Biafra, on May 30, 1967. This action led to the Nigerian-Biafran war, which lasted thirty months (May 30, 1967 to January 15, 1970). The war has been described as the first modern civil war in sub-Saharan Africa after independence and one of the bloodiest: about one to three million people died, mostly of starvation, and the levels of starvation were three times higher than those reported during World War II in Stalingrad and Holland [95]. Ojukwu [9] attributed the secession to Nigeria's exploitation of the Igbos and the systematic killings of the Igbos in Jos in 1945, in Kano in 1953, and in the northern parts of the country in 1966 following the first and second military coups.
In what is today known as the South-South geopolitical zone, an attempt was also made to pull the region out of Nigeria. Major Isaac Adaka Boro, an Ijaw man, led an armed campaign for Niger Delta autonomy, resource control, and self-determination for the people of the region in the mid-1960s. Put differently, Boro and his Niger Delta Volunteer Force declared the Niger Delta Republic an independent state on February 23, 1966, and gallantly engaged the federal forces in a battle that lasted twelve days.
In a like manner, the West threatened secession in 1953 over the status of Lagos. The colonial government and the Northern and Eastern regional governments supported the position that Lagos should be detached from the Western Region and remain a neutral territory as the federal capital. The Western regional government, led by Awolowo, vehemently opposed this and wanted Lagos to be administered as part of the Western Region. As the disagreement raged, Awolowo sent a strongly worded cable to the Secretary of State in which he claimed the freedom of the Western Region "to decide whether or not they will remain in the proposed Nigerian Federation" [9, p.570]. At the resumed constitutional conference of 1954 in Lagos, Awolowo's Action Group vehemently argued for a constitutional provision giving any of the federating regions the right to secede from the federation. This was opposed by Nnamdi Azikiwe's National Council of Nigeria and the Cameroons (NCNC), and the conference ended with an agreement that no secession clause would be written into the amended constitution (Aremu & Buhari, 2017) [93,57,9]. Since the restoration of civil rule in May 1999, there has been a resurgence of groups across the length and breadth of the nation demanding self-determination for their people on the basis of perceived injustice and marginalization. The call for the dismemberment of Nigeria, especially among the Igbos, has been attributed to, among other things, the treatment of the Igbos as second-class citizens in Nigeria, the denial of sensitive political positions to the Igbos, and the fact that the Igbo-dominated geopolitical zone has the fewest states (five) compared with the other geopolitical zones.
The marginalization and isolation of the Igbo ethnic group in the political, social, and economic arrangements of the country, coupled with the inability of all tiers of government to address key socio-economic and political developmental issues, as well as an inefficient and ineffective governance structure in the administration and management of the commonwealth, account for the agitation for both resource control and secession in the country [96].
Moreover, the manner in which the present administration of President Muhammadu Buhari has treated the Igbo people, particularly in political appointments, precipitated the renewed call for secession by the Igbos. The administration neglected the constitutional provision on federal character, which is meant to ensure fairness and a sense of belonging in appointments, the siting of projects, and so on. The violation of this constitutional provision is an invitation to anarchy, and it accounts for the repeated calls for secession among the Igbos. The following section examines these various groups, particularly those from the eastern part of Nigeria, which are the focus of this paper.
SECESSIONIST MOVEMENTS IN THE SOUTH-EAST GEOPOLITICAL ZONE OF NIGERIA
The restoration of civil rule in Nigeria in 1999 witnessed an upsurge of secessionist movements in the country, especially in the south-east, where numerous groups are demanding an independent state of Biafra. Some of these movements include the Coalition of Biafra Liberation Groups (COBLIG), the Biafra Foundation, the Biafran Liberation Council (BLC), the Biafra Actualization Forum, the Movement for the Actualization of the Sovereign State of Biafra (MASSOB), and the Indigenous People of Biafra (IPOB). This section of the paper examines the last two, MASSOB and IPOB. These two groups were selected because they are formidable secessionist movements with large followings in the region and have caused breaches of security there. Thus, this segment of the paper takes a cursory look at these groups, what they stand for, and their modus operandi.
The Movement for the Actualisation of Sovereign State of Biafra (MASSOB)
MASSOB was formed in Lagos on 13 September 1999 by an Indian-trained lawyer, Chief Ralph Uwazuruike. He was a member of the then ruling party, the Peoples Democratic Party, but became disappointed when the then President, Chief Olusegun Obasanjo, made appointments that excluded the Igbos [97, p.41]. The objectives of MASSOB include the actualization of an independent state of Biafra; supporting all entities using peaceful means to bring about Biafra; encouraging sincere and honest dialogue with all ethnic nationalities in Nigeria aimed at the peaceful separation of Biafra; and informing the world about the actualization of Biafra [98]. The leader of the separatist movement, Chief Ralph Uwazuruike, openly canvassed for the disintegration of the federation and periodically engaged the Nigerian security agencies in battles [99].
MASSOB claims to be a peaceful movement and adopted a strategy of non-violence in the realization of its objectives.
According to Uwazuruike [98], 'Biafra failed because of our violent approach, but this time around we do not want any casualty, yet we are more determined than ever to have our independent Biafra' (p.30). He maintains that the plight of the Igbos was unacceptable and called for the disintegration of the country along ethnic lines. In his very words: What you should understand prima facie is that Nigeria is no good, how Nigeria is being administered is not good. That is why some people are even calling for a sovereign national conference, some people are calling for Biafra and others say self-determination. What I am saying as a person is that I want the Soviet experience to happen in Nigeria. My idea is to let Nigeria divide into as many places as possible; let the people go (IRIN News 2005).
The leadership of the movement adopted different strategies across the twenty-five stages of the struggle for the actualization of Biafra. Some of the activities of the movement included: the formation of the Biafra Security Agency; the circulation of the Biafran currency, known as the Biafran Pound, and the mobilization of its use for business transactions; rallying Nigerians of Igbo extraction, mostly traders, to observe sit-at-home orders; mobilizing the boycott of the 2006 census exercise in Igbo states on the grounds that these states were not part of Nigeria; and organizing the popular Lagos soccer tournament as a means of pressing home its demands and making symbolic declarations of independence during these events [100,98].
The movement was also involved in communal and civil functions. These included the forceful seizure of fuel tankers moving from any part of the East to the North as a sign of protest against the inadequate supply of petroleum products to the East; taking on security issues in some cities in the East (especially Onitsha); the enforcement of the official price of petroleum products at filling stations in Igbo states; the enforcement of sanitation laws in urban cities in the East, with punitive measures for defaulters; the enforcement of rules on residence in states considered to be Igbo states or Biafran territories, together with the pegging of rents where they had become exorbitant; and the settlement of disputes between warring groups [98].
The Movement also internationalized its struggle through the submission of the Biafra Bill of Rights to the United Nations and in 2001, it opened Biafra House in Washington, DC to coordinate its international activities [98]. It is important to note that MASSOB's activism was limited to sensitization campaigns, radio and online propaganda, and trafficking in memorabilia [101].
Though the movement claimed to be non-violent, its strategy was seen as aggressive, and this led to the arrest of its leadership. For instance, in 2005 Uwazuruike was arrested and charged with treason, but he was granted bail in 2007 to enable him to attend the burial of his mother, who had died while he was in detention. MASSOB members also battled the Federal Government and the police, and this resulted in the deaths of some members. In 2006, Peter Obi, the then governor of Anambra State, issued a shoot-at-sight order against the Biafra activists, who were notorious for disturbing the public peace in Onitsha, the commercial hub of the state [102].
The Indigenous People of Biafra (IPOB)
The IPOB is a secessionist group that claims to represent the South-East geopolitical zone of Nigeria and has called for a referendum on an independent state of Biafra. It is the most popular, most radical, and most controversial of all the secessionist movements in the south-east of Nigeria, and it accuses MASSOB of compromising the vision of the Biafra actualization campaign after collecting money from the Nigerian government [103]. There seems to be no consensus among scholars as to when IPOB was formed: Chiluwa [103] was of the view that IPOB was formed in 2013, [105] and Goggins [106] believed that the separatist movement was formed in 2012, while for [107] IPOB was formed in 2014. Nevertheless, IPOB is a breakaway faction of MASSOB and is led by Nnamdi Kanu, a Nigerian-British citizen based in London; the deputy leader of the organisation is Uche Mefor. IPOB aims to restore the defunct Biafra, and its objectives include, among others, facilitating and advocating the Igbos' right to self-determination and fighting for the fundamental freedom of the Igbos in the diaspora [59].
IPOB's activities include the sensitization of the people through the distribution of flyers, meetings, marches, and prayer meetings. Though the group claims to be non-violent, the methods it adopted were violent, and this led the government to take a hard stance against it. The group made use of inflammatory and inciting statements, coupled with hate speech, as its modus operandi. According to [108]: IPOB has occasionally resorted to violent rhetoric, not least through the transmissions of Radio Biafra. The occurrence of clashes between security forces and activists, some resulting in casualties on both sides, has also been reported during IPOB arrangements.
Similarly, [109] and [110] contended that IPOB and its leading members adopted hateful and inciting statements, or what others have referred to as the language of beasts and cheap propaganda, on social media, calling for the dissolution of the country into different countries or states. IPOB's activities brought it into collision with law enforcement agencies; as a result, many members lost their lives in clashes with those agencies, and its leader Kanu and others were charged to court for treasonable offenses. For instance, in 2016 it was estimated that 146 people died in clashes between IPOB members and the law enforcement agencies [111].
The Federal Government of Nigeria, in its efforts to control the excesses of IPOB, adopted both force and legal action. The use of force involved military action against members of the group: armed forces were deployed to the region on a special operation code-named Egwu Eke II (Python Dance II), conducted between September 15 and October 14, 2017. The aim of the exercise, according to the Nigerian Army, was to rid the region of criminal elements [103]. The legal action involved the proscription of the group by an Abuja Federal High Court, following an ex-parte motion filed by the Attorney General of the Federation. The section that follows examines the implications of resource control agitation and secessionist movement activities for national unity and development.
IMPLICATIONS OF RESOURCE CONTROL AND SECESSIONIST MOVEMENTS ON NATIONAL UNITY AND DEVELOPMENT
There is no doubt that the agitation for resource control and the activities of the secessionist movements, especially those examined in this piece, have far-reaching consequences for national unity and development in Nigeria. This segment of the paper takes a cursory look at some of these implications.
The agitation for resource control by the South-South people resulted in the polarization of the country into those in favor of and those against resource control, and this affected the unity of the country. The Southern part of the country was in favor of resource control while the North was against it. For instance, the elite from the Niger Delta were upset by the failure of successive administrations to attach more weight to derivation or to review the derivation principle upward. At the 2005 constitutional conference, they demanded an upward review of derivation to 25% in the first instance, to be increased to 50% after five years and eventually to 100% at some point in the future [112]. The elite from the Northern part of the country vehemently opposed these demands, feeling that much had already been conceded to the region, and as a result the delegates from the Niger Delta staged a walkout from the conference. The 2005 conference nevertheless recommended an interim increase in derivation to 17%, pending the outcome of an expert commission (Adeosun, 2018) [36]. Similarly, the 2014 conference recommended that the government set up a technical committee to determine the appropriate derivation percentage and to address other issues such as special intervention funds and the reconstruction and rehabilitation of areas ravaged by insurgency [113]. By the time the report was submitted to the Jonathan administration, the country was already preparing for the 2015 General Election, electioneering was under way, and the implementation of the report became a campaign issue. In other words, the implementation of the 2014 National Dialogue report was politicized, and the recommendations of the conference were not implemented. The present administration, which succeeded the Jonathan administration after the latter's defeat in the 2015 General Election, remarked that the report of the conference had been confined to the archives.

Resource control agitation has also resulted in militancy and violent conflict among the people of the Niger Delta, owing to the inability of successive administrations to address the issues of underdevelopment, poverty, environmental degradation, and youth unemployment in the region. Different militant groups have emerged in the region campaigning for self-determination and autonomy, among them the Movement for the Emancipation of the Niger Delta (MEND) and the Niger Delta Avengers. These groups employed violent means to accomplish their objectives: they destroyed oil installations, kidnapped oil workers, bombed government infrastructural facilities, and engaged in oil theft through syndicates, among other activities. The resultant effects of militant activities in the region included the loss of jobs by the youth of the region, due to the bombing and closure of some oil installations, as well as the relocation of some oil companies' headquarters to Lagos.
In the Niger Delta, or South-South geopolitical zone of Nigeria, the resource control protest has led to unity among the different classes of the elite in the region in their quest for an increase in the 13 percent derivation formula, and to the establishment of the Niger Delta Development Commission, which is mandated to cater for the socio-economic development of the region. The agitation has also led to improvements in corporate social responsibility by the transnational oil companies. For example, the Shell Petroleum Development Company spearheaded efforts to fight all forms of pollution in the region, while Chevron and SPDC are encouraging agriculture in the region [36,10,114].
The secessionist agitations in the south-east geopolitical zone of Nigeria have serious implications for the unity and development of the country. The agitations have affected the economic activities of the region, as many man-hours were lost to protests and many companies relocated to safer places where their investments would be protected.
[115] contended that the recurring agitation for Biafra has both regional and national security implications, including the chance that the mobilization of potential protesters could escalate armed violence and worsen existing levels of insecurity. Besides, the country is currently facing several insurgencies in different parts of its territory; the addition of a south-east security threat would overstretch the security forces and lead to an increase in government spending on defense and a reduction in budgetary allocations to social services such as health and education.
It could also lead to organized attacks on people of the south-east geopolitical zone residing in the northern part of Nigeria. Indeed, the quit notice issued by a coalition of Arewa youths to the Igbos residing in the north, asking them to leave by October 1, 2017, was a response to the activities of IPOB. Though the quit notice was later suspended, it showed the far-reaching consequences of the agitation.
The recurrent agitation by the secessionist movements for an independent Biafran state also has serious implications for political stability and democratic consolidation. The demand for Biafra by the secessionist movements in the south-east can produce snowball effects, whereby other groups in other regions of the country demand greater autonomy or separation [115]; the separatist agitation by Sunday Adeyemo for an independent Oodua Republic is a case in point. More so, the activities of the secessionist movements (MASSOB and IPOB) could raise the risk of inter-ethnic disaffection, destabilize Nigeria's democracy, and worsen the crisis of confidence between the government and the various ethnic groups in the country.
Another implication of the secessionist movements' activities in the south-east of Nigeria is the disruption of economic activities in the region and the country at large. The frequent demonstrations by MASSOB and IPOB members, and the clashes between them and the security agencies, often disrupt economic activities in the locations where these protests occur, with serious implications for both the region and the country in terms of revenue, employment generation, and the image of the country. A corollary to this is the discouragement of investment in the region in particular and the country as a whole. With increasing hostilities between the secessionist groups and the Nigerian authorities, the investment climate in the South East could become more unfriendly, discouraging potential investors from directing their resources to the area [115]. No shrewd investor would invest his resources in an unstable environment.
CONCLUSION AND RECOMMENDATIONS
The paper examined resource control agitation and secessionist movement activities in Nigeria, with a focus on the south-east geopolitical zone of the country, as well as the implications of such demands for national unity and development. The paper was anchored on elite and frustration-aggression theories, and through these theories it was established that the elites, ranging from traditional rulers and businessmen to, most importantly, political office holders, have failed to provide good governance at all levels of government, and that this failure has made the people angry and frustrated. The paper revealed that the demands of the resource control and secessionist groups for the restoration of the defunct Biafra, and the resurgence of separatist demands, could be attributed to environmental degradation, lack of infrastructure, poverty, unemployment, and marginalization. The implications of resource control agitation and the secessionist groups' activities were thoroughly examined; these include the disruption of economic activities and the discouragement of investment, threats to political stability and democratic consolidation, regional and national insecurity, the polarization of the country along ethnic and religious lines, and militancy and violence. It is on the basis of these findings that the paper makes the following suggestions.
RECOMMENDATIONS
The government should, as a matter of urgency, address the root causes of resource control agitation and secessionist movements in order to secure lasting peace in the country. One way forward is to address the environmental degradation of the Niger Delta by allocating more resources to cleaning up the environment polluted by oil spillage and by enforcing the rule that oil companies operating in the region stop gas flaring.
Moreover, the government should implement the recommendations of the 2005 and 2014 constitutional conferences, especially those relating to revenue allocation and the devolution of powers. Presently, there is agitation for the restructuring of the polity; this has already been addressed by the 2014 Constitutional Conference report, and what the government should do is revisit those recommendations and implement them.
The present federal structure is centralized, with powers concentrated at the center. There is therefore a need to devolve powers and revenue to the component units so that they can discharge their constitutional responsibilities.
The government should be inclusive; by this the writer means that all segments of Nigerian society should be involved in the administration of the country. The Igbo have cried marginalization over their exclusion from the present administration of President Muhammadu Buhari. The government should look into this and take appropriate steps to address the problem through the appointment of more Ndi Igbo into strategic positions in the present administration. In other words, attention should be paid to addressing the governance and structural issues that gave birth to the renewed agitation.
These recommendations, if implemented, would go a long way in addressing some of the perceived problems which threaten the unity and development of the country.
Higgs Inflation and the Electroweak Gauge Sector
We introduce a new method that allows for the Higgs to be the inflaton. That is, we let the Higgs be a pseudo-Nambu-Goldstone (pNG) boson of a global coset symmetry $G/H$ that spontaneously breaks at an energy scale $\sim 4\pi f$ and give it a suitable $SU(2) \subset G$ Chern-Simons interaction, with $\beta$ the dimensionless Chern-Simons coupling strength and $f$ an $SU(2)$ decay constant. As a result, slow-roll inflation occurs via $SU(2)$-induced friction down a steep sinusoidal potential. In order to obey electroweak $SU(2)_{\rm L}\times U(1)_Y$ symmetry, the lowest-order Chern-Simons interaction is required to be quadratic in the Higgs with coupling strength $\propto \beta^2/f^2$. Higher-order interaction terms keep the full Lagrangian nearly invariant under the approximate pNG shift symmetry. Employing the simplest symmetry coset $SU(5)/SO(5)$, $N$ $e$-folds of inflation occur when $N \approx 60 \left(g/0.64\right)^2\left[\beta/\left(3\times 10^6\right)\right]^{8/3}\left[f/\left(5\times 10^{11}\ {\rm GeV}\right)\right]^{2/3}$, with $g$ the weak isospin gauge coupling constant. Small values of the decay constant, $f \lesssim 5 \times 10^{11} {\rm GeV}$, which are needed to address the Higgs hierarchy problem, are ruled out by electric dipole measurements and so successfully explaining inflation requires large $\beta$. We discuss possible methods to achieve such large couplings and other alternative Higgs inflation scenarios outside the standard modified-gravity framework.
I. INTRODUCTION
Inflation posits the existence of a slowly-rolling scalar field, the inflaton, to drive a period of accelerated expansion in the early Universe as a means to solve several problems in cosmology [1]. Given that we already have a scalar field in the Standard Model (SM), the Higgs [2][3][4], a possible minimal realization of inflation identifies the inflaton as the SM Higgs. The SM Higgs potential, however, is not flat enough to both reproduce the observed matter power spectrum and sustain a sufficiently long inflationary phase, as its quartic self-interaction is too large [6][7][8][9][10]. As a result, scenarios beyond the SM must be considered to achieve successful Higgs inflation.
One paradigm meant to address the eta problem is natural inflation, where the inflaton is endowed with an approximate shift symmetry that protects the shape of its potential [58]. In natural inflation, the inflaton is a pseudo-Nambu-Goldstone (pNG) boson with a sinusoidal potential derived from the UV physics of a strongly-interacting vacuum. Cosmic microwave background observations have pushed the original variant of this model to have large super-Planckian excursions [59], but its general form remains attractive [60][61][62][63][64][65][66][67][68][69]. In particular, if the inflaton is able to efficiently dissipate its kinetic energy through friction, then sub-Planckian excursions, characterized by a steep potential, do not pose an issue. Such a solution was first proposed through the use of Abelian gauge fields [70,71] and then followed up with its non-Abelian variety [72,73], known as chromo-natural inflation [74][75][76], with a combination of both ideas put forth recently [77]. It is also possible to use thermally-induced [78,79] or scalar-induced [80] friction, rather than gauge-induced, to remove unwanted fast roll.
Here we construct a model of Higgs inflation where the Higgs is a pNG boson, based on Ref. [81]. (An inflationary setup where the Higgs is a pNG boson was also considered in Ref. [82], although there the Higgs does not play the role of the inflaton.) We use an effective field theory (EFT) framework, in the spirit of chromo-natural inflation, to keep the Higgs slowly rolling down the steep pNG potential. This friction mechanism is the new ingredient which obviates the need for the flattening method of all previous Higgs inflation models. With the aim of demonstrating the key inflationary dynamics and putting such dynamics in the context of a realistic pNG Higgs model, we consider two Higgs inflation variants of increasing complexity. Specifically, we first consider a toy minimal pNG Higgs inflation model where the Higgs boson interacts with the weak isospin gauge fields. Second, we investigate a more complete setup, based on the "littlest Higgs" model [83], and thus examine effects of additional gauge fields (which are present in all pNG Higgs models). For specific parameters, such realistic models, and their generalizations, are known solutions to the Higgs hierarchy problem [84][85][86][87]. This model then also allows us to engage with the possibility that both the Higgs and inflaton hierarchy problems are resolved in the same manner. We note that simpler pNG Higgs models have been ruled out experimentally [88,89]. In all cases, we find that a sufficient number of e-folds can be achieved, albeit with large EFT couplings.
This work is organized as follows. We present a minimal pNG Higgs and review the littlest Higgs model in Sec. II. In doing so, we demonstrate how our inflationary model can be implemented in an already-existing solution to the hierarchy problem. We also show how to obtain EFT couplings consistent with electroweak $SU(2)_L \times U(1)_Y$ symmetry. Then, in Sec. III, we present both the minimal pNG Higgs inflation model as well as its littlest Higgs variant and delineate the necessary model parameters to achieve successful inflation. We discuss potential methods to achieve large EFT couplings, lay out future directions of study, and conclude in Sec. IV.
Conventions and Notation: We let $\hbar = c = 1$ and use $M_{\rm Pl}^2 = 1/(8\pi G)$ as the reduced Planck mass. We let an overdot denote a derivative with respect to cosmic time, $\dot f = df/dt$, and a prime denote a derivative with respect to the number of e-folds $dN = d\log a$, $f' = df/dN$, with $a$ the scale factor of the Friedmann-Lemaître-Robertson-Walker (FLRW) metric $g_{\mu\nu} = a^2\,{\rm diag}(-1, 1, 1, 1)$.
II. PNG HIGGS
We review the standard structure of the littlest-Higgs model and show how to express dimension-six Chern-Simons EFT couplings that obey the weak isospin $SU(2)_L$ symmetry. Given the overall complexity of this model, we begin in Sec. II A by displaying the minimal particle setup that will be required for inflation. We then address the entire particle-physics model in Sec. II B, delineating all new particles along with their interactions, calculating the form of the Higgs doublet potential, and concluding with the Chern-Simons EFT coupling. Readers interested in only the inflationary dynamics should go to Sec. III.
A. Minimal pNG Higgs
We display a toy minimal pseudo-Nambu-Goldstone (pNG) Higgs model relevant for inflation that encompasses a wide variety of pNG Higgs models. The components common to this toy model consist of the usual Higgs doublet $H$, but now with a periodic potential $V(H)$ arising from higher-order dynamics. In addition, they contain a Higgs coupling to weak isospin gauge bosons through both covariant derivatives and a weak-isospin Chern-Simons current. Altogether, these terms take form in the Lagrangian, where $D_\mu = \partial_\mu - igW^a_\mu \tau^a - (i/2)g' B_\mu$, $g$ and $g'$ are the weak-isospin and hypercharge gauge couplings associated with the bosons $W^a_\mu$ and $B_\mu$, respectively, and $\tau^a = \sigma^a/2$, with $\sigma^a$ the Pauli matrices. In addition, $W^a_{\mu\nu} = \partial_\mu W^a_\nu - \partial_\nu W^a_\mu + f^{a}{}_{ij} W^i_\mu W^j_\nu$ is the field-strength tensor associated with the weak-isospin bosons. Finally, $\tilde W^{a\,\mu\nu} = (1/2)\,\epsilon^{\mu\nu\alpha\beta} W^a_{\alpha\beta}$ is the Hodge dual of the weak-isospin field-strength tensor, with $\epsilon^{\mu\nu\alpha\beta}$ the Levi-Civita tensor [$\tilde\epsilon^{\mu\nu\alpha\beta} = \sqrt{-\det(g_{\mu\nu})}\,\epsilon^{\mu\nu\alpha\beta}$ being the flat-space Levi-Civita symbol], and $f^{ajk} = \epsilon^{ajk}$ the $SU(2)$ structure constants. Generically speaking, the Higgs potential $V(H)$ may be any periodic function; in the simplest case it is a cosine potential, but other calculable potentials are possible and depend on the specific model being considered, as will be seen in later sections of this work. We work in the unitary gauge, $H = (0, h/\sqrt{2})^T$, with $h$ the background Higgs field. In this minimal setup, it will be the electroweak sector, by itself, that drives inflation.
B. Littlest Higgs
We focus on a model that can supply all of the components described in the previous section, known as the littlest Higgs [83,90,91]. This model is based on a global $SU(5)/SO(5)$ coset and is the minimal enhancement to the Standard Model particle content that realizes a viable pNG Higgs. More specifically, the full theory is given by the Lagrangian assembled in the remainder of this section. To begin, recall that symmetries and symmetry breaking are key in many theories of particle physics. Little Higgs theories are generically built upon an interplay between spontaneously and explicitly broken global and gauged symmetries. This interplay mirrors the physics that describes pions as pseudo-Nambu-Goldstone bosons of a chiral flavor symmetry. A similar argument is invoked in little Higgs models, whereby the smallness of the Higgs mass is due to some underlying strong dynamics, while the particular details of the underlying UV theory are negligible; the relevant physics is instead encoded in a semi-UV-complete theory. We first review its standard formulation in Sec. II B 1 before detailing our additional components in Sec. II B 2.
Standard Littlest Higgs
The littlest Higgs theory is characterized by a partial UV completion of the Standard Model using an approximate global $SU(5)$ symmetry along with a gauged $SU(2) \times [SU(2) \times U(1)]$ subgroup. We note that the original formulation of the littlest Higgs instead gauged an $[SU(2)\times U(1)]^2$ subgroup [83]. This subgroup differs from our choice by one factor of $U(1)$, i.e. it has an additional $U(1)$ gauge field. The removal of this additional $U(1)$ field does not affect the particular conclusions presented here [90,92]. In fact, the removal of the $U(1)$ gauge field relieves some of the phenomenological tensions encountered by the original littlest Higgs; we thus omit it. In terms of new field content, before global $SU(5)$ symmetry breaking, the littlest Higgs contains a collection of 11 scalar fields, three massless gauge bosons, and a heavy vector-like fermion. We now describe how this new field content is realized.
The 11 scalar fields, along with the 4 degrees of freedom in the Higgs doublet, are embedded within a symmetric $5 \times 5$ matrix scalar $\Sigma$ in the 15 representation of $SU(5)$. Upon spontaneous symmetry breaking at an energy scale $\Lambda \sim 4\pi f$, this scalar obtains a vacuum expectation value (VEV) $\Sigma_0$, with $\mathbb{1}_N$ denoting the $N \times N$ identity matrix and blank entries zeros. This VEV spontaneously breaks $SU(5)$ down to an $SO(5)$ subgroup. $SU(5)$ has 24 generators, while $SO(5)$ has 10 generators. It follows that $24 - 10 = 14$ massless scalars are produced in the particle spectrum, each corresponding to a broken generator of $SU(5)$. Moreover, the VEV in Eq. (3) is chosen so that these 14 fields live in the fundamental representation of $SO(5)$ upon breaking. In addition to the 14 massless scalars, there is a single massive scalar that represents deviations from the VEV along the direction of symmetry breaking. We integrate this scalar out and make no further mention of it henceforth. The Lagrangian for the matrix scalar alone is that of a nonlinear sigma model (NLSM), with $\Pi$ the 'pion' field matrix and $T^a$ the unbroken $SU(5)$ generators ($f$ is also called the 'pion' decay constant). Writing the pion matrix out fully, we have, as promised, a theory of 14 scalar fields: $\phi$, a complex triplet, arranged as a symmetric $2\times2$ matrix; $H$, a complex doublet (our Higgs candidate); $\omega$, a real triplet, arranged as a Hermitian traceless $2\times2$ matrix; and the real scalar $\eta$. The above fields transform with a shift symmetry under the broken generators of $SU(5)$, the specific field which does so depending on which broken generator is used for the transformation. Under the regime described thus far, the particles in $\Pi$ would be exactly massless at all scales, per the Goldstone theorem, with Lagrangian as in Eq. (4). The symmetry is also explicitly broken, by gauging an $SU(2) \times [SU(2) \times U(1)]$ subgroup. This breaking is done by promoting the derivative in Eq. (4) to a covariant derivative. In so doing, the Goldstone bosons become pseudo-Goldstone bosons that develop a mass proportional to the order parameter of the explicit breaking (which in this case is set by the gauge couplings of the gauged subgroup). This promotion also endows the Higgs with the requisite gauge couplings. The covariant derivatives involve $Q^a_j$ and $Y$, the generators of the $SU(2) \times [SU(2) \times U(1)]$ gauged subgroup (i.e. $j \in \{1, 2\}$ and $a \in \{1, 2, 3\}$); these generators are written explicitly in Appendix B. With this promoted setup, the vacuum in Eq. (3) spontaneously breaks the $SU(2) \times [SU(2) \times U(1)]$ symmetry down to the $SU(2)_L \times U(1)_Y$ of the Standard Model prior to EWSB. When this breaking occurs, the SM couplings are recovered by the relations $g = g_1 g_2/\sqrt{g_1^2 + g_2^2} = e/\sin\theta_W = 0.64$ and $g' = e/\cos\theta_W = 0.34$. Moreover, this breaking means that three of the gauged generators are broken, and the corresponding Goldstone modes become the longitudinal modes of heavy $W$ bosons post-symmetry-breaking. In other words, the heavy combinations of the $W_1$, $W_2$ bosons eat a combination of the three $\omega$ and the $\eta$ NGBs, leaving three massive bosons (heavy counterparts of the SM $W$ bosons) and three massless bosons, which are the weak isospin bosons of the SM prior to EWSB. As a result, there is also a leftover massless Goldstone mode corresponding to the ungauged $U(1)$ subgroup; other works that have considered this model [92] have ignored the effects of this mode and we do so as well. The heavy gauge bosons acquire masses set by the gauge couplings and the scale $f$. The $U(1)$ gauge boson corresponds to our hypercharge $U(1)_Y$.
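For orientation, the standard littlest-Higgs expressions for the vacuum, the NLSM kinetic term, and the pion-dressed field, which Eqs. (3)-(5) presumably follow (the normalization shown here is an assumption, not taken from the original equations), read
$$\Sigma_0 = \begin{pmatrix} & & \mathbb{1}_2 \\ & 1 & \\ \mathbb{1}_2 & & \end{pmatrix}, \qquad \mathcal{L}_{\rm NLSM} = \frac{f^2}{8}\,{\rm Tr}\left|\partial_\mu \Sigma\right|^2, \qquad \Sigma = e^{i\Pi/f}\,\Sigma_0\,e^{i\Pi^T/f} = e^{2i\Pi/f}\,\Sigma_0,$$
with the pion matrix built by contracting the fields listed above with the broken generators, $\Pi = \pi^a X^a$.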
The kinetic terms for the $SU(2) \times [SU(2) \times U(1)]$ gauge bosons are added in the canonical manner, which we label by $\mathcal{L}_{\rm gauge\ kin}$. Coset models, such as this one, are also distinguished by their implementation of the coupling between the Higgs and top quark. The top is special because its large Yukawa coupling could potentially disrupt the naturalness of the Higgs potential. Other quarks, and leptons, are typically included according to their SM description. The littlest Higgs model seeks a minimal implementation (meaning the least number of new fermions). Hence, the $\Sigma$ field can be coupled to the top quark through a term with indices $i, j, k \in \{1, 2, 3\}$ and $x, y \in \{4, 5\}$. Here, $\psi$ is an $SU(3)$ triplet obtained by enhancing the usual left-handed third-generation quark doublet $q_3$ with a new heavy top, labelled $T$. Note the presence of two couplings, $\lambda_1$ and $\lambda_2$, as well as the Dirac mass term for the new heavy top $T$. The top quark mass eigenstate will be a mixture of the new heavy top and the usual left-handed quark doublet top. As a result, the Higgs couples neither to $\lambda_1$ nor $\lambda_2$, but to the product $\lambda_1 \lambda_2$.
Thus if either coupling is turned off, the Higgs receives no quadratic divergence from the top quark at all. This lack of divergence is a manifestation of the generic collective symmetry breaking pattern of little Higgs theories. When either coupling is turned off, the theory has an enhanced global $SU(3)$ symmetry, which renders the coupling technically natural. The other fermions of the Standard Model may be incorporated in the usual fashion; although this does violate the collective symmetry breaking structure that prevents quadratic divergences in the scalar sector, the relative lightness of all other fermions compared to the Higgs means that such divergences are acceptable and to some extent negligible.
Now we must find the potential that the Higgs field experiences. Note that most treatments of the little Higgs only consider the effective potential to leading or next-to-leading order in the Higgs field $h$. Here we instead consider a full treatment, in order to illustrate the periodic potential experienced by the Higgs field, which plays an important role in the inflationary dynamics.
A one-loop analysis yields additional operators that must be added to the Lagrangian in Eq. (7) in order to account for quadratic divergences, one set arising from gauge boson loops and one from fermion loops. Here, $c$ and $c'$ are undetermined coefficients, consistent with a Wilsonian renormalization analysis, whose exact values depend on the UV completion. In the unitary gauge, the VEV of the $\Sigma$ field, $\langle\Sigma\rangle$, may be written in terms of the Higgs doublet as a column vector. Using Eq. (5), we can write the VEV of $\Sigma$ as a function of the Higgs boson $h$; the expression for $\Pi$ from Eq. (13) allows us to do so explicitly. Plugging $\langle\Sigma\rangle$ into the terms shown in Eqs. (11)-(12) yields a tree-level potential for $h$ [90,93], where $D/f^4$ is an $\mathcal{O}(10^{-2})$ constant and $\lambda_\pm = c\,(g_1^2 \pm g_2^2) \pm 16\,c'\,\lambda_1^2$. This potential is periodic in $h$, and electroweak symmetry breaking (EWSB) does not take place, as can be seen by expanding the potential about $h \to 0$; this feature is in keeping with the expectation of the Higgs as a Goldstone boson. A small non-zero Higgs mass proportional to the $U(1)_Y$ coupling $g'$ does arise at this level, due to the explicit breaking of $SU(5)$ by the gauging of the $U(1)_Y$ subgroup. It can be shown that this mass must be positive, meaning EWSB does not take place at tree level [93]. EWSB instead arises through the Coleman-Weinberg mechanism, resulting in a negative loop-level Higgs mass and self-coupling terms. The full VEV of Eq. (14) can be used to compute the mass matrices for the gauge bosons $M_W$, fermions $M_t$ and remaining scalar fields $M_s$, in particular the triplet field $\phi$ [90]. Note that these expressions will be periodic in $h$, in accordance with its approximate shift symmetry. These matrices can be used to compute the Coleman-Weinberg (CW) contributions which remain once the heavy fields (the top partner, the Higgs triplet, etc.) have been integrated out [94]. The supertrace takes into account both the statistics [(+1) for bosons, (−1) for fermions] and the multiplicity of the fields (e.g., the weak boson contributions are multiplied by a factor of three for the three polarizations). The inclusion of these contributions leads to a negative Higgs mass term and consequently to EWSB. The contributions are from the heavy top quark, the heavy $W$ boson, and the integrated-out Higgs triplet, respectively. These two potentials are shown in Fig. 1.
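For definiteness, the standard one-loop Coleman-Weinberg supertrace form (scheme-dependent constants aside; given here only as a reminder and assumed, not quoted, to coincide with the expression of Ref. [94]) is
$$V_{\rm CW}(h) = \frac{1}{64\pi^2}\,{\rm STr}\left[\mathcal{M}^4(h)\left(\ln\frac{\mathcal{M}^2(h)}{\Lambda^2} - \frac{3}{2}\right)\right],$$
with $\mathcal{M}(h)$ running over the $h$-dependent mass matrices $M_W$, $M_t$ and $M_s$, and $\Lambda \sim 4\pi f$ the cutoff.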
Beyond Standard Littlest Higgs
The Higgs has an observed mass of $m_H = 125$ GeV [95]. Hence, for $f \gtrsim 10$ TeV, the tree-level potential in Eq. (15) does not predict the correct Higgs mass. In order to explore a larger phenomenological range of decay constants, while adhering to observations, we make a small change, $f^4 \to \mu^4$ in the overall scale of the potential, with $\mu$ a new parameter chosen such that $\mu/f \ll 1$ for the regime of interest (e.g. in Sec. III A we will show that we require $f \gtrsim 10^9$ GeV, implying $\mu/f \lesssim 10^{-3}$). Such a scale difference between the width and height of a potential, required by the amplitude of density perturbations arising in inflation, is common in axionic theories, where $\mu$ would roughly be the strong coupling scale and $f$ the axion decay constant [96,97]. Hence, for convenience, we refer to $\mu$ as the strong coupling scale. In our case, we posit that such a difference could occur once a full UV completion of the Higgs is considered (i.e. in some composite Higgs model that embeds the little Higgs structure).
In addition to the change in potential, we also employ an EFT analysis to include two dimension-six Chern-Simons terms coupled to the scalar fields. In these terms we take both gauge fields to have the same dimensionless Chern-Simons coupling $\beta$, and we explicitly point out that $*$ denotes complex (not Hermitian) conjugation. The invariance of this expression under gauge transformations is explicitly shown in Appendix B. Upon inserting the expansion of the $\Sigma$ field, the lowest-order term gives the usual Chern-Simons factor, which is a total derivative. The higher-order terms yield couplings between the Chern-Simons current and the scalar sector of the theory.
III. PNG HIGGS INFLATION
Given the particle Lagrangians in Sec. II, we now turn to their inflationary dynamics, i.e. we both solve for the dynamics and quantify the necessary Lagrangian parameters to obtain successful pNG Higgs inflation. By successful pNG Higgs inflation, we mean that at least N ∼ 60 e-folds can be achieved. To do so, we again first consider the minimal inflationary setup displayed in Sec. III A and then its littlest-Higgs variant in Sec. III B.
A. Minimal pNG Higgs Inflation
We begin with the Lagrangian in Eq. (1) and, for simplicity, take a sinusoidal Higgs potential, where $\mu$ is the amplitude of the potential and $\theta = h/f$ is the normalized Higgs field. In this case, the Higgs mass is $m_H = \mu^2/f$. If we treat $f$ as the scale of spontaneous symmetry breaking, then we take $f \ll M_{\rm Pl}$ in order to safely neglect quantum gravity corrections to our Lagrangian. (If instead we take $f/\beta$ as the scale of spontaneous symmetry breaking, then it is possible to have $f \gtrsim M_{\rm Pl}$; in that case, $f \ll M_{\rm Pl}$ is an assumption in the following treatment.) In this limit, the Higgs-covariant derivative $D_\mu$ can be treated as a flat-space derivative $\partial_\mu$, since the Higgs interactions with the weak isospin gauge fields are suppressed by a factor $(f/M_{\rm Pl})^2 \ll 1$.
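As a concrete benchmark, a natural-inflation-style form consistent with this description (amplitude $\mu$, maximum at $\theta = 0$ and minimum at $\theta = \pi$, as used below) is, as an assumption,
$$V(\theta) = \mu^4\left[1 + \cos\theta\right], \qquad \theta = \frac{h}{f},$$
whose curvature at the minimum indeed gives a Higgs mass of order $\mu^2/f$.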
In order to maintain isotropy of the Universe, the inflaton is only a function of time, $h(t, \mathbf{x}) = h(t)$. For the same reason, a classical and rotationally-invariant attractor gauge-field configuration is chosen, as in chromo-natural inflation [74]. A general gauge-field configuration redshifts away its anisotropic parts during inflation, as the Chern-Simons term is only sensitive to the isotropic piece, so that the rotationally-invariant configuration can be dynamically
achieved [98][99][100]. The hypercharge gauge boson is assumed to be identically zero, $B_\mu = 0$. Successful inflation occurs when the Hubble parameter $H$ slowly evolves, $\epsilon = -\dot H/H^2 \ll 1$. Under this slow-roll condition, the Friedmann equations can be written in terms of the dimensionless mass parameter $m_\psi = g\psi/H$ and of slow-roll parameters defined as in Ref. [75]. Since inflation requires $\epsilon < 1$, each term in $\epsilon$ must also be small. In addition, for the slow-roll solution to persist, the acceleration of the fields must also be small, where an overdot indicates a cosmic time derivative and a prime a conformal time derivative. With the above conditions, the slow-roll equations of motion for both the Higgs and the gauge fields follow, with $A = \mu^4/(H^2 f^2)$, where we have traded the gauge field $\psi$ for its slow-roll parameter $\epsilon_\psi$. The left-hand sides of these equations are equivalent to those in Ref. [74], rewritten in terms of $\epsilon_\psi$. The right-hand sides are similar, but different, due to a dimension-six (rather than a dimension-five) Chern-Simons operator in Eq. (19) with dimensionless Chern-Simons coupling $\beta$. We seek static gauge-field solutions, $\epsilon_\psi' = 0$, so that the second of the above equations simplifies; combining it with Eq. (28) yields an equation with a simple solution under the further assumption $g^2\beta^2 m_\psi A\,\theta\sin(\theta) \gg 1$. That is, noting that $H^2 m_\psi^2 = g^2 M_{\rm Pl}^2\,\epsilon_\psi$, we obtain a solution with $\rho = (\beta^2/3)\,(\mu/M_{\rm Pl})^4$ and ${\rm sinc}(\theta) = \sin(\theta)/\theta$. If inflation begins at the top of the potential and ends at the bottom, so that $\theta \in [0, \pi]$, then the corresponding number of e-folds follows. The maximum number of e-folds occurs around $\rho \approx 1$ and, in this case, $N \approx g^2\beta^2$. If instead $\rho \ll 1$, which is the regime of interest, then the number of e-folds is correspondingly smaller. In addition to the above e-fold constraint, there is an additional constraint from the electric dipole moment (EDM) of the electron, $d_e$ [101,102]. More precisely, the dimension-six Chern-Simons operator induces this EDM through the triangle diagram in Fig. 2. Current bounds on this EDM come from spin precession measurements of polar thorium monoxide (ThO) molecules by the Advanced Cold Molecule Electron EDM (ACME) collaboration, which provide the limit $|d_e| < 1.1 \times 10^{-29}\ e\,{\rm cm}$ [103].
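In the $\rho \ll 1$ regime the e-fold count reduces to the scaling quoted in the abstract, $N \approx 60\,(g/0.64)^2\,[\beta/(3\times10^{6})]^{8/3}\,[f/(5\times10^{11}\,{\rm GeV})]^{2/3}$, presumably the content of Eq. (34) referenced below. A minimal numerical sketch of this relation, useful only for orientation in parameter space, is:

# Numerical orientation for the e-fold scaling quoted in the abstract,
#   N ~ 60 (g/0.64)^2 [beta/(3e6)]^(8/3) [f/(5e11 GeV)]^(2/3).
# Parameter values below are purely illustrative.

def efolds(beta, f_gev, g=0.64):
    """Number of e-folds for Chern-Simons coupling beta and decay constant f."""
    return 60.0 * (g / 0.64) ** 2 * (beta / 3e6) ** (8.0 / 3.0) * (f_gev / 5e11) ** (2.0 / 3.0)

def beta_for_n(n, f_gev, g=0.64):
    """Chern-Simons coupling needed for n e-folds at fixed f (inverts the scaling)."""
    return 3e6 * ((n / 60.0) * (0.64 / g) ** 2 * (5e11 / f_gev) ** (2.0 / 3.0)) ** (3.0 / 8.0)

print(efolds(beta=3e6, f_gev=5e11))   # ~60 by construction
print(beta_for_n(60, f_gev=1e13))     # larger f allows a somewhat smaller beta

Inverting the relation in this way shows, for example, that at larger decay constants a somewhat smaller Chern-Simons coupling suffices for $N = 60$.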
We show both of these constraints in Fig. 3. The decay constant shown on the lower horizontal axis must satisfy $f \gtrsim 5 \times 10^{11}$ GeV in order to evade the EDM bound.
B. Littlest Higgs Inflation
There are two main differences between the minimal pNG Higgs and littlest Higgs. First, the Higgs potential is different. Second, there are additional fields.
If the Higgs potential takes the form $V(\theta) = c_0 + c_2\sin^2(2\theta) + c_4\sin^4(\theta)$, as in the littlest Higgs model, we numerically find that the number of e-folds is roughly given by Eq. (34), up to small order-unity corrections. We also point out that $8c\,g'^2 \approx 1$ with $c = 1$ (numerically, $8 \times 0.34^2 \simeq 0.9$), so that the minimal pNG and littlest-Higgs models have the same parameterization of the Higgs mass, $m_H \approx \mu^2/f$.
The main additional fields during the inflating phase (i.e. after global $SU(5)$ symmetry breaking) are the heavy $SU(2)$ bosons. We now address whether the heavy gauge boson can be dynamical. A massive gauge boson with mass $M_H$ changes all of the equations of motion through $m_\psi^2 \to m_\psi^2 + M_H^2/H^2$. For slowly-rolling gauge fields, the fast oscillations induced by the mass term dominate the energy density and suppress the number of e-folds by a factor involving the squared heavy-boson mass. As a result, $\beta$ must be even larger to compensate for this suppression. To avoid such a situation, we therefore conclude that the heavy gauge boson must have zero dynamics, so that littlest-Higgs inflation reduces to the minimal setup in Sec. III A.
IV. DISCUSSION AND CONCLUSION
Why should the Higgs be the inflaton? One line of reasoning is that its presence as the only scalar in the SM yields a minimal explanation for the identity of the inflaton. Another is that the inflaton and Higgs boson both suffer from a hierarchy problem: loop corrections to their respective potentials spoil the viability of their theories. Previous work on Higgs inflation has led to a rich series of models, rooted in modifications of gravity, whose predictiveness is highly dependent on the precise resolution to their violations of perturbative unitarity, which is largely unknown. Here, we thus engaged with an alternative realization for Higgs inflation without modifying gravity, whereby the Higgs has an approximate shift-symmetry as a pNG boson and slow-roll is sustained through friction induced by a suitable weak-isospin Chern-Simons operator. In our scenario, inflation thus happens only within the electroweak sector.
In order to maintain a general framework, we consider two variants of pNG Higgs inflation. One, where we use only the minimal ingredients as dictated by our friction-assisted pNG description and another that embeds this minimal model by explicitly placing the pNG Higgs into a non-linear sigma model, i.e. we consider a little Higgs embedding. Specifically, we choose the simplest of such models, the littlest Higgs. Moreover, in order to consider the widest possible range of inflationary scales, we make a phenomenological replacement and consider the height and width of the Higgs potential to be independent parameters. However, we note that the standard description for the littlest Higgs only works around f ∼TeV scales; larger values of f wreck the cancellations of quadratic divergences and restore the hierarchy problem. Nevertheless, we imagine that our amplitude replacement is reasonable, as similar scalings are seen in other pNG potentials (e.g. axions), and surmise that it can be obtained in some true UV completion of the littlest Higgs which would fully address the hierarchy problem (e.g. some composite Higgs theory). We leave investigation into such UV completions for future work.
We find that, in both cases, successful Higgs inflation can occur at high energies, with decay constants $f \gtrsim 5 \times 10^{11}$ GeV, and with a large dimensionless Chern-Simons coupling, $10^5 \lesssim \beta \lesssim 10^{13}$ (see Fig. 3). It may be possible to achieve this large coupling through a variety of means, such as by including other couplings between the Higgs and the gauge fields, as in models with deconstructed dimensions [104], or including multiple Higgs fields, such as in assisted [105], $N$-flation [106], or clockwork [107][108][109] models. The additional fields (e.g. the massless $\eta$) may serve as one such assisted field when properly considered. In the case of multiple Higgs fields, we would be considering multiple copies of the model at hand. It may also be possible to lessen the large dimensionless coupling with models that have additional sources of friction, such as in warm inflation models [110][111][112][113][114].
Instead of using SM field content, a dark Higgs and/or dark SU (2) gauge-fields can instead be used to achieve inflation. In the case of a dark Higgs, the calculations we have presented remain the same, although the restriction of recovering the SM Higgs mass and TeV electroweak physics is lifted. We note that, in either case, it also is possible to use the little Higgs framework in concert with the original non-minimal coupling of Higgs inflation, rather than a new source of friction.
We have primarily focused here on the background evolution of inflation and largely ignored predictions of the perturbations. Given the similarities between this model and that of chromo-natural models, one immediate concern is that those models are not in agreement with observations of the cosmic microwave background; the original chromo-natural construction fails to produce the correct spectral index for the primordial scalar power spectrum and does not satisfy experimental constraints on the tensor-to-scalar ratio. However, the original authors found that introducing a mass term for the gauge field (by Higgsing) screens the gauge field fluctuations thus rendering chromo-natural inflation observationally viable [75]. In our model, since the Higgs obtains a VEV during inflation, it naturally induces a mass for the gauge fields and is likewise viable, by extension. We will consider a full cosmological perturbation analysis for future work.
Moreover, while the additional fields of littlest Higgs inflation do not change the dynamics of the background evolution, they can have an impact on the perturbations through non-Gaussian signatures [115]. That is, the additional fields in question have masses near the scale of inflation, leading to oscillatory features in cosmological correlators generic to quasi-single field inflation [116]. In addition, interactions between the Higgs and these other fields will yield cosmological collider signatures [117,118].
This work lays the foundation for further studies with this model and its extensions. For example, in one future work we will examine this model through the lens of the analysis done in Ref. [119]; there exists the possibility that this model, like many other models that employ a natural inflation-like mechanism, must inherently take backreaction effects into account, thus making them models of warm inflation. As mentioned previously, this warming of inflation may be an avenue to lower the dimensionless Chern-Simons coupling, making it a logical extension.
A simplifying assumption that underpins this work is that the Higgs rolls into the same EW-breaking minimum throughout the entire universe; a priori this need not be the case. Given that Higgs-dependent masses are 2π-periodic in H, these various vacua would be indistinguishable experimentally, but would have lasting signals such as domain walls or tunnelling phenomena.
Finally, our approach has the virtue of providing a natural baryogenesis [101] and reheating mechanism, since the Higgs inflaton has gauge invariant couplings to leptons and baryons and will decay into baryons, leptons and gauge bosons at the end of inflation. A detailed analysis of such a baryogenesis and reheating mechanism inherent to our model will also be pursued in the near future.
Acknowledgments

Appendix A

In this appendix we briefly review the physics of pions, as a reminder of the structures employed in the construction of little Higgs models. At low energies, the fundamental degrees of freedom of QCD are confined and reorganized into composite degrees of freedom, namely the baryons and mesons.
At low energies (∼ 100 MeV), of particular relevance are the up, the down, and, to a lesser extent, the strange quarks. Compared to their charm, bottom, and top siblings, these quarks are effectively massless (m u ∼ 2 MeV, m d ∼ 4 MeV, m s ∼ 95 MeV). This (relative) degeneracy means that the effective theory has a flavor symmetry, SU (2) V [SU (3) V if the strange quarks are included in the discussion; we'll proceed with just the up/down case for this section].
Suppose that the quarks are taken to be massless. In this case, the symmetry is exact and is furthermore enlarged to a chiral symmetry $SU(2)_L \times SU(2)_R$. This symmetry has $2 \times (2^2 - 1) = 6$ generators. Of course, quarks are not massless; dimensional transmutation in QCD introduces a mass scale ($\Lambda_{\rm QCD}$), so the massless-limit chiral symmetry is spontaneously broken to the diagonal flavor symmetry as above, $SU(2)_V$, which has 3 generators. Thus, by the Goldstone theorem, we would expect there to be 3 massless scalar modes in the spectrum of the theory. Furthermore, these Goldstone modes remain massless to all orders, which limits the terms that may be written down in the Lagrangian to derivative couplings. In this case, these three modes are known as the pions, $\pi^\pm$, $\pi^0$. The dynamics of the pions are encoded as a nonlinear sigma model (NLSM), in a Lagrangian built from the unitary operator $U = \exp\!\left(2i\,\pi\cdot T/f\right)$. Here $T$ is the vector of broken $SU(2)$ generators. Note that here it is very easy to see that under a transformation $\pi \to \pi + f\alpha$ the Lagrangian is manifestly invariant.
Of course, the flavor symmetry is not exact; in addition to the spontaneous symmetry breaking, the symmetry is also broken explicitly by the Yukawa couplings which introduce mass differences between the quarks. The Goldstone theorem may be extended to such a case, wherein the Goldstone modes are no longer massless, but instead slightly massive, with the mass scale proportional to the degree to which the symmetry is explicitly broken. In this case, this means that the mass of the pions is controlled by the mass difference between the up and down quark.
In little Higgs models, this structure is adapted; the overarching flavor symmetry is replaced by a larger symmetry group of some UV-complete theory. In the littlest Higgs this is the SU (5) group. The breaking of this symmetry to some 'intermediate' group, which happens spontaneously, corresponds in the pion framework to the SU (2) V diagonal symmetry and to SO(5) in the littlest Higgs framework. Lastly, the explicit breaking of SU (5) to the SU (2) × [SU (2) × U (1)] subgroup is analogous to the breaking of SU (2) L × SU (2) R by the quark mass differences.
Appendix B: Chern-Simons EFT Couplings
The centerpiece of the littlest Higgs model is the $SU(5)/SO(5)$ coset field, whose dynamics are encoded as a NLSM with a Lagrangian built from $\Sigma$, where $\Pi$ is the 'pion' field matrix made by contracting the pion fields with the broken generators and $f$ is the 'pion' decay constant. This matrix has a definite transformation property under $SU(5)$. It can be shown that the generators of $SU(5)$ can be classed as either broken or unbroken, $X^a$ and $T^a$, respectively, satisfying the relations $T^a\,\Sigma_0 + \Sigma_0\,(T^a)^T = 0$ and $X^a\,\Sigma_0 - \Sigma_0\,(X^a)^T = 0$. The pion matrix, fully written out, takes the same form as in the main text. In addition to the spontaneous breaking induced by $\Sigma_0$, the $SU(5)$ symmetry is also explicitly broken by gauging an $SU(2) \times [SU(2) \times U(1)]$ subgroup. In the following, $Q^a_{1,2}$ and $Y$ are the generators of the $SU(2)$ and $U(1)$ subgroups of $SU(5)$ that are being gauged, and $W^\mu_{i,a}$ and $B^\mu$ are the gauge fields. With four gauge fields, there are four corresponding couplings: two 'weak' couplings $g_1$, $g_2$ and a hypercharge coupling $g'$. These generate the full covariant derivative, so that the gauged NLSM Lagrangian follows, where $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$ is the electromagnetic field-strength tensor and we used the gauge-coupling relation $g = e/\sin(\theta_W)$. Together with the SM Yukawa electron-Higgs interaction, as well as the covariant-derivative electron-photon interaction $i\bar\Psi_e \gamma^\mu D_\mu \Psi_e \supset e A_\mu \bar\Psi_e \gamma^\mu \Psi_e$, the electromagnetic Chern-Simons operator induces an electron EDM through the triangle diagram in Fig. 2. In the above expressions, $L_1 = (\nu_{eL}, e_L)$ is the first-generation lepton doublet, $\Psi_e = (e_L, e_R)$ is the electron Dirac spinor, and $e_i$, $i \in \{L, R\}$, are the corresponding electron chirality states.
First foot prints of chemistry on the shore of the Island of Superheavy Elements
Chemistry has arrived on the shore of the Island of Stability with the first chemical investigation of the superheavy elements Cn, 113, and 114. The results of three experimental series leading to first measured thermodynamic data and qualitatively evaluated chemical properties for these elements are described. An interesting volatile compound class has been observed in the on-line experiments for the elements Bi and Po. Hence, an exciting chemical study of their heavier transactinide homologues, elements 115 and 116 is suggested.
Introduction
Transactinide elements are produced artificially in heavy-ion induced nuclear fusion reactions. This production path allows for production rates on an atom-at-a-minute to atom-at-a-month scale applying currently available accelerator and target techniques (see for review [1]). The production scheme suffers also from vast amounts of by-products from multi-nucleon transfer reactions and from nuclear fusion reactions with contaminants in the target material and other irradiated construction materials. Hence, a chemical identification of transactinides appears to be only possible if the chemical separation of those by-products is sufficient. The produced transactinides are with some exceptions typically short-lived with half-lives in the sub-millisecond-to-minute time scale. Thus, chemical investigation methods have to be very fast and efficient. The chemical analysis of experiments performed on the one-atom-at-a-time scale requires the knowledge or presumption of defined chemical states unchanged during the chemical procedure. Moreover, the single atoms have to be able to change their chemical state within a chemical equilibrium several times to allow a quantification of thermodynamic data from the equilibrium on the basis of probabilities to find the atom in one or in the other chemical state.
Gas-phase chromatography techniques are nowadays the most efficient and fast methods to investigate chemical properties of transactinides (see for review [2]). Within the methods of gas chromatography, thermochromatography is the most efficient one. Thermochromatography probes the adsorption interaction of a volatile species with a defined stationary phase in a temperature gradient. The species are transported by the mobile carrier gas phase. Numerous encounters of the species with the surface lead to a retention time which is determined via the life time of their radioactive decay for short-lived isotopes and which is equal to the experimental duration for long-lived isotopes. This retention leads to an accumulation of decays or an accumulation of long-lived activity at a certain temperature within the gradient. Using models of gas chromatography [2,3] the interaction enthalpy of the species with the surface can be deduced from this deposition temperature.
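As a rough illustration of how a measured deposition temperature encodes the adsorption enthalpy, the following toy Monte-Carlo sketch (not the simulation of Refs. [2,3]; every parameter below is an illustrative placeholder) lets single atoms random-walk down a column with a linear temperature gradient, with the mean residence time per wall encounter following a Frenkel-type law, tau = tau0 * exp(-dH_ads/(R*T)), and records where the radioactive decay occurs:

import math
import random

# Toy Monte-Carlo sketch of thermochromatography (illustrative only, not the
# simulation of Refs. [2,3]).  Single atoms are carried down a column with a
# linear temperature gradient; the mean residence time of one wall encounter
# follows a Frenkel-type law, tau = tau0 * exp(-dH_ads / (R*T)), and the atom
# is "deposited" wherever its radioactive decay happens.

R = 8.314e-3  # gas constant in kJ mol^-1 K^-1

def deposition_temperature(dh_ads_kjmol, half_life_s, n_atoms=500,
                           t_start=300.0, t_end=90.0, length_cm=100.0,
                           hops_per_cm=20, tau0_s=1e-12, hop_time_s=1e-3):
    """Mean temperature at which the atoms decay (i.e. the deposition zone)."""
    grad = (t_end - t_start) / length_cm  # K per cm (negative)
    temps = []
    for _ in range(n_atoms):
        lifetime = random.expovariate(math.log(2) / half_life_s)
        x, t = 0.0, 0.0
        while t < lifetime and x < length_cm:
            temp = t_start + grad * x
            mean_residence = tau0_s * math.exp(-dh_ads_kjmol / (R * temp))
            t += random.expovariate(1.0 / mean_residence)  # one adsorption event
            t += hop_time_s                                # transport to next encounter
            x += 1.0 / hops_per_cm
        temps.append(t_start + grad * min(x, length_cm))
    return sum(temps) / len(temps)

# Stronger adsorption (more negative dH_ads) deposits at higher temperature:
print(deposition_temperature(-52.0, half_life_s=4.0))
print(deposition_temperature(-30.0, half_life_s=4.0))

Even such a sketch reproduces the qualitative behaviour exploited in the real analysis: a more negative adsorption enthalpy (stronger interaction) shifts the deposition zone to higher temperatures, so the deposition temperature together with the known half-life constrains the adsorption enthalpy.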
The chemical identification of the transactinide species is usually performed indirectly via comparison to the chemical behavior of homologues in the corresponding groups of the periodic table under the same experimental conditions. The use of similarly low concentrations is of utmost importance and is realized by applying carrier-free amounts of the homologues. Correlations are established between the observed adsorption behavior and macroscopic properties, such as, e.g., the sublimation enthalpy or boiling point [3], to bolster the speciation. In some cases density-functional theory is able to calculate adsorption properties of transactinides. Not being "ab initio", these calculations are usually "optimized" to describe the adsorption behavior of the homologous species (see for review [4]).
Transactinide elements Rf, Db, Sg, Bh, and Hs have been chemically identified as typical members of their corresponding groups 4-8 of the periodic table (see for review [5]). Here, the investigation focused on the most stable chemical compounds at the highest oxidation states. The trends established by the periodic table suggest an increasing stability of the highest oxidation states in these transition metal groups with increasing atomic number. Otherwise, a decreasing volatility of the compounds in the highest oxidation states is predicted due to a stronger stabilization of the solid state compared to the gaseous state [3]. Indeed, the first gas phase chemical investigations of the light transition metal transactinides in the form of RfCl4, DbCl5/DbOCl3, SgO2Cl2, BhO3Cl, and HsO4 confirmed these trends (see Fig. 1).
Fig. 1 The standard formation enthalpies of transition metal compounds of the rows 5, 6, and 7 of the periodic table in their maximum oxidation state in the solid state (closed symbols) and gaseous state (open symbols) as a function of the sublimation enthalpy of the corresponding element. The lines are used to visualize the trends. Note here the increasing difference between the upper and the lower line of the same color, which represents the sublimation enthalpy of the species, a measure of volatility.
Since the beginning of this century, the reported production of isotopes of even heavier transactinides, the superheavy elements, with half-lives on the order of seconds (see for review [6]) has led to large excitement also among nuclear chemists. For the first time, superheavy elements became accessible to chemistry. In this paper we summarize the efforts made towards their chemical investigation. The s- and p-electron shells are the valence shells in these elements and are responsible for most of the chemical properties. These electron shells are subject to large influences by direct and indirect relativistic effects. Therefore, the spread of data predicted for chemical properties of SHE is much larger as compared to the transactinide transition metals [7][8][9][10].
The chemical investigation of copernicium (element 112)
The chemical investigation of copernicium is of utmost interest to physical chemistry and chemical theory. The valence electrons of this element are situated in the 7s electron orbital. This orbital is subject to strong relativistic contraction. The secondary shielding of the nuclear charge and the spin-orbit splitting lead to a weakening and loosening of the 6d electron orbitals. The question for the chemistry experimentalists was to suggest and to set up a chemical system which is able to distinguish between a noble metallic and an inert-gas-like copernicium. Already in 1980 it was suggested that adsorption on noble metallic surfaces should distinguish between both properties [10].
A thermochromatography detector based on the cryo-online detector technique [11], applied during the experiments with HsO4 [12], was rebuilt and optimized to serve for adsorption studies of SHE on noble metal surfaces [13] (see Figure 2).
Fig. 2
The Cryo-Online-Detector COLD. The array of 32 sandwich-type detectors (upper side gold-covered) is shown mounted inside the channel. The lid contains the gas inlet and outlet and a copper bar welded into the steel housing and pressed onto the top detectors to provide a stable, almost linear temperature gradient of about 5 K/cm. The sandwich-type detectors made at PSI (center and right, and mounted in the channel) and at ITE Warsaw (left) are shown as inserts in the upper part of the figure.
Gold was suggested to be the stationary surface of choice for the investigation, because it is assumed to be stable against surface oxidation. Thus, it should provide a clean metallic surface throughout the month-long experiments. Hence, the detector sandwiches in the new PSI Cryo-Online-Detector (COLD) were covered on one side with a gold layer of about 50 nm thickness. Furthermore, a self-drying gas loop system was developed to be able to run the experiments with a temperature gradient down to -180°C. This gas loop system contains drying cartridges based on Sicapent® and hot tantalum-based getter ovens to remove water and oxygen in the carrier gas down to sub-100 ppb levels, corresponding to dew points below -100°C. The dew point of the carrier gas has been monitored online by dew point sensors.
The operation principle of the entire setup is shown in Figure 3. The nuclear reaction products recoil out of the stationary target with the momentum of the beam and are thermalized in the carrier gas (typically a mixture of 70% He and 30% Ar) within the recoil chamber. The products are flushed into the first rough chemical separation stage consisting of a quartz tube heated up to 850°C with a piece of tantalum metal inside and a quartz wool plug for filtering aerosol particles produced in beam-induced sputtering processes. In this first separation stage, reactive gases and non-volatile elements are retained together with the aerosol particles. Only volatile and inert products enter a 4 m-long PFA® Teflon capillary held at room temperature (or, in some experiments, at 70°C) and connected directly to the inlet of the COLD detector. This transport represents the second separation stage. Here, volatile species are separated according to their adsorption interaction with the Teflon surface. Hence, only elements and species not strongly interacting with Teflon manage to pass on to the COLD. The third separation stage occurs in COLD, where the volatile species are separated according to their adsorption interaction with the gold-covered detector surface. The detectors are connected to a sophisticated energy-resolving alpha and spontaneous-fission fragment spectroscopic system operating in an event-by-event mode to be able to identify time-correlated, genetically linked decay chains from single atoms of transactinides. The isotope of copernicium with the mass number 283 was chosen [6]. With a half-life of 4 s and its 9.5 MeV alpha decay to the short-lived (120 ms), mainly spontaneously fissioning 279 Ds, it seemed to be ideally suited for gas-phase chemical investigations. The production via the immediate alpha decay of the short-lived complete fusion-evaporation product 287 Fl, produced in the 48 Ca induced nuclear fusion reaction with 242 Pu, promised in 2004 a higher production cross section of about 5 pb [14] compared to the direct production in the reaction of 48 Ca with 238 U, where less than 1 pb is expected [15].
Fig. 3 Experimental scheme operated for the adsorption thermochromatography with SHE (adopted from [16]).
In 2006 and 2007, two experimental campaigns were performed using an overall beam dose of 6.2 × 10^18 48 Ca particles on 242 PuO 2 targets (1.5-2 mg/cm²) prepared on 2 µm Ti-foil backings using the painting technique at the U400 cyclotron at FLNR Dubna, Russia. Altogether, five decay chains related to 283 Cn were observed (Fig. 4, chains 1-5) [16,17]. Their deposition pattern along the temperature gradient in the COLD detector is shown in Fig. 5. The observed behavior under three changed experimental conditions clearly shows a reversible mobile adsorption process involved in the thermochromatography of elemental Cn on gold. In the first part of the experiment the temperature gradient started at -24°C on the first detector and went down to -180°C. The gas flow was held at about 900 ml/min (Fig. 5, upper panel). One decay of 283 Cn was observed at -28°C on detector #2. In the second part of the experiment the temperature gradient was increased by heating the first detector to 35°C. Under otherwise equal conditions a second decay of 283 Cn was observed on detector #7 (-5°C) (see Fig. 5, middle panel). In the third part of the experiment, at the temperature conditions of the second part, the gas flow was increased up to 1500 ml/min (see Fig. 5, lower panel). A broader, exponential, diffusion-controlled distribution of 185 Hg and an almost complete non-adsorption of the main part of 219 Rn in the COLD array reflect the higher gas flow. Under these conditions three more decay chains of 283 112 were observed on detectors #11 (-21°C, chain 3), #12 (-39°C, chain 5), and #26 (-124°C, chain 4), indicating also a sensitivity of the Cn adsorption to the gas flow. An adsorption enthalpy of Cn on gold at zero surface coverage was deduced as ΔH_ads^Au(Cn) = -52 (+3/-2) kJ/mol using a Monte-Carlo simulation of gas phase chromatography [2], representing the first ever natural constant determined for Cn. This observation was confirmed in two additional experiments aimed at the chemical investigation of Fl (see section 3). There, one additional decay chain from 283 Cn was observed on detector #5 (-7°C) (see Fig. 4, chain 6), together with two interesting high-energy coincident SF signals on detectors #9 (117/94 MeV, -8°C) and #14 (111/101 MeV, -16°C) in the temperature region where Cn was expected [18]. Recently, in an experiment performed at GSI Darmstadt [19,20], 285 Cn revealed an adsorption behavior, presumably on a gold surface, very similar to these observations. In view of the results with Fl discussed later, it cannot be excluded that the observation of 283 Cn at -126°C (Fig. 4, chain 4) is related to a chemical transport of 287 Fl.
The clear chemical identification of copernicium allowed for application of a correlation established to connect the microscopic atomic property adsorption enthalpy with the macroscopic property sublimation enthalpy [22]. Cn was shown (see Fig. 6) to be a very volatile member of group 12 of the periodic table, with a very weak metallic character [17].
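The correlation of Ref. [22] is, in essence, a linear relation between the standard sublimation enthalpy of an element and the negative of its adsorption enthalpy on gold, fitted to elements that adsorb only weakly (without changing their chemical state). The snippet below only illustrates how such a relation is applied to a measured value; the slope and intercept are deliberately generic placeholders, not the fit parameters of Ref. [22]:

# Illustrative application of an empirical linear correlation
#   dH_subl ~ a * (-dH_ads_Au) + b   [kJ/mol];
# the coefficients a and b below are generic placeholders, NOT the fit of Ref. [22].

def sublimation_from_adsorption(dh_ads_au_kjmol, a=1.0, b=0.0):
    """Estimate an element's sublimation enthalpy from its adsorption enthalpy on gold."""
    return a * (-dh_ads_au_kjmol) + b

# Example with the Cn value deduced above, dH_ads_Au(Cn) = -52 kJ/mol:
print(sublimation_from_adsorption(-52.0))

Fig. 6 plots exactly this kind of relation, with the measured Cn adsorption enthalpy translated into an empirical estimate of its sublimation enthalpy.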
In 2006 the chemical identification of 283 Cn was considered the first independent confirmation of the formation of superheavy elements in the 48 Ca induced nuclear fusion reactions with actinides claimed since 1999 at FLNR, Dubna [23]. Nowadays, these experiments bolster the atomic number of one member of the decay chains from elements Fl and Lv, supporting their discoveries at FLNR.
Fig. 6 Correlation of the microscopic property, the standard adsorption enthalpy, with the macroscopic property related to volatility, the standard sublimation enthalpy, adapted from [22]. Macroscopic properties of Cn were deduced empirically from the interaction of Cn with gold (red dotted line) using correlation techniques described in [3,21,22]. Horizontal black arrows show the results of theoretical predictions for the Cn-Au adsorption interaction and vertical black arrows show the predicted standard sublimation enthalpy of Cn, partly disagreeing with the experimental results.
The chemical investigation of flerovium (element 114)
Flerovium represents the heaviest member of group 14 of the periodic table. Its closest homologue is lead (Pb). From its position in the periodic table, Fl is supposed to have fully occupied 7s and 7p_1/2 electron orbitals as the chemically relevant valence shells. Both shells are subject to an increasingly strong relativistic contraction. Thus, a chemical inertness close to that of the inert gases was postulated for Fl in 1975 [9]. An increasing elemental volatility can be expected from the trends in groups 1, 2, and 12-16 of the periodic table [26]. However, the adsorption bond properties of Fl on gold surfaces have been predicted by empirical correlations [26,27] and density functional theory [32,33,35] to be metallic. The experimental setup as used for the chemical identification of copernicium would be suitable to investigate an inert and volatile Fl. In view of the latest predictions, quite unexpectedly, one decay chain (Fig. 7, decay chain 1) was observed simultaneously with the Cn chains described in section 2, on detector #18 held at -88°C. This decay chain was identical to the signature observed for 287 114 in the discovery experiments of Fl and Lv at the Dubna Gas-filled Recoil Separator [6,14].
To bolster this observation, the experiment was repeated using the same setup as described in section 2 but with 244 Pu targets, thus aiming at the production of the longer-lived 288 Fl and 289 Fl. The initial observation was confirmed by the detection of two further Fl atoms (Fig. 7). The isotope 289 Fl could not be unambiguously identified in these experiments due to the too high background within the long correlation time required for an efficient detection. For this reason the experiment was repeated using the DGFRS as a physical preseparation device [18]. By connecting the otherwise unchanged setup, shown in Fig. 3, to the focal plane exit of the DGFRS via a dedicated recoil chamber, this preseparation guaranteed the highest background reduction. However, a considerable drop in yield was observed. Within a two-month experimental campaign a beam dose of 9.72 × 10^18 48 Ca particles was applied to an about 300 µg/cm² 244 PuO 2 target electrochemically deposited on a 1.75 µm Ti backing. In this experiment another decay chain was observed (Fig. 7, chain 4) on detector #19 held at -93°C. Even though the alpha decay of 289 Fl was missing in this chain (the probability for this is about 20%), its position in the detector array indicates a confirmation of the previous chemical identification of Fl (see Fig. 8, lower panel). The deposition pattern observed for all events tentatively attributed to Fl adsorption on gold is shown in Fig. 8. The consistent description of the adsorption of all three isotopes of Fl by the Monte-Carlo simulation of thermochromatography [2] using only one adsorption enthalpy value is an indication of a very weak adsorption interaction of element 114 with gold [31], not expected by theory [30,32,33,34]. Figure 9 summarizes the expectations from theory and shows the discrepancies with the experimental work. Recently, this empirical approach was criticized [32]. Therefore, it shall be noted here again that this correlation is valid only for elements adsorbed on gold with small net adsorption enthalpies, meaning with no strong chemical interaction with the surface, i.e. under preservation of the original chemical state upon adsorption. This is true for the adsorption of the light elements on gold providing the correlation, and it can be assumed true for Cn and Fl [27]. Interestingly, the sublimation data [33] and the adsorption data on gold predicted for Fl in [32,35] are well consistent with the correlation. Recently, two decay chains attributed to 288 Fl and 289 Fl were related to adsorption, presumably on a gold surface, at room temperature in an experiment performed at GSI Darmstadt [19,20]. The conclusion from all these experiments is clearly that only first steps have been made towards the extremely interesting chemical identification of Fl and that more experimental data are required to resolve the Fl-Au interaction behavior. The experiments also revealed that target stability is one of the main issues to be solved in the future (see section 6).
Fig. 9 Correlation of the microscopic property, the standard adsorption enthalpy, with the macroscopic property related to volatility, the standard sublimation enthalpy, adapted from [22]. Macroscopic properties of Fl were deduced empirically from the experimentally deduced adsorption interaction of Fl with gold (red dotted square) using correlation techniques described in [3,21,22].
Horizontal black arrows show the results of theoretical predictions for the Fl-Au adsorption interaction and vertical black arrows show the predicted standard sublimation enthalpy of Fl.
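The Monte-Carlo description of thermochromatography referred to above can be illustrated with a minimal sketch: an atom is carried along the temperature gradient, spends an exponentially distributed residence time on the gold surface at each wall encounter (Frenkel equation), and is registered at the detector where its radioactive lifetime runs out. Every numerical parameter below (adsorption enthalpy, gas velocity, collision rate, gradient) is an illustrative assumption and not a value taken from refs. [2,31].

```python
# Minimal Monte-Carlo sketch of gas thermochromatography in the spirit of the
# models cited above; all numbers are illustrative assumptions, not data.
import numpy as np

R = 8.314e-3        # gas constant, kJ/(mol*K)
TAU0 = 1e-13        # s, assumed period of surface vibrations
DH_ADS = -34.0      # kJ/mol, assumed net adsorption enthalpy on gold
HALF_LIFE = 0.5     # s, assumed nuclide half-life
V_GAS = 400.0       # cm/s, assumed effective carrier-gas velocity
N_COLL = 1e3        # assumed wall collisions per cm of channel
N_DET, DET_LEN = 32, 1.0                        # detector pairs, cm per detector
T_START, T_END = 273.0 + 35.0, 273.0 - 160.0    # K, assumed temperature gradient

rng = np.random.default_rng(42)

def temperature(x):
    """Linear temperature profile along the channel (K)."""
    return T_START + (T_END - T_START) * x / (N_DET * DET_LEN)

def simulate_one_atom():
    """Detector index where the atom decays, or -1 if it leaves the channel."""
    lifetime = rng.exponential(HALF_LIFE / np.log(2))
    t, x, dx = 0.0, 0.0, 0.05                   # integrate in 0.5 mm steps
    while x < N_DET * DET_LEN:
        tau = TAU0 * np.exp(-DH_ADS / (R * temperature(x)))   # Frenkel equation
        t += dx / V_GAS + N_COLL * dx * tau                   # flight + adsorption
        if t >= lifetime:
            return int(x // DET_LEN)
        x += dx
    return -1

deposits = [simulate_one_atom() for _ in range(10000)]
hist = np.bincount([d for d in deposits if d >= 0], minlength=N_DET)
print("most probable deposition detector:", int(np.argmax(hist)))
```

Fitting the simulated deposition pattern to the measured one by varying only DH_ADS is, in essence, how a single adsorption enthalpy is extracted from the observed events.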
The attempts to chemically investigate element 113
The prospect of chemically investigating element 113 arose from the observation of three isotopes of element 113 with half-lives equal to or longer than 0.5 s [36]. In 2002 the isotope 284-113 (T1/2 = 0.5 s) was observed in the decay chain of 288-115 produced in the nuclear fusion reaction of 48Ca with 243Am. Meanwhile, the maximum cross section of this reaction has been measured as 8.5 pb [37]. In 2010 the discovery of element 117 [38] revealed, in the observed decay chains of 293-117 and 294-117, two further isotopes of element 113, 285-113 and 286-113, with half-lives of 6 s and 21 s, respectively. The predictions for the volatility of elemental E113 show a large spread [10,26,27,33,39]. Again, the consistency of the relativistic calculations of the sublimation enthalpy (150 kJ/mol) [33] and of the adsorption enthalpy on gold (-159 kJ/mol) predicted for E113 in [40] with the heavily discussed correlation obtained in [22] (see e.g. Fig. 9) is very interesting. The experimental techniques presented here for the investigations of Cn and Fl can be used to confirm or to exclude the upper part of the predicted volatility range of element 113. Thus, in 2010 and 2011-2012 four experiment series were performed. Three experiments used the Dubna gas-loop system with isothermal gold-covered detectors [41] and Teflon transport capillaries held at 70 °C connecting the recoil chamber and the detectors. One experiment used the COLD setup as shown in Fig. 3, but also with a heated capillary (70 °C). Two of the experiments with the FLNR setup and the experiment with COLD employed the nuclear fusion reaction of 48Ca with 243Am. Altogether, beam doses of 3.2×10^18 (isothermal detector at 0 °C), 4.7×10^18 (two isothermal detectors in a row held at 20 °C and 0 °C, respectively) and 5.6×10^18 (COLD, no getter oven in the loop, no Ta metal in the filter oven at the recoil chamber, and a temperature gradient from 35 °C down to -110 °C) were applied to initially 800-1500 µg/cm² 243AmO2 targets prepared on 2 µm Ti foils by the painting technique. The fourth experiment was performed directly after the first experiment, using the isothermal detector held at 0 °C; here, about 600-800 µg/cm² 249Bk2O3 targets were prepared by molecular plating onto 2 µm thin Ti foils. Considering the reaction cross sections [37] and all known efficiencies (e.g., target-grid transmission, transport velocity, detection efficiency), these experiments failed to observe roughly 10-20 expected 284-113 events. This is indicative of either an extremely high volatility of E113, similar to that of the noble gases, or, more likely, a lower volatility of elemental E113, not allowing an efficient transport of E113 through Teflon capillaries at 70 °C. This estimate suffers from uncertainties connected to the observed target damage described in section 6. In the experiments with the 249Bk target, at an applied beam dose of 9×10^18 48Ca, one decay chain was observed [41] that is quite similar to the decay chain of 286-113 observed in the decay pattern of 294-117 [38], representing the 3n evaporation channel. Unfortunately, the U-400 cyclotron was limited in 48Ca beam energy, leading to excitation energies between only 27 and 35 MeV in the Bk targets. Therefore, the 4n evaporation channel, which would have an estimated factor of four higher cross section at 39 MeV excitation energy [38], could not be observed.
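For orientation, the kind of sensitivity estimate quoted above is simply a product of cross section, target areal density, beam dose and efficiency factors. The snippet below is only a back-of-the-envelope sketch: the 8.5 pb cross section and the 0.5 s half-life are taken from the text, while the target thickness, transport time and individual efficiencies are assumed placeholder values.

```python
# Back-of-the-envelope estimate of the expected number of detected atoms,
# N = sigma * n_target * beam_dose * eff_total.  Illustrative assumptions only.
import math

sigma = 8.5e-36                 # cm^2, i.e. 8.5 pb (maximum value from [37])
target_mg_cm2 = 1.2             # assumed 243Am areal density, mg/cm^2
n_target = target_mg_cm2 * 1e-3 / 243.0 * 6.022e23   # atoms/cm^2
beam_dose = 5.6e18              # 48Ca ions (one of the quoted doses)

half_life = 0.5                 # s, 284-113
t_transport = 2.0               # s, assumed flush time chamber -> detector
eff_decay = math.exp(-math.log(2) * t_transport / half_life)
eff_other = 0.8 * 0.6 * 0.8     # assumed grid transmission * chem. yield * detection

n_expected = sigma * n_target * beam_dose * eff_decay * eff_other
print(f"expected 284-113 events for this single run: {n_expected:.1f}")
# summing comparable estimates over all runs gives the order of 10-20 events
```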
One tentative explanation for all these observations is that the formation of E113OH is needed for a transport of element 113 to the detection devices through the Teflon capillary at 70 °C. This formation might be kinetically hindered, so that it is observed only for the longer-lived isotopes of element 113 (286-113). Here too, further work is required to bolster this conclusion. One way to facilitate the chemical reaction would be to add more water to the carrier gas of the gas-phase chromatography setups. One should be aware, however, that this approach is limited by the use of Ti foils as vacuum window and target backing. An increase of the 48Ca energy from the U-400 cyclotron is expected to become available by the end of 2012 [41]. This could allow the detection of the more abundantly produced 285-113 in chemistry experiments, albeit with a shorter half-life of 6 s. However, to investigate the elemental state of E113 in the gas phase, the vacuum chromatography approach [42] is most probably more suitable.
The observation of a volatile compound class for Bi and Po as models for elements 115 and Livermorium (Lv, element 116)
In 2011 two experiments with Fl and E113 were performed within two months using the COLD setup. There were only two marginal differences between the experiments: 1. the targets were either 244PuO2 or 243AmO2, respectively; 2. the Ta getters were not used in the loop and in the hot aerosol filter at the outlet of the recoil chamber (see Fig. 3) in the experiment with E113, in order to work at a potentially higher water content (dew point -60 °C) in the carrier gas and thus facilitate the formation of the volatile hydroxide species E113OH. In the experiment with Fl the Ta getters were included in the loop and in the aerosol filter oven.
In figure 10 the sum spectra of both experiments are shown. Surprisingly, they look quite different. A significantly higher and different activity, attributed to an obvious transport of Po and Bi isotopes, is observed in the 244Pu experiment, even though almost the same beam doses were applied: 5.7×10^18 on 243Am and 4.6×10^18 on 244Pu. One production path of these isotopes in their ground states is the decay of heavier isotopes in the Ra-actinide region that are produced in multi-nucleon transfer reactions of 48Ca with the target material [43]. On the other hand, contamination of the target with macroscopic amounts (µg range) of Pb and Bi is easily possible during the target production procedure. Therefore, a direct formation of 212m2Bi/212m2Po, 212mBi/212mPo, 212gBi/212gPo and 213Bi/213Po in nucleon transfer reactions is very likely the main production path for all of these isotopes [44] in both experiments. The only significant difference, aside from the target material and its contamination, is the use of Ta getters in the 244Pu experiment. This getter was switched on and off several times (see Fig. 11, both panels, blue line) to check the dependence of the 213Bi/213Po and 212m2Bi/212m2Po transport on it. Both activities disappear when the getter is switched off. Note the roughly four orders of magnitude on the 212m2Po scale (Fig. 11, left panel, red line) and the two orders of magnitude on the 213Bi scale (Fig. 11, right panel, green line). Another correlation is related to the dew point measured on-line in the carrier gas (Fig. 11, both panels, black line). If the dew point is below -95 °C, no efficient transport is observed for either 212m2Po or 213Bi; otherwise the transport efficiency is clearly dependent on the dew point. Thus we conclude that the largest transport efficiencies for both elements are observed at high water contents in the carrier gas and with the getter oven switched on. A high water content together with a hot Ta getter in the system leads to a considerable amount of hydrogen in the carrier gas (max. ~100 ppm). From these observations we conclude that the observed transport of 212m2Po and 213Bi is related to the hydrogen content. Already in 1918 Paneth observed the formation of BiH3 with atomic hydrogen in statu nascendi [45]. The beam-induced atomization of hydrogen may yield a considerable amount of atomic H in the recoil chamber, which reacts with the Bi and Po ions recoiling from the target. That trace amounts of atomic hydrogen yield an efficient transport of Bi and Po points towards an efficient chemical reaction. The question of the reaction velocity remains open for further studies with more short-lived isotopes of Bi and Po. In Fig. 12 the deposition patterns measured for 212m2PoH2 and 213BiH3 are shown. The dependence of the deposition temperature on the half-life, indicated by the different adsorption positions of 212m2Bi (T1/2 = 9 min; given by the peak on detector 3 in the 212m2Po distribution) and 213Bi (T1/2 = 45.6 min; peaking on detector 9), is evidence for a mobile adsorption process of BiH3 on gold. Using Monte-Carlo simulation procedures of thermochromatography based on [46], a limit for the adsorption enthalpy of PoH2 on gold was estimated as -dH_ads^Au(PoH2) > 70 kJ/mol. For BiH3 an adsorption enthalpy could be evaluated as -dH_ads^Au(BiH3) = 65±3 kJ/mol. The production of Pb isotopes as precursors of a good part of the listed isotopes is likely as well.
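Since the argument above hinges on how much water a given dew point actually corresponds to, a rough conversion from the measured dew (frost) point to a volume mixing ratio may help. The sketch below uses a Magnus-type saturation vapour pressure over ice with textbook coefficients and assumes a carrier-gas pressure of about 1 atm, so the numbers are indicative only.

```python
# Rough conversion of a measured dew/frost point into a water-vapour mixing
# ratio, using a Magnus-type saturation pressure over ice (textbook
# coefficients; 1 atm total pressure assumed).
import math

def water_ppmv(frost_point_c, total_pressure_hpa=1013.25):
    """Approximate H2O volume mixing ratio (ppmv) for a given frost point."""
    e_sat = 6.112 * math.exp(22.46 * frost_point_c / (272.62 + frost_point_c))  # hPa, over ice
    return 1e6 * e_sat / total_pressure_hpa

for t in (-60, -80, -95):
    print(f"frost point {t:4d} degC  ->  ~{water_ppmv(t):.3g} ppmv H2O")
```

With these assumptions, a frost point of -60 °C corresponds to roughly 10 ppmv of water, while -95 °C corresponds to only a few tens of ppbv, which makes the observed transport threshold plausible.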
Therefore, the half-lives of the deposited species were roughly determined after switching off the beam, in order to clearly identify the deposited species. The fast decay of 212Bi (T1/2 = 60.6 min) on detectors #6-12, where 212BiH3 adsorption was observed, excluded a transport via a corresponding 212Pb (T1/2 = 10.64 h) compound. The fast decay of the main part of 212m2Po (T1/2 = 45 s) on the first three detectors within several minutes excludes its main transport as 212m2Bi (T1/2 = 9 min), whereas the part of 212m2Po measured on detectors #3-6 decayed with a half-life of about 10 min, pointing towards a 212m2Bi deposition there. 213Po (T1/2 = 4.2 s) is too short-lived to be transported efficiently. The half-life of the deposition attributed to 213Bi (see Fig. 12), measured via the alpha decay of its 213Po daughter, was ~45 min, consistent with the half-life of 213Bi. The observation that the transport of the hydride species was possible through the quartz wool filter unit held at 850 °C and containing tantalum is very interesting from the stability point of view. Removal of the Ta from this filter unit had no influence on the transport efficiency of the hydride species to the COLD detector. These observations point towards an unexpectedly high thermal stability of the volatile species formed. However, a simple extrapolation scheme along the groups of the periodic table predicts that the hydrides of element 115 and Lv should be less stable than those of their lighter homologues. Apart from the very interesting density-functional electronic structure calculations for group 16 dihydrides including LvH2 [47], further theoretical calculations are needed regarding the thermodynamic stabilities of PoH2 and LvH2 and regarding the electronic structures and thermodynamic stabilities of BiH3 and 115H3.
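The rough half-life determination described above amounts to fitting an exponential to the post-beam count rate of a detector and comparing the result with the candidate nuclides. A hedged sketch of such a fit is given below; the count data are synthetic, generated only to make the example runnable, and are not measured values.

```python
# Hedged sketch of the half-life check: fit a single exponential plus a flat
# background to post-beam count rates and compare with candidate nuclides.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
t_min = np.arange(0.0, 240.0, 15.0)                                 # minutes after beam-off
counts = rng.poisson(200 * np.exp(-np.log(2) * t_min / 60.6) + 2)   # synthetic, 212Bi-like

def decay(t, a, half_life, bkg):
    return a * np.exp(-np.log(2) * t / half_life) + bkg

popt, _ = curve_fit(decay, t_min, counts, p0=(150.0, 40.0, 1.0))
print(f"fitted half-life: {popt[1]:.1f} min "
      f"(212Bi: 60.6 min, 212Pb: about 638 min)")
```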
Otherwise, the reasonable elemental volatilities predicted for the SHEs 115 and Lv and for their homologues [26,27,33,40] should also allow for their investigation using a high-temperature vacuum chromatography approach [42] in the future.
Improvements of experimental techniques
Nowadays, the main limitation for the production of SHE is target stability. Intense heavy-ion beams are delivered by large accelerators. Stationary or rotating targets are irradiated, typically consisting of stable oxide compounds or metals deposited onto thin foils of titanium or carbon. Vapour deposition of metals is possible only if enough target material is available, e.g. for uranium or lead. Only amounts from single milligrams to several tens of milligrams are available for the rare and expensive heavy actinides; here, molecular plating of the target material onto the metallic target backing is applied [48], whereby an oxide layer is deposited. In some cases a painting and baking procedure is applied to obtain oxidic targets on titanium backings. During intense irradiations of up to 1-4 pµA, equivalent to 10^13-10^14 ions/(cm² s), the thermal, mechanical, and chemical stability of these targets is crucial. Unfortunately, oxide targets have some disadvantages due to their thermally and electrically insulating properties and due to possible high-temperature reactions of the oxide material with the target backing. In figure 13 a photomicrograph of a stationary 244PuO2 target (0.6 mg/cm²) electroplated onto a 2 µm thin foil of titanium is shown after irradiation with an integral beam dose of 3×10^18 at intensities of 0.5 pµA. The chemical and thermomechanical degradation of the titanium backing is clearly visible. The non-irradiated part behind the honeycomb-shaped cooling grid shows the condition of the material before irradiation.
Fig. 13
Microphotograph of a part of a severely damaged 244PuO2 target after intense irradiation (illuminated from the back side) [51]. A: macroscopic hole in the target material layer and in the backing; B: the target material is still intact, but the target backing material is burned, presumably due to a reaction with the target material; C: the target as it looked before irradiation; this part was shielded by a honeycomb-shaped copper cooling grid during the irradiation.
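The severity of the damage shown in Fig. 13 becomes plausible from a rough estimate of the beam power dissipated in such a thin target assembly. The sketch below multiplies the particle rate by an assumed energy loss per 48Ca ion of a few MeV; the stopping value is an illustrative assumption, not a calculated stopping power.

```python
# Rough estimate of the beam power dissipated in a thin target assembly,
# P = (particle rate) x (energy loss per ion).  Intensities in particle-uA
# are from the text; the ~5 MeV loss per ion is an assumed, illustrative value.
E_LOSS_MEV = 5.0              # assumed energy loss per ion in the target stack
MEV_TO_J = 1.602e-13

def beam_power_watt(intensity_pua):
    ions_per_s = intensity_pua * 6.24e12      # 1 particle-microampere
    return ions_per_s * E_LOSS_MEV * MEV_TO_J

for i in (0.5, 1.0, 4.0):
    print(f"{i:.1f} p-uA  ->  ~{beam_power_watt(i):.1f} W deposited in the target")
```

Even a few watts concentrated on a spot of the order of a square centimetre of a micrometre-thin, thermally insulating oxide layer is enough to drive the degradation visible in Fig. 13.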
For further investigations of superheavy elements using the COLD thermochromatography technique, the stability of the heavy-ion irradiation targets has to be improved. Recently we proposed new stationary target materials based on intermetallic compounds of actinides with high-melting noble metals [49]. The actinide oxide target material is electroplated onto thin noble metal foils, such as Pd or Rh. Subsequently, this target is heated in a pure hydrogen atmosphere to temperatures of about 1000 °C. In a coupled reduction process [50] a thin intermetallic target of homogeneous areal thickness is formed (see Fig. 14). The intermetallic state has the advantages of heat and electrical conductance and a high chemical stability. A heavy-ion irradiation of such a 243Am/Pd intermetallic target with intense beams of 48Ca has been performed at FLNR Dubna. First results revealed a considerably higher stability of this target compared to the conventional oxide targets on titanium backings. A detailed analysis is ongoing and will be published elsewhere [51].
Conclusion
Chemistry has arrived on the Island of Stability of the Superheavy Elements. Interesting tasks remain for the future, regarding the confirmation of the surprisingly high volatility of Fl and regarding further chemical properties of Cn and Fl. The investigation of element 113 needs either an increased water vapor pressure in the carrier gas, to facilitate the formation of 113OH, which is expected to be more volatile than the elemental state, or the application of the vacuum chromatography approach to access the elemental state directly. A new class of stationary targets has been developed for a stable production scheme of SHE, mandatory for their further chemical investigation. The observation of the formation of volatile hydride species of the group 15 and group 16 elements homologous to element 115 and Lv, presumably MH3 and MH2, respectively, opens up an interesting chemical system for their gas-phase chemical investigation. The group 14 element lead was shown not to form a volatile hydride under the conditions at which the group 15 and 16 hydrides were observed. More model studies are required to assess the most efficient way to produce these species, though.
An assessment of medical students’ knowledge of prediabetes and diabetes prevention
Background The United States has 84 million adults with prediabetes, putting them at a higher risk than the general population for developing type 2 diabetes. Missed opportunities among primary care providers in diagnosing and managing patients with prediabetes represent a gap in care, suggesting there is a need to educate practicing physicians and medical students about diabetes prevention. The purpose of this study is to assess medical students’ basic knowledge of prediabetes and diabetes prevention, identify potential educational needs, and target areas for improvement in undergraduate medical education curricula. Methods A cross-sectional study to assess medical students’ preclinical and clinical management knowledge of prediabetes and diabetes prevention. Medical students attending the 2016 American Medical Association’s annual meeting took a 6-item knowledge questionnaire using a mobile application or a paper version. Scores were reported for the full sample of respondents, by year in medical school, by topic area, and by mode of survey response. Results The average student answered fewer than half of the questionnaire questions correctly. Scores on some items addressing preclinical content were higher among third- and fourth-year students compared to first- and second-year students (p = 0.039 and effect size = 0.363). Average scores on the items addressing clinical management were not significantly different by year in medical school, but the item comparing the effectiveness of metformin with a lifestyle change program had 41.9% correct answers among the mobile application respondents compared to 21.5% among paper test respondents (p = 0.003 and effect size = 0.463). Conclusions Medical student performance on the prediabetes knowledge questionnaire was low. Students’ year in medical school had a slight impact on overall performance, but only for certain questions. The results suggest the need for improvements in current medical school curricula for increasing the awareness of screening for prediabetes as well as the benefits of the lifestyle change programs in the National Diabetes Prevention Program. Electronic supplementary material The online version of this article (10.1186/s12909-019-1721-9) contains supplementary material, which is available to authorized users.
Background
Nearly 1 in 3 adults in the United States (US) has prediabetes, a condition characterized by blood glucose levels that are elevated (hemoglobin A1C (HbA1c) test results between 5.7 and 6.4%) but not high enough to be classified as diabetes [1]. Risk factors for prediabetes include family history of type 2 diabetes, history of gestational diabetes, elevated body mass index (BMI), and sedentary lifestyle. Individuals with prediabetes are at higher risk than the general population for developing type 2 diabetes, heart disease, stroke, and other serious health conditions [1]. People with prediabetes can reduce their risk of developing type 2 diabetes by participating in structured lifestyle change programs (LCPs). Many LCPs currently available are modeled after the original Diabetes Prevention Program research study, which focused on promoting a healthy diet, weight loss and increased physical activity, and are recognized by the Centers for Disease Control and Prevention's (CDC) National Diabetes Prevention Program (National DPP) [2,3]. Research showed that participation in an LCP reduced the incidence of type 2 diabetes by 58% relative to placebo at an average follow-up time of 3 years. In comparison, those treated with metformin had a 31% reduced 3-year incidence of type 2 diabetes compared to the placebo group [3]. Primary care physicians play a key role in screening, testing, and referring patients with prediabetes to LCPs.
The evidence base for preventing diabetes via intensive lifestyle change is substantial [4][5][6][7][8][9]. The United States Preventive Services Task Force (USPSTF) incorporated this evidence into the updated recommendation regarding screening for abnormal glucose and type 2 diabetes [10]. The grade B recommendation states that physicians should screen individuals for abnormal glucose if they are between the ages of 40 and 70 and are overweight or obese. USPSTF also recommends that all individuals with abnormal glucose should be referred for intensive behavioral counseling to promote lifestyle change [11]. Despite the evidence supporting intensive LCPs and clinical guidelines encouraging physicians to refer patients to these programs, they are still vastly underutilized and providers' awareness and patient referral rates to LCPs are still low [12][13][14].
Attitudes towards prediabetes, coupled with missed opportunities among primary care providers in diagnosing and managing prediabetes, represent a practice gap [15]. A recent survey determined that primary care physicians have significant knowledge gaps regarding prediabetes screening, diagnosis, and management, with less than 20% of physicians correctly answering questions in those domains [16]. These practice and knowledge gaps are an area where the American Medical Association (AMA) is focusing efforts towards educating physicians to screen and refer individuals with prediabetes to CDC-recognized LCPs [17]. Given that the practice and knowledge gaps observed by both Mainous et al. and Tseng et al. were so substantial, this begs the question of whether physicians in training are receiving adequate prediabetes-related education [15,16].
Limited information is available on whether prediabetes management or diabetes prevention is being taught in undergraduate medical education (UME) curricula. Physicians in training lack confidence in the management of diabetes, report a need for further training [18], and often benefit from additional educational interventions and resources [19]. Furthermore, there is no information publicly available from the National Board of Medical Examiners that describes whether medical students are being assessed on prediabetes or prevention of type 2 diabetes, specifically in the United States Medical Licensing Examination. This study attempts to measure medical students' knowledge of prediabetes and diabetes prevention to identify the potential educational needs and areas for improvement in chronic disease prevention curricula in UME.
Methods
A 6-item multiple choice questionnaire (Additional file 1) was developed to assess medical students' basic knowledge of prediabetes and type 2 diabetes prevention. The items were designed to reflect common domains typically used to teach disease-specific knowledge to medical students: epidemiology, diagnosis, and management, which included treatment options and clinical guidelines. These survey topic areas align to the key knowledge domains for prediabetes.
We relied on 3 subject-matter experts, including 2 primary care physicians, to assess the questionnaire. Their review indicated that the knowledge domains were addressed by the items with an appropriate level of difficulty for medical students.
The questionnaire was administered to a convenience sample of medical students attending the AMA's House of Delegates (HOD) meeting on June 11-15, 2016 in Chicago, Illinois during the Medical Student Section (MSS) sessions. To minimize the burden of response the only demographic characteristic collected from the respondents was year in medical school. Respondents entered data into the AMA meeting mobile application (Crowd Compass, Inc.) or a paper form. Participants were provided with an AMA gift bag as an incentive for completing the questionnaire. The University of Illinois Office for the Protection of Research Subjects Institutional Review Board approved the research protocol.
The percentage of participants correctly answering the individual items and average total scores are reported for the full sample and by the two groups of students: first- and second-year students, and third- and fourth-year students. The questions were further broken into two categories defined according to the timing in which medical students might typically be taught each item: in the preclinical years or in the clinical years. Preclinical content was focused on epidemiology and diagnosis (proportion of adults with prediabetes, risk factors for prediabetes, and HbA1c levels), while clinical content was focused on the management of prediabetes (USPSTF screening recommendations, abilities of the LCP and metformin to reduce the incidence of diabetes, and recommendations for clinically significant weight loss ranges). To explore whether lower performance on specific questions might be explained by the timing of exposure to specific content in the curriculum, we assessed differences in performance on preclinical and clinical questions by year in medical school. Differences in scores between years of medical education and instrument modality (mobile application versus paper format) were examined using analysis of variance (ANOVA). Statistical significance is presented at both p < 0.05 and p < 0.10 since significance levels are a decreasing function of sample size [20,21]. Effect sizes were also calculated, with an effect size greater than 0.33 used to distinguish differences of practical significance between the means [22]. All analyses were performed in STATA 13 (College Station, TX).
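For readers who want to reproduce this type of comparison, the following sketch shows a one-way ANOVA between the two class-year groups together with a pooled-standard-deviation effect size (Cohen's d). The group sizes match those reported in the Results (156 and 41), but the score vectors are simulated placeholders, not the study data, and the analysis here is a minimal stand-in for the STATA procedures actually used.

```python
# Hedged illustration: one-way ANOVA on per-student total scores for the two
# class-year groups plus a pooled-SD effect size; scores are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
scores_y12 = rng.binomial(6, 0.45, size=156) / 6.0   # hypothetical year 1-2 scores
scores_y34 = rng.binomial(6, 0.50, size=41) / 6.0    # hypothetical year 3-4 scores

f_stat, p_val = stats.f_oneway(scores_y12, scores_y34)

n1, n2 = len(scores_y12), len(scores_y34)
pooled_sd = np.sqrt(((n1 - 1) * scores_y12.var(ddof=1) +
                     (n2 - 1) * scores_y34.var(ddof=1)) / (n1 + n2 - 2))
cohens_d = (scores_y34.mean() - scores_y12.mean()) / pooled_sd

print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.3f}; effect size d = {cohens_d:.2f}")
```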
Results
There were 600 medical students who attended the meeting; 258 respondents completed the questionnaire; 61 respondents were residents, fellows, or attending physicians who wished to test their knowledge; the remaining 197 current medical students were used in the analysis. The medical students who attended the AMA HOD meeting represented a total of 138 schools. Among the 197 current medical students, over three-quarters (n = 156) were first- and second-year students, and the remaining sample (n = 41) were third- and fourth-year students. The question items and response frequencies are shown in Table 1. Among all respondents, almost 60% correctly answered the question about the optimal weight loss range for preventing or delaying the onset of type 2 diabetes. On the other hand, only 13% responded correctly to the question about the USPSTF recommendations for screening. Roughly half the respondents answered questions about prediabetes prevalence and risk factors correctly, and slightly more than a quarter of the respondents correctly answered questions about prediabetes diagnosis and interventions to prevent diabetes.
The percentages of correct responses by item and year in medical school are shown in Table 2. When separated by year in medical school, almost 40% of the third- and fourth-year students correctly responded to the question about HbA1c levels, while fewer than one-quarter of the first- and second-year students answered correctly (p = 0.039 and effect size = 0.363). Table 3 shows that the overall mean scores for the preclinical items were higher than the scores for the clinical management questions. Although some preclinical items demonstrated a trend towards better performance for third- and fourth-year students, their scores were not statistically different from those of first- and second-year students. Mode-of-response scores are shown in Table 4, where almost 60% of the students who took the test on paper responded correctly to the preclinical question regarding prediabetes risk factors, compared to 40% of electronic mobile respondents (p = 0.013 and effect size = 0.385). Interestingly, slightly over 40% of electronic respondents correctly answered the clinical item comparing the effectiveness of metformin versus the LCP, relative to a little over 20% of the paper respondents (p = 0.003 and effect size = 0.463). Clinical management scores were lower for those who took the test on paper (p = 0.048) compared to those who took the test using the mobile application; however, the effect size was small at 0.078, as shown in Table 5.
Discussion
The results suggest that medical students' overall knowledge of the preclinical and clinical management of prediabetes and diabetes prevention was poor. The average student respondent failed to answer more than one-half of the questions correctly. The questions with the lowest scores, where less than one-third of the respondents answered correctly, were related to knowledge of the USPSTF recommendations, HbA1c levels for prediabetes diagnosis, and the effectiveness of metformin and the LCP in reducing the incidence of type 2 diabetes. The year in medical school had a relatively small impact on overall performance, affecting only the questions regarding HbA1c levels and prediabetes risk factors for first- and second-year students versus those in their third and fourth years.
Prediabetes knowledge and practice gaps that exist among practicing physicians might be partially explained by inadequate training received by medical students. Inclusion of prediabetes content in UME curricula is the first step towards addressing these gaps. Prediabetes education in medical school should include content regarding clinical guidelines for screening and diagnosing prediabetes as well as evidence-based recommendations for managing prediabetes. This content should be addressed in preclinical education, then reinforced and practiced in clinical experiences. While there is an opportunity to improve students' knowledge in all domains assessed in this study, the largest knowledge gaps were in the clinical management of patients with prediabetes. Given the large volume of patients with prediabetes that the average medical student might be expected to encounter during their UME experience (roughly one-third of adults), there should be many opportunities to expose students to evidence-based management of patients with prediabetes, particularly during their primary care clerkships. However, the knowledge and practice gaps observed in practicing physicians highlight the need for associated faculty development on prediabetes management, so that clinical preceptors reinforce these concepts and students can observe and participate in high-quality preventive care during their clinical experiences. Faculty development coupled with updated prediabetes-related UME curricula are important steps that medical schools could take to help address the growing type 2 diabetes epidemic.
A limitation of the study is that the medical students who completed the questionnaire were a convenience sample of students attending the 2016 AMA HOD meeting who volunteered to participate. These students are AMA members and may not be a representative sample of US medical students. However, we are not aware of any evidence suggesting that the convenience sample of students who are members of the AMA would be more or less knowledgeable of the preclinical and clinical management of prediabetes and diabetes prevention than medical students who are not AMA members. Next, the questionnaire response rate was 33% (197/596) when accounting for all medical students in attendance at the meeting. However, the total number of medical students attending the MSS sessions at the times the questionnaires were administered was not measured, suggesting the actual response rate may be higher than reported. There also may be a positive response bias and overestimation of average knowledge levels if students who believed they knew more about the topic were more likely to complete the questionnaire. The results suggest there is little if any upward bias in knowledge due to these factors. Additionally, the questionnaire was not validated prior to administration to these students. The results from this phase of research can aid the development of a reliable and valid prediabetes knowledge test, similar to those developed and validated for diabetes [23].
Conclusions
This study, using a questionnaire administered at the AMA's 2016 HOD meeting, highlighted the low medical student performance on prediabetes knowledge. The average student answered fewer than half of the questionnaire questions correctly. Overall performance varied slightly by the students' year in medical school, but only for certain questions between first- and second-year versus third- and fourth-year students. The results suggest a need for a review of current undergraduate medical school curricula, and for potential improvements to increase the awareness of screening for prediabetes as well as the benefits of the LCPs that are part of the National DPP.
The Microbiota/Host Immune System Interaction in the Nose to Protect from COVID-19
Coronavirus disease 2019 (COVID-19) is caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and is characterized by variable clinical presentation that ranges from asymptomatic to fatal multi-organ damage. The site of entry and the response of the host to the infection affect the outcomes. The role of the upper airways and the nasal barrier in the prevention of infection is increasingly being recognized. Besides the epithelial lining and the local immune system, the upper airways harbor a community of microorganisms, or microbiota, that takes an active part in mucosal homeostasis and in resistance to infection. However, the role of the upper airway microbiota in COVID-19 is not yet completely understood and likely goes beyond protection from viral entry to include the regulation of the immune response to the infection. Herein, we discuss the hypothesis that restoring endogenous barriers and anti-inflammatory pathways that are defective in COVID-19 patients might represent a valid strategy to reduce infectivity and ameliorate clinical symptomatology.
Introduction
Coronavirus disease 2019 (COVID-19), similarly to the flu virus, is transmitted through microdroplets produced by sneezing, coughing, or simply speaking. The virus enters the host through the upper airways, and the nasal barrier is the first defensive line against infection [1]. Once the virus has entered target cells in the respiratory tract, the replication, maturation, and release of the virus occur while the host cell undergoes pyroptosis with the release of inflammatory molecules [2]. These initial events initiate an immune response that may lead to the resolution of the infection or, if dysfunctional, trigger a cytokine storm that exacerbates lung inflammation and increases the susceptibility to secondary bacterial or fungal infections [2]. In addition, the cytokine storm may cause potentially fatal multi-organ damage [2].
Host characteristics can influence the infectivity, severity, and outcomes of SARS-CoV-2 infection [3], and the local and systemic immune activities play a key role in the response to the viral aggression. Age ([4] and references therein) and gender [5,6] both impact local and systemic immunity, but while the modulation of systemic immune responses can be quite easy and rapid to obtain, improving the local upper respiratory tract (URT) immune competence is slightly more complex. In fact, age and gender determine substantial variation in the URT. Aging reduces the immune response capacity in the nasal mucosa and decreases muco-nasal clearance, a fundamental process to prevent virus infection. Furthermore, Di Stadio et al. [5] have shown that estrogen stimulation (hormone/gender effect) on the
The Nose Ecosystem at Work: The Microbiota
The nose likely represents the major site of entry and infection for SARS-CoV-2. Indeed, most of the inhaled air enters the body through the nose, and the nasal epithelium shows the highest expression of the ACE2 coronavirus receptor [23,24]. The nose might also regulate the subsequent immune response and disease severity. For instance, stimulation of the nasal innate immune response by low doses of mouse hepatitis virus type 1 was able to prime lung immunity for protection against subsequent lethal SARS-CoV or influenza A virus infection [25]. The barrier function of the nasal mucosa as well as the regulation of the local and distal immune response are likely modulated by the microbiota, i.e., the communities of microorganisms that colonize all of the surfaces of the human body exposed to the external environment. The microbiota may act either locally or distally to modulate host physiological and pathological processes. For instance, in the so-called gut-lung axis, the microbiota of the gastrointestinal tract can cross-talk with the lung microbiota, and both can influence the immune system to gauge the susceptibility of the host to respiratory infections [25]. Similar to other mucosal sites of the body, the nasal cavity also harbors a community of commensal microorganisms that likely represents an important player in mucosal homeostasis and protection against infection.
Several studies have analyzed the composition of the nasal microbiome in healthy adult subjects. The major phyla are Actinobacteria and Firmicutes, followed by Proteobacteria and Bacteroidetes [11,26-31]. At the genus level, the nares are mainly colonized by lipophilic bacteria, such as Corynebacterium, Propionibacterium (Cutibacterium), and Staphylococcus, followed by other genera such as Moraxella, Streptococcus, and Dolosigranulum [28-34]. It is believed that commensal bacteria in the nasal cavity protect from opportunistic pathogens by competing for space and nutrients, and also by producing toxic molecules [35]. In addition, it has been recently shown that Staphylococcus epidermidis, which increases during nasal microbiome maturation in humans, stimulates the production of antimicrobial peptides by the nasal epithelium, which efficiently reduce pathogen colonization [36]. Moreover, S. epidermidis can also promote interferon λ-dependent innate immunity in normal nasal epithelial cells to protect against the influenza virus [12], although interferon λ can in turn change the composition of the nasal microbiome and favor Staphylococcus aureus superinfection [37]. Although the interactions between the host, pathogens, and commensal bacteria are complex and still hard to decipher, it is becoming increasingly clear that dysbiosis in the nasal cavity, i.e., changes in the composition of the microbial communities, modulates the susceptibility of the host to pathological conditions, including, among others, acute respiratory tract infections [35]. Different microorganisms causing respiratory tract infections have been associated with changes in the nasal microbiome and include nonviral and viral pathogens. The former include not only bacteria, such as Streptococcus pneumoniae, for which it has been shown that IL-17 modulates the composition of the nasal microbiome to influence pneumococcal colonization [38], but also fungi. Indeed, we have recently characterized the microbiome of the URT of a large population of hematological patients and identified microbial signatures associated with invasive fungal infections (manuscript submitted), a major threat in this category of patients [39]. The associations between the nasal microbiome and viral infections have received more attention, although a causal relationship could not always be established. A recent study evaluated the nasal and throat microbiomes in patients with influenza A and B infection and their household contacts, and identified microbial signatures that could predict the risk of infection [40]. Similarly, it was shown that the nasal and throat microbiomes influenced the susceptibility to influenza infection [41] and were associated with influenza symptoms and duration of shedding in a household transmission study [42]. Importantly, the nasal microbiome can not only influence the susceptibility to infection, but may also affect the efficacy of a live attenuated influenza vaccine [43], thus modulating its therapeutic efficacy. Similar associations with the nasal microbiome have also been demonstrated for respiratory syncytial virus bronchiolitis, with changes in the bacterial composition, alone [44] or in combination with the host immune response [45], being associated with the severity of symptoms, and the presence of specific taxa with the risk of recurrent wheezing after severe bronchiolitis [46].
Therefore, there may be shifts in the composition of the nasal microbiota that result in pro- or anti-inflammatory patterns, with effects on the susceptibility to and course of infection. This paradigm has been elegantly demonstrated for the lung microbiome, in which acellular bronchoalveolar lavage samples were distinguished into two patterns or pneumotypes, supraglottic predominant taxa (SPT) and background predominant taxa (BPT), based on the enrichment with taxa from the upper respiratory tract or the environment, respectively [47]. Notably, the pneumotype SPT was associated with increased lung inflammation and Th17 activation, and a blunted TLR4 response [47]. The presence of microbial patterns associated with distinct inflammatory responses has also been observed in the nasal microbiota. Healthy young adults were evaluated for their nasal microbiota before and at different times after challenge with rhinovirus type 39 [48]. Upon grouping of the nasal microbiota into six clusters based on the predominant genera, it was demonstrated that the baseline nasal microbiota was associated with distinct nasal inflammatory responses, viral load, and symptom severity upon infection [48].
While we are just beginning to understand the role of the nasal microbiota in bacterial, fungal, and viral infections, its involvement in SARS-CoV-2 infection has remained largely unexplored and may reveal peculiar properties that reflect the specific etiopathogenesis of COVID-19. A study comparing the nasopharyngeal microbiota of patients positive or negative for COVID-19 was recently published [49]. No differences were revealed in diversity or composition [49], although the small number of samples and the presence of potential confounders may have hindered the identification of distinct signatures. Nevertheless, accumulating evidence points to a role of dysbiosis in barrier impairment with increased susceptibility to infection and dysregulated immune response. In this regard, potential approaches aimed at restoring the mucosal homeostasis with direct activity on microbiome composition and/or modulation of the immune response may represent valuable options for the prevention of COVID-19, and the use of treatments not associated with side effects would be desirable to increase the efficacy/safety profile ( Figure 1).
The Nose Ecosystem at Work: The Nasal Barrier
The nose serves important physiological functions; it filters, warms, and humidifies the air so that pollutants, viruses, and bacteria remain confined to the upper respiratory tract (URT), reducing the risk of inflammation and infection of the lungs [5,50]. The particular anatomy of the nose is specifically designed to promote nasal clearance by correctly driving the airflow from the external nostrils through the choana to the nasal posterior airways space (NPAS) [51,52]. In each choana there are three structures, called turbinates, that support the air passage and, thanks to the mucosa on their surface, further improve nasal clearance [52]. The nasal choana is entirely coated with respiratory mucosa, consisting of a ciliated, highly vascularized, pseudostratified epithelium containing a sizeable number of mucus-producing goblet cells [51]. All these cells work in synergy to contribute to the fight against particles, viruses, and bacteria. In particular, the epithelial cells (ECs) provide a physical barrier and, working in conjunction with mucus glands and cilia, filter off the particles that enter the nose. Furthermore, ECs activate the local immune response, a fundamental step to block virus infiltration into the URT [5] and avoid the spread of infection to the pulmonary tract.
Through antigen-binding proteins, ECs take up and present antigens to T cells, facilitating the immune response; this mechanism is also supported by the production of pro-inflammatory cytokines by ECs, which improves the efficacy of local nasal immunity [50]. These cells are in turn supported by the endothelial cells, which attract leukocytes to the site of inflammation [51] (Figure 2).
The final support to the nasal barrier is provided by the nasal-associated lymphoid tissue (NALT), which is widely distributed in the nose of children but tends to regress in adults, in whom it persists only in the NPAS [53].
The nasal microbiota can be a valid ally of the nasal immune response and helpful in fighting viral and bacterial infections, as shown by Salzano et al. [54]. The authors showed that the nasal microbiota acts on the nasal immune response and is responsible for the different responses observed in individuals suffering from nasal inflammation [54]. Specifically, the nasal microbiota is critical for the development of the mucosa-associated lymphoid tissue and the modulation of adaptive responses such as the production of IgA and the activity of T cells. Furthermore, the nasal microbiota regulates the nasal epithelial barrier functions through the increased secretion of mucus and the control of paracellular transport across tight junctions [54].
Targeting the Nasal Microbiota-Immune System Cross-Talk
The cross-talk between the commensal microorganisms and the immune system plays a crucial role in mucosal homeostasis, including the interaction with pathogens and the outcome of infection (Figure 3). The microbiota may protect from pathogens through several mechanisms, which include competition for nutrients and space and the production of antimicrobial peptides. At the same time, the microbiota instructs the immune system to be tolerant towards commensal microorganisms while being prepared to vigorously respond to pathogen colonization. This form of mucosal tolerance is crucial to maintain the balance between the myriad of microbial stimuli to which the immune system is exposed and to fine-tune the immune response based on the effective risk of infection. The delicate regulatory mechanisms that take place in the mucous membranes are susceptible to alterations that may result in pathological conditions, for instance, following microbial dysbiosis, disruption of the epithelial barrier integrity, or loss of the discriminatory function by the immune system. These general mechanisms, described in mucous membranes such as the intestine and the lung, most likely apply to the nasal mucosa [55], which is exposed to microbial and non-microbial environmental antigens constantly inhaled from the outside and interacting with the stable communities of microorganisms that populate the nasal cavities and with the immune cells residing within the nasal-associated lymphoid tissue. Therefore, restoring mucosal homeostasis might represent a valuable strategy to protect from a variety of insults, including pathogen invasion. One such strategy would be the intranasal instillation of bacterial species. For instance, intranasal Staphylococcus epidermidis administration protected from Staphylococcus aureus in a mouse model of sinusitis [56]. Similarly, intranasal inoculation of Lactobacillus sakei protected against Corynebacterium tuberculostearicum sinus infection in a murine model [57]. De Boeck et al. [34] demonstrated that lactobacilli were reduced in the upper respiratory tract of chronic rhinosinusitis patients and tested a formulation of L. casei AMBR2 for nasal administration in healthy volunteers for future therapeutic application. Besides probiotic administration, an alternative strategy would be the modulation of nutrients upon which both the host and the microbiota converge for functional cross-regulation. One such nutrient is tryptophan (Trp), an essential amino acid in mammals, which is the substrate of a multitude of host and microbial metabolic pathways in the generation of bioactive molecules [58-60]. The Trp metabolic pathways play a critical role in immune regulation and immune tolerance. In the host, Trp is catabolized along two major metabolic pathways: towards kynurenines, via the rate-limiting enzymes indoleamine 2,3-dioxygenase 1 (IDO1), IDO2 and tryptophan 2,3-dioxygenase, or towards serotonin [61]. IDO1 is a crucial regulator of the innate and adaptive immune system that acts by depleting Trp at the host/tumor/microbe interface and polarizing the immune response via the activation of regulatory T cells and myeloid-derived suppressor cells [59,62,63]. The potential role of the IDO1 pathway in COVID-19 has been recently described. A metabolomic study revealed that the kynurenine pathway was activated in COVID-19 patients [64].
In agreement with these findings, another study found that Trp metabolism was among the top pathways affected by COVID-19, with reduced levels of Trp, serotonin, and indole-pyruvate, and increased levels of kynurenine, kynurenic acid, picolinic acid, and nicotinic acid, but not anthranilate, suggesting hyperactivation of the kynurenine pathway [65].
The above results would suggest that restoring homeostatic Trp catabolism to re-equilibrate the generation of the different Trp metabolites may represent a therapeutic strategy in COVID-19. Indeed, the IDO1-dependent kynurenine can work as an agonist of the aryl hydrocarbon receptor (AhR), a ligand-activated transcription factor involved in a wide range of physiological processes, including regulation of the immune response [66]. AhR, in turn, may up-regulate the expression and/or activity of IDO1, thus generating a feed-forward loop with consequences on the immune response [59]. Based on the evidence that IL-6 down-regulates IDO1 activity, it has been hypothesized that the beneficial effects of tocilizumab, the blocking antibody against the IL-6 receptor, may also depend on the restoration of the IDO1-AhR pathway [67].
The potential role of the IDO1-AhR pathway is however disputed. A possible aberrant activation of AhR and IDO1 has been put forward to explain the symptomatology of COVID-19 patients [68], and a recent study identified AhR signaling as a common host response to infection by coronaviruses responsible for lung pathogenesis [69]. These results would suggest that inhibiting AhR activity could be therapeutically exploited in COVID-19. However, AhR is a promiscuous receptor that can bind multiple ligands with both positive and negative effects on inflammation, immunity, and infections [66]. In particular, microbial ligands working as AhR agonists may play a beneficial activity in the regulation of mucosal homeostasis, for instance by promoting epithelial barrier integrity [70][71][72], thus increasing the resistance to infection, and, in turn, AhR may regulate the composition of the microbiome [73]. Therefore, it remains to be seen whether AhR antagonism or agonism would be the most promising therapeutic strategy in COVID-19. Whatever the case, it is important to consider the Trp metabolism as a whole, thus including the host and microbial metabolic pathways, as alterations of the Trp flux may result in pathological conditions [60,74]. The balance between the IDO1 and AhR pathways represents a surrogate for the status of the microbiota and the local immune system and a predictive factor for the susceptibility to infection and the disease course. Therefore, the characterization of Trp metabolism in the nasal mucosa would provide information on the host-microbiota cross-talk and open a new avenue for intervention to maintain a balance between immune tolerance and colonization resistance.
Current Clinical Trials Targeting the Nasal Microbiota/Immune System in COVID-19 Patients
The concept of targeting the nasal microbiota/immune system in SARS-CoV-2 infection is currently being exploited in clinical trials. In trial NCT04347538, patients testing positive for COVID-19 are treated with saline nasal irrigation, alone or with baby shampoo, to reduce viral shedding and load. Changes in the mucosal immune response and microbial load in the nasopharynx, as well as viral load, will be measured as primary outcomes. The trial NCT04410263 enrolls COVID-19-positive patients admitted to the ICU, and the microbiota will be evaluated, among other parameters, to identify risk factors for the development of acute respiratory distress syndrome, as a prerequisite for future therapeutic strategies. Finally, the clinical trial NCT04458519 is testing the use of intranasal probiotics in COVID-19-positive patients with mild-to-moderate symptoms, and changes in the severity of infection will be evaluated as primary outcomes.
In conclusion, these ongoing clinical trials emphasize the rationale for targeting the nasal microbiota and immune system to prevent and/or treat SARS-CoV-2 infections, as discussed in the present review.
Conclusions
COVID-19 is still a major threat and, at the time of writing, more than 36 million people have been infected worldwide, with more than 1 million deaths. It is imperative to adopt strategies to control infectious rates and reduce the severity of the disease. Since the virus mainly enters the body through the upper airways, one such strategy would be to reinforce the mucosal barrier, which includes the epithelial lining, the local immune system, and the microbiota. The latter component of the triad can be considered a bona fide natural immune barrier and a target for intervention, not only to improve its intrinsic resistance to infection, but also to homeostatically regulate the local immune system.
Conflicts of Interest:
The authors declare no conflict of interest.
Biosynthesis of Silver Nanoparticles on Orthodontic Elastomeric Modules: Evaluation of Mechanical and Antibacterial Properties
In the present study, silver nanoparticles (AgNPs) were synthesized in situ on orthodontic elastomeric modules (OEM) using silver nitrate as the metal-ion precursor and an extract of the plant Heterotheca inuloides (H. inuloides) as bioreductant, via a simple and eco-friendly method. The synthesized AgNPs were characterized by UV-visible spectroscopy, scanning electron microscopy-energy-dispersive spectroscopy (SEM-EDS) and transmission electron microscopy (TEM). The surface plasmon resonance peak found at 472 nm confirmed the formation of AgNPs. SEM and TEM images reveal that the particles are quasi-spherical. The EDS analysis of the AgNPs confirmed the presence of elemental silver. The antibacterial properties of OEM with AgNPs were evaluated against the clinical isolates Streptococcus mutans, Lactobacillus casei, Staphylococcus aureus and Escherichia coli using agar diffusion tests. The physical properties were evaluated with a universal testing machine. OEM with AgNPs showed inhibition halos for all microorganisms, in contrast with the OEM control. The physical properties increased with respect to the control group. The results suggest the potential of the material to combat dental biofilm and in turn decrease the incidence of demineralization of dental enamel, ensuring its performance in patients under orthodontic treatment.
Introduction
The presence of fixed appliances on tooth surfaces makes tooth cleaning difficult, favoring dental biofilm accumulation [1]. After the bonding of orthodontic appliances, there are documented increases in the amounts of Streptococcus mutans and Lactobacilli in the saliva and dental plaque of patients [2]. These microorganisms have been identified as the main pathogens in dental caries, and their presence increases the risk for decalcification [3]. White spot lesion (WSL) around brackets is a major complication in patients with fixed orthodontic treatments, especially those with poor oral hygiene. These lesions are due to demineralization of enamel by acids from biofilms around the brackets [4,5]. Development of WSL during fixed appliance therapy can occur rapidly. Studies by O'Reilly et al. and Øgaard et al. showed development of clinically visible WSL in orthodontic patients within four weeks or less [6,7]. Gorelick et al. studied the incidence of WSL in orthodontic patients and found that almost 50% of orthodontic patients developed at least one WSL during the course of treatment [8-10].
The method of ligation of orthodontic arch wires is a relevant factor that accounts for dental biofilm retention. In the search for more practical and efficient orthodontic accessories, elastomeric modules (ligatures) have been suggested as the material of choice to connect stainless steel arch wires to brackets instead of metallic ligatures [11]. Orthodontic elastomeric modules (OEM) are synthetic elastics made of polyurethane material, with advantages such as quick application, patient comfort and lower cost than self-ligating clips [12]. Apart from these practical benefits, it is evident from the literature that elastomeric ligatures exhibit a greater number of microorganisms in the plaque around the brackets when compared with steel ligatures [13].
Forsberg et al. evaluated the microbial colonization of twelve patients treated with fixed orthodontic appliances and reported that the lateral incisor attached to the arch wire with an elastomeric ligature exhibited a greater number of microorganisms in dental plaque. They also reported a significant increase in the number of S. mutans and Lactobacilli in saliva after the insertion of fixed appliances [14,15]. The rough surface and the absorption properties of elastomeric ligatures further contribute to the formation of bacterial plaque on their surfaces, resulting in accumulation of a higher number of microorganisms on tooth surfaces [16]. They recommended that the use of elastomeric ligatures should be avoided in patients with inadequate oral hygiene because elastomeric ligatures will significantly increase microbial accumulation on tooth surfaces adjacent to the brackets, leading to a predisposition for the development of dental caries and gingivitis [14].
Elastomers in the oral cavity rapidly become coated with salivary proteins, and biofilm contributes to the deterioration of their physical properties. If elastomeric modules lack adequate physical properties, clinical application becomes difficult and time-consuming, which may cause undesirable tooth movement and prolong orthodontic treatment [12].
Plaque control is a critical factor that may limit the implantation and settling of the microorganisms that cause caries and periodontal disease [17].
During orthodontic treatment, some preventive measures may be adopted to protect tooth structure. Oral hygiene instruction and supervision, nutritional counseling, plaque staining, professional tooth cleaning and daily mouth rinses with fluoride solution are some methods used by orthodontists that depend on the cooperation of the patient. Ideal prevention should not depend on patient cooperation [18,19].
More recently, these general measures are increasingly being supplemented with specific recommendations for the treatment of bracket problem zones. Some noteworthy methods include fluoride (F)-releasing adhesives and fluoride-releasing elastomeric ligature ties [19,20]. Nevertheless, the protocols of fluoride application are not totally effective for controlling dental caries during orthodontic treatment [21].
Fluoride-releasing elastomeric ligatures have been reported to reduce dental biofilm formation and improve enamel remineralization in areas near the bracket base, which are difficult to clean [11]. Benson et al. found that fluoridated elastomers were not effective in the reduction of streptococcal growth after a clinically relevant time [22]. Several studies have investigated the performance of fluoride-releasing elastomers in decreasing both the formation of S. mutans colonies or biofilms and the susceptibility to developing carious lesions around orthodontic brackets. Generally, the findings of these studies have shown that fluoride-releasing elastomeric rings were not effective for that purpose [11]. Fluoride can inhibit demineralization and promote remineralization of hard dental tissues, but studies indicated that the duration of fluoride release is short [5]. Studies found that over half the total fluoride content of fluoride-releasing elastomers stored in vitro was released in the first 24 h, and 90% by the end of the first week [23].
Recently, a product that releases silver ions from silver-zeolite incorporated into an elastomer (Orthoshield Safe-T-Tie) has been introduced in order to reduce bacterial development around orthodontic appliances. Nevertheless, Kim et al. found no significant differences between the antimicrobial effects of the silverized elastomers and the conventional elastomers. This in vivo study suggests that the concentration of released ions was not sufficient to impede bacterial growth [24].
Similarly, Won did not find clear zones for either S. mutans or Porphyromonas gingivalis around silverized elastomers in a modified agar disk diffusion test. Silverized elastomers were also ineffective in a growth inhibition test when they were in direct contact with these microorganisms. Won speculated that the concentration of the silver ions in the silverized elastomers was insufficient for antimicrobial activity [24,25]. Also, O'Dell reported that silver-releasing elastomeric ligatures were not effective in inhibiting growth of S. mutans in vitro [26].
Nevertheless, Bai et al. found that these technological modifications of the elastomers are a definite improvement over the regular elastomers with regard to adhesion of S. mutans and Lactobacilli [13]. Caccianiga et al. concluded that Orthoshield Safe-T-Tie ligatures reduce gingival inflammation and periodontal pathogens in orthodontic patients. More studies will be necessary [27].
Nanotechnology has been applied to dental materials as an innovative concept for the development of materials with better properties and anti-caries potential. Nanomaterials have great potential to decrease biofilm accumulation, to inhibit the demineralization process and to combat caries-related bacteria [28]. Silver nanoparticles have been synthesized and incorporated into several biomaterials [29]. The use of plant extracts for nanoparticle synthesis may be advantageous over other biological processes, because it avoids the elaborate process of maintaining cell cultures and can also be used for large-scale nanoparticle synthesis. Additionally, the green chemistry approach to the synthesis of nanoparticles using plants avoids the generation of toxic byproducts. Among the various known synthesis methods, plant-mediated nanoparticle synthesis is preferred as it is cost-effective, eco-friendly and safe for human therapeutic use [30][31][32]. Phytochemical compounds such as saponins, phenolic compounds, phytosterols and quinones present in plant biomolecules have both preservative and reductive activity [33].
However, most conventional methods for the preparation of nanoparticles involve hazardous chemicals, low material conversion and high energy requirements [40]. Also, employing synthetic stabilizing agents can generate hazardous byproducts, making these methods unsuitable for biological applications [41]. So, there is a need to develop high-yield, low-cost, non-toxic and environmentally friendly procedures [42]. In such a situation, the biological approach appears to be very appropriate. Natural materials, such as plants, bacteria, fungi and yeast, have been used for the synthesis of silver nanoparticles [40]. The dried flower of Heterotheca inuloides, which is called "arnica", has been used in Mexican traditional medicine to treat inflammatory discomfort [43,44].
In the present study, we synthesized metallic silver nanoparticles using the extract of Heterotheca inuloides and evaluated the antibacterial and physical properties of orthodontic elastomeric modules decorated with these silver nanoparticles (AgNPs).
Characterization of the Silver Nanoparticles Biosynthesized
All synthesis parameters were investigated in order to decorate the elastic modules with silver nanoparticles that were not agglomerated but were present in sufficient quantity to provide good antibacterial activity. Pretreated orthodontic elastomeric ligatures were immersed in 8 mL of 1 × 10 −2 M silver nitrate (AgNO 3 ) (Sigma-Aldrich, St. Louis, MO, USA) for 60 min and then 2.5 mL of Heterotheca inuloides extract was added to reduce the Ag + ions. The synthesis of silver nanoparticles was carried out for 12 h. The bioreduction of AgNO 3 into AgNPs can be confirmed visually by the change in the solution color from colorless to reddish brown. The UV-Vis absorbance of the AgNPs shows the characteristic plasmon absorption peak, which was detected at 472 nm (Figure 1). The elastic modules are originally very transparent; after the incorporation of silver nanoparticles on the surface, they show a slight change of color and appear slightly yellow.
The energy dispersive spectrometry (EDS) analysis recorded for the AgNPs is shown in Figure 2.
Figure 4 shows the thermogravimetric analysis (TGA) curves of the orthodontic elastic modules with and without AgNPs from 30 °C to 500 °C. The TGA curve of the control orthodontic elastic modules showed T5 at 306 °C, with the other stages of degradation at 354 °C and 398 °C, while the TGA of the orthodontic elastic modules with AgNPs showed T5 at 303 °C, 320 °C and 398 °C. Control orthodontic elastic modules showed a lower onset degradation temperature in comparison to the orthodontic elastic modules with AgNPs.
Antibacterial Activity
In addition to the bacteria most commonly evaluated in this type of research, Gram-positive S. aureus and Gram-negative E. coli, two other microorganisms common in the oral cavity were evaluated: L. casei and S. mutans, both Gram-positive. The results of the antimicrobial activity are shown in Figure 5. The control sample revealed no activity against any of the tested microorganisms. Orthodontic elastomeric ligatures containing AgNPs exhibited antibacterial activity against Gram-negative and Gram-positive bacteria. The mean values and standard deviations of the zones of growth inhibition (mm) for the orthodontic elastic modules and the paper disk are shown in Table 1.
Physical Properties
The t-test revealed there were significant differences between orthodontic elastic modules control and orthodontic elastic modules decorated with AgNPs (p < 0.05) ( Table 2). Physical properties (maximum strength, tension and displacement) of orthodontic elastic modules with AgNPs increased with respect to control group ( Figure 6).
Discussion
The reduction of silver ions was considered to occur due to the phenolic components present in the extract of Heterotheca inuloides [45]. Further studies are required to establish the mechanism of formation and stabilization of nanoparticles.
The biosynthesis of AgNPs was initially observed through the color change from colorless to reddish brown, which is due to the excitation of surface plasmon resonance vibration in AgNPs. Similar results were observed with various plants, as reported by Sudhakar et al. and Joy Prabu et al. [46,47]. Generally, the characteristic surface plasmon band of AgNPs falls within the wavelength range of 350-500 nm [48]; the appearance of a surface plasmon peak at around 472 nm therefore confirms the formation of AgNPs. The formation of silver nanoparticles by bioreduction is usually complete at about 6 h; with H. inuloides in particular, we found that leaving the elastic modules in contact with the silver nanoparticle suspension for 12 h achieves a higher loading, reaching 16% by weight of silver nanoparticles without agglomeration, which supports a high antibacterial effectiveness.
The elastomeric ligatures are made of polyurethane, a thermosetting polymer with a -(NH)-(C=O)-O- structural unit formed by step-reaction (condensation) polymerization. The manufacture of polyurethane elastomers involves several stages. These polymers have short rigid portions (the aromatic rings and the urea) joined by short flexible hinges (the diamine linker and the CH2 group between the aromatic rings) and long, very flexible portions (the polyether) whose length can be adjusted [49]. These functional groups provide sites for binding AgNPs.
Thermogravimetric analysis curves show differences in the thermal stability of the control modules and the modules with AgNPs. This effect is mainly due to the NaOH pretreatment, which could generate a degree of surface hydrolysis of the urethane groups. However, these differences did not impair the antibacterial or physical properties of the modules with AgNPs, as demonstrated by the tests carried out on the elastic modules after the treatments with isopropyl alcohol and sodium hydroxide and after the incorporation of silver nanoparticles. On the contrary, the physical properties increased slightly; nevertheless, further studies should be carried out to evaluate in detail the stability and all physical and mechanical properties.
Silver has superior antibacterial activity compared to other metals; it has a strong cytotoxic effect on a broad range of microorganisms in both metallic and ionic forms. Several studies have evaluated the cytotoxicity of silver nanoparticles towards fungi, protozoa, a number of viruses, and Gram-negative and Gram-positive bacteria such as Streptococcus mutans, Lactobacillus sp., Escherichia coli and Staphylococcus aureus, confirming the antibacterial and bactericidal properties of silver nanoparticles [50][51][52][53]. Hernández-Sierra et al. indicated that AgNPs inhibit the growth of S. mutans at lower concentrations compared to Zn-NPs and Au-NPs and thus may be more effective against dental caries [54]. Our results show that orthodontic elastic modules decorated with silver nanoparticles inhibited not only the bacteria on the material surfaces but also bacteria away from the material in the culture medium, for S. mutans, L. casei, S. aureus and E. coli. This indicates the potential of these materials to reduce the incidence of enamel decalcification in orthodontic patients, because a significant reduction in S. mutans and L. casei was shown.
The mechanism of antibacterial activity is not well understood; possibly the AgNPs inhibit the enzymes of the cell respiratory cycle and damage deoxyribonucleic acid (DNA) synthesis, leading to cell death [54,55]. In the present study, the Ag salt was reduced to AgNPs in situ, avoiding the need for prefabricated nanoparticles to be mixed with the polymer, which could cause agglomeration. The high surface area of the AgNPs provided a potent antibacterial effect together with better physical properties; the only drawback was a change in color, from clear to light yellow, as a result of the incorporation of AgNPs in the orthodontic elastic modules. In all probability, the colour appearance of the tooth will not be affected by the addition of AgNPs, as shown in the study by Argueta-Figueroa [56]; this is because the AgNPs were synthesized in situ on the modules and van der Waals interactions give the positively charged nanoparticles a strong attraction to the support (the modules). In addition, these modules are changed every month during the treatment review. Direct comparison of these results with other studies is difficult because there are no similar published studies.
AgNPs have also been applied in several areas of dentistry, such as endodontics [57,58], dental prostheses [59,60], implantology [61,62], restorative dentistry [63,64] and orthodontic adhesives [65,66]. Nanomaterials provide superior antimicrobial activity and display comparable physical properties when compared with conventional materials; this is probably due to the small size and high surface area of the nanoparticles [28,67]. Nevertheless, the oral environment is dynamic, with constant changes in temperature, pH, and the volume of fluids washing over the modules; a further complication could be differences in diet, salivary flow rates, and oral-hygiene regimens [23]. This study was performed in vitro and the physiological conditions of in vivo studies may differ [68]. More refined methods are necessary to simulate more precisely the dynamic relationship between wire, bracket, and ligature during tooth movement. Further in vivo studies should be performed to determine the long-term performance of orthodontic materials using nanotechnology.
Silver is known to have low toxicity and good biocompatibility with human cells [69]. However, further specific studies are needed to determine its cytotoxicity when AgNps are attached to orthodontic elastic modules.
Pre-Treatment of Orthodontic Elastic Ligatures
Orthodontic elastomeric ligatures (Mini Stix ligature ties non-coated, TP Orthodontics, LaPorte, IN, USA) were immersed in isopropyl alcohol and cleaned in an ultrasonic cleaner (Branson 1510R-DTH, Branson Ultrasonics, Danbury, CT, USA) for 30 min, rinsed with deionized water, and then immersed in 10% NaOH. After that, the orthodontic elastomeric ligatures were placed in the ultrasonic cleaner once more for 30 min and then rinsed several times with deionized water.
Preparation of the Heterotheca Inuloides Extract
1 g of Heterotheca inuloides from Anahuac Mexican teas (99.90% of purity) was boiled for 5 min in 100 mL of deionized water and then filtered. The aqueous extract was used as the reducing agent for synthesis of silver nanoparticles [70].
In Situ Synthesis of AgNPs in Orthodontic Elastic Ligatures
Pretreated orthodontic elastomeric ligatures were immersed in 8 mL of 1 × 10 −2 M silver nitrate (AgNO 3 ) (Sigma-Aldrich, St. Louis, MO, USA) for 60 min and later 2.5 mL of Heterotheca inuloides extract was added to reduce the Ag + ions. The synthesis of silver nanoparticles was carried out for 12 h in the dark (to minimize photoactivation of the silver nitrate). Later, the orthodontic elastomeric ligatures were removed from the solution and allowed to dry at room temperature for 8 h.
Characterization of AgNPs
Reduction of the Ag + ions was assessed by measuring the UV-Vis spectrum of 1 mL aliquots of the sample in a quartz cell, as follows. UV-Vis spectral analysis of the AgNPs was carried out in a Cary 5000 UV-Vis spectrophotometer. Measurements were performed over the 200-800 nm range at a resolution of 1 nm. The synthesized AgNPs were characterized by scanning electron microscopy with energy dispersive spectrometry (SEM-EDS) (JEOL, JSM-6510LV, Tokyo, Japan) at 20 kV of acceleration using secondary electrons, and by transmission electron microscopy (TEM) in a JEOL-2100 microscope (Tokyo, Japan) at 200 kV of acceleration in bright field mode. To prepare the samples for TEM, the specimens were sonicated for 3 h to detach the nanoparticles from the orthodontic elastomeric ligature.
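As an illustration only, and not part of the published protocol, the position of the plasmon peak can be read off an exported wavelength/absorbance table; the file name and column names in the following R sketch are hypothetical.

# Minimal sketch in R, assuming a CSV export with columns wavelength_nm and absorbance
spec <- read.csv("agnps_uvvis.csv")
# restrict to the 350-500 nm window where the AgNP surface plasmon band is expected
win  <- subset(spec, wavelength_nm >= 350 & wavelength_nm <= 500)
peak <- win[which.max(win$absorbance), ]
cat(sprintf("SPR peak at %.0f nm (absorbance %.3f)\n", peak$wavelength_nm, peak$absorbance))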
Thermogravimetric Analysis
The thermal stability of the conventional orthodontic elastic modules and of those with AgNPs was examined by thermogravimetric analysis (TGA) using an SDT Q600 instrument. The weight change of each sample was evaluated at a heating rate of 10 °C/min up to 600 °C in a nitrogen atmosphere (flow of 100 mL/min).
Antibacterial Activity
The in vitro antibacterial activity of the samples was determined using a direct contact test with the agar diffusion technique according to the Clinical and Laboratory Standards Institute (CLSI) [71]. Mueller-Hinton agar (MHA) (BD Bioxon, Sparks, MD, USA) was prepared and inoculated with the bacterial cultures. Mueller-Hinton agar with 5% sheep blood was used for testing L. casei.
Bacterial strains used in this study were obtained from the culture collection of the Biochemistry Laboratory of the School of Dentistry, National Autonomous University of Mexico (UNAM). Strains used are endemic to the region from central Mexico, and each one was characterized by cultural and biochemical test [72].
Antibacterial activity of AgNPs was investigated against a panel of clinically relevant microorganisms, representative for Gram-positive and Gram-negative bacteria commonly used as standards: S. aureus, E. coli, S. mutans and L. casei.
The culture was adjusted with sterile saline to achieve a turbidity equivalent to a 0.5 McFarland standard, or 10 8 CFU/mL. The agar plates were inoculated from the standardized cultures of the test microorganisms using a sterile cotton swab, spreading as uniformly as possible throughout the entire medium. Three orthodontic elastomeric ligatures with AgNPs, one control orthodontic elastomeric ligature, one filter paper disk impregnated with 10 µL of the AgNP suspension and one control disk were firmly placed on the agar plates. Inoculated agar plates were incubated at 37 °C for 24 h. Agar plates with S. mutans and L. casei were incubated in an anaerobic jar. Antibacterial activity was evaluated by measuring the diameter of the inhibition zone (mm) on the surface of the plates, and the results were reported as mean ± standard deviation. The antimicrobial activity was assessed using procedures from the Clinical and Laboratory Standards Institute [52].
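The summary statistics reported in Table 1 can be obtained as sketched below; the organism names follow the paper, but the diameters are placeholder values, not the study's measurements.

# Minimal sketch in R: mean and standard deviation of inhibition-zone diameters (mm) per organism
zones <- data.frame(
  organism = rep(c("S. mutans", "L. casei", "S. aureus", "E. coli"), each = 3),
  diameter = c(9.1, 9.4, 8.8, 8.5, 8.9, 8.7, 10.2, 9.8, 10.0, 9.0, 9.3, 9.1)  # illustrative values only
)
aggregate(diameter ~ organism, data = zones,
          FUN = function(x) c(mean = mean(x), sd = sd(x)))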
Mechanical Properties
Mechanical properties (maximum strength, tension and displacement) of the orthodontic elastic ligatures decorated with AgNPs and of the conventional ligatures were tested using a universal testing machine (Autograph AGS-X, Shimadzu, Kyoto, Japan). Using a U-shaped hook adapted to the machine, the elastomeric ligatures were stretched until they broke. This was carried out at a crosshead speed of 100 mm/min. As each elastomer was stretched, force (newtons) and extension (mm) were measured and recorded.
Maximum force was operationally defined as the ability to move the maximum weight in a single repetition; tension as the effect of an applied force on the specimen, increasing its elongation; and displacement as the change in position.
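A hedged sketch of the corresponding analysis is given below, assuming the testing machine exports one force-extension trace per specimen with the column names shown; these names, and the use of a two-sample t-test, are assumptions for illustration rather than the authors' exact workflow.

# Minimal sketch in R: per-specimen maximum force, then a comparison between groups
traces    <- read.csv("elastomer_traces.csv")  # hypothetical columns: specimen, group, extension_mm, force_N
max_force <- aggregate(force_N ~ specimen + group, data = traces, FUN = max)
t.test(force_N ~ group, data = max_force)      # control vs AgNPs-decorated modules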
Conclusions
We have demonstrated that silver nanoparticle biosynthesis with Heterotheca inuloides offers an eco-friendly, non-toxic, simple and economical pathway to synthesize stable AgNPs with a controlled average size of 17 nm. UV-visible spectroscopy showed a peak at 472 nm, confirming the formation of AgNPs. Orthodontic elastic modules decorated with AgNPs can inhibit the growth of three important Gram-positive microorganisms commonly found in the oral cavity, S. mutans, L. casei and S. aureus, as well as of Gram-negative bacteria such as E. coli, demonstrating that the composite possesses broad-spectrum antibacterial activity. Orthodontic elastic modules decorated with AgNPs showed higher maximum strength, tension and displacement compared to conventional modules. The results suggest the potential of the composite to combat dental plaque and therefore decrease the incidence of dental enamel demineralization, ensuring its performance in patients with orthodontic treatment.
Demographic factors associated with length of stay in hospital and histological diagnosis in adults undergoing appendicectomy
Martin Richardson, Rishi Singhal. Department of General Surgery, Birmingham Heartlands Hospital, University Hospitals Birmingham NHS Foundation Trust, United Kingdom; Clinic of General Surgery, Warwick Hospital, South Warwickshire NHS Foundation Trust, Warwick, United Kingdom. Original article, Turk J Surg 2022; 38(1): 36-45.
Introduction
Appendicitis is one of the most common but also one of the most challenging acute surgical diagnoses (1). Despite an ongoing debate of conservative versus surgical management, appendicectomy remains one of the most frequently performed emergency surgical procedures in the world.
The wide differential diagnosis of acute lower abdominal pain often results in equivocal diagnosis of acute appendicitis. Whilst risk scoring systems have been developed that utilize a mixture of clinical and biochemical parameters (2)(3)(4), these are poorly implemented particularly in the United Kingdom. Due to a lack of validation of these scoring systems, a strong belief remains among practitioners that the decision to investigate invasively and consider operative management is best left to the judgement of the surgeon (5). The difficulties in clinical diagnosis of appendicitis, which are not purely investigation related, can therefore result in variability in the delivery of treatment and a potential subsequent variation in outcome.
Socioeconomic deprivation has been shown to be associated with increased length of stay in a variety of admissions, such as for coronary artery bypass and hip surgery (6,7). This is also supported by various national economic studies that suggest that lower income groups utilize more healthcare facilities, potentially due to later presentation to health professionals and as a result being more acutely unwell at first healthcare contact (8). This association has been reported to hold across primary care services, emergency departments and tertiary care centers (9)(10)(11).
The continued refinement of modern acute surgical services and the ongoing uptake of laparoscopic surgery for even severe appendicitis has led to shortened length of stay in hospital (12). These improvements in care should apply to patients of all ages and ethnicities and across all socioeconomic classes. However, evidence from centers around the world exists for differences between socioeconomic groups in both postoperative recovery and length of stay. An 11-year national analysis carried out in Taiwan demonstrated longer length of stay and increased costs in patients undergoing appendicectomy who belonged to lower socioeconomic groups (13). Despite ongoing debate on the etiology of appendicitis with many studies published on appendicitis pathology (14), there remains a paucity of research investigating the impact of social deprivation on outcomes following surgery for acute appendicitis.
Although differential incidence rates of appendicitis based on ethnic origin have been previously suggested, no direct associations have been proven definitively (15). Investigating the influence of ethnicity on the risk of developing acute appendicitis remains challenging as many other factors may be confounding such as social deprivation, cultural attitudes and biological factors that have yet to be elucidated.
The aim of our study was to investigate the relationship between demographic factors (including ethnicity and social deprivation) and length of stay in hospital and the final histological diagnosis in patients who had undergone appendicectomy in a single NHS trust.
Data Source
All patients aged 17 years or older who underwent appendicectomy between 2010 and 2016 at any of the three hospitals within a single UK NHS Foundation trust in the West Midlands were included. The study was registered with the relevant NHS organisation's governance department as a clinical audit, therefore an NHS Research Ethics Committee review was not required. Data were extracted from electronic health records. Variables collected included date of birth, gender, ethnicity, residential postcode, admission date, discharge date and histological diagnosis.
Variables used
Due to significant heterogeneity in the clinical coding of ethnicity, patients were reclassified into one of a set of nine ethnic categories: African, Asian (east/south/other), Caribbean, Caucasian, mixed, other or unknown.
Residential postcodes were used to obtain each patient's corresponding ranking score based on the 2015 Indices of Multiple Deprivation. This is a multi-domain score generally regarded as the best measure of social deprivation available in the UK (16). As an indication, Birmingham ranked as the 6th most deprived Core City Local Authority at the time the indices were derived. Furthermore, areas within both the top and bottom deciles of deprivation were present within the catchment area of the NHS trust investigated in this study.
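A hedged sketch of this linkage step is shown below; the two lookup files are hypothetical stand-ins for the official resources (in practice postcodes map to Lower-layer Super Output Areas, which in turn carry an IMD 2015 rank).

# Minimal sketch in R: attach an IMD 2015 rank to each patient via residential postcode
patients         <- read.csv("appendicectomy_patients.csv")  # hypothetical: includes a 'postcode' column
postcode_to_lsoa <- read.csv("postcode_lsoa_lookup.csv")      # hypothetical: postcode, lsoa_code
lsoa_to_imd      <- read.csv("imd2015_by_lsoa.csv")           # hypothetical: lsoa_code, imd_rank
patients <- merge(patients, postcode_to_lsoa, by = "postcode",  all.x = TRUE)
patients <- merge(patients, lsoa_to_imd,      by = "lsoa_code", all.x = TRUE)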
Admission and discharge dates were used to calculate each patient's length of stay; this was rounded up and measured in whole calendar days.
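One plausible reading of this calculation, sketched in R with assumed field names (the exact rounding convention is not fully specified in the paper):

# Minimal sketch in R: length of stay in whole days, rounded up
patients <- read.csv("appendicectomy_patients.csv")  # hypothetical: admission_datetime, discharge_datetime
patients$los_days <- ceiling(as.numeric(difftime(
  as.POSIXct(patients$discharge_datetime),
  as.POSIXct(patients$admission_datetime),
  units = "days")))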
Histology reports were reviewed for any macroscopic or microscopic confirmation of inflammation within the appendix. A modified version of the classification system for appendicitis previously described by Carr was used (17). Diagnoses were categorized into one of four groups: (i) normal or non-inflamed appendix, (ii) 'uncomplicated appendicitis' -inflamed appendix with or without suppuration, (iii) 'complicated appendicitis' -evidence of perforation, necrosis or both arising from the appendix, (iv) 'gangrenous appendicitis' -evidence of gangrene within the appendix with or without perforation. Where histological analysis gave an alternative diagnosis such as a neuroendocrine tumor or inflammatory bowel disease, these were categorized as a non-inflamed appendix. This enabled us to correlate demographic variables against histological evidence of appendicitis and not other diseases, as well as ensure maximal inclusion of patients in the analysis.
Statistical Analyses
The predictive potential of the demographic variables collected against length of stay in hospital was investigated using a generalized linear model. For histological diagnosis, a nominal variable, multinomial logistic regression was used. Statistical analysis was performed using R version 3.5.1 'Feather Spray' (18).

Results

A total of 3444 patients were included in the study (Table 1). There were 1675 female patients (48.76%) and 1769 male patients (51.24%). There was no significant difference in the sex distribution (Chi-squared test, p= 0.2573). The average age for the whole cohort of patients was 37.8 years (range 73 years); however, as expected, there was a higher number of younger patients amongst both genders.
Distribution of the included patients across the defined ethnic categories used in this study can be found in Table 1. The two most prevalent categories were Caucasian (72.82%) and South Asian (12.70%). The ethnic categories with the highest mean IMD rankings, and therefore the most socioeconomically deprived, were African and South Asian, whereas the least deprived ethnic categories were East Asian and Caucasian (Table 2).
The total mean length of stay in hospital for all included patients was 4.33 days (SD 5.9 days); however, amongst older patients, length of stay in general was greater than this (Table 1). Figure 1 illustrates mean length of stay in hospital for quartiles of social deprivation in each of the ethnic categories. Amongst most ethnic categories, length of stay was variable across quartiles of deprivation. Notably, however, length of stay was similar across deprivation quartiles amongst Caucasian patients and, interestingly, showed an increasing trend in South Asian patients. This suggests that deprivation influenced length of stay in South Asian patients specifically, with more deprived South Asian patients having a greater length of stay than more affluent patients.
The percentage of appendixes that were histologically reported as normal or non-inflamed across all included patients was 28.37% (Table 3). 54.38% of reports were of uncomplicated appendicitis, with 17.25% of histology reports containing one or more of necrosis, perforation or gangrene.
Length of Stay
A generalized linear model was created to test the associations of age, gender, ethnicity and deprivation against length of stay (Table 4). Due to a potential confounding association between ethnicity and deprivation, an interaction term was included for these two variables.
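A hedged sketch of such a model in R is shown below; the variable names are hypothetical, and because the paper does not state the error family, poisson() appears only as a placeholder (chosen because Z-values are reported in Table 4).

# Minimal sketch in R: GLM for length of stay with an ethnicity x deprivation interaction
patients <- read.csv("appendicectomy_cohort.csv")  # hypothetical analysis dataset
fit_los  <- glm(los_days ~ age + sex + ethnicity * imd_rank,
                family = poisson(), data = patients)  # family is an assumption, not stated in the paper
summary(fit_los)                                      # coefficients as summarised in Table 4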
Age was found to be the most significant predictor of increased length of stay in hospital (Z-value= 52.448, p< 0.001). This association was to be expected.
South Asian ethnicity was found to be independently associated with an increased length of stay in hospital (Z-value= 3.478, p< 0.001). Sex and social deprivation were not statistically significant independent predictors of increased length of stay.
However, when interaction terms between deprivation and each of the ethnicities were considered, it was found that amongst South Asian patients only, social deprivation was associated with an increased length of stay in hospital (Z-value 2.841, p= 0.005). This suggests that although social deprivation is not a predictor of longer hospital stay in all patients, it may influence the admission duration of South Asian patients specifically.
Histological Diagnosis
The histological diagnoses found in each ethnic category are shown in Table 5. From a multinomial logistic regression analysis with histological diagnostic category as the dependent variable and a non-inflamed appendix as the reference outcome, it was found that IMD rank was independently predictive of having specifically 'complicated' appendicitis following appendicectomy (p= 0.01), but was not associated with having any evidence of gangrene on histology (p= 0.68). Therefore, the results of this study suggest that being more socioeconomically deprived results in a greater likelihood of having necrosis with or without perforation reported on histology but not gangrene.
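A hedged sketch of this model using the nnet package is given below; the variable and level names are hypothetical.

# Minimal sketch in R: multinomial logistic regression with 'non-inflamed' as the reference outcome
library(nnet)
patients <- read.csv("appendicectomy_cohort.csv")  # hypothetical analysis dataset
patients$histology <- relevel(factor(patients$histology), ref = "non-inflamed")
fit_hist <- multinom(histology ~ age + sex + imd_rank + ethnicity, data = patients)
summary(fit_hist)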
Age was found to be a significant predictor of having confirmed appendicitis compared to having a histologically non-inflamed appendix (p= 0.021), but no apparent difference in odds ratios was found in the prediction of complicated or gangrenous appendicitis based on age. Similarly, male sex was found to be significantly associated with having confirmed appendicitis (p< 0.001 in all 3 categories) compared to a histologically non-inflamed appendix, but much like age did not appear to predict complicated or gangrenous appendicitis. Any associations between any of the ethnic categories with any category of appendicitis did not hold when interacted with IMD ranking-these associations were therefore deemed not independent and so their statistical significance was ignored.
Discussion
This study is the first analysis to suggest that South Asian ethnicity appears to be independently associated with increased length of stay in admissions for operative management of acute appendicitis. In addition, amongst South Asian patients, social deprivation was found to be associated with increased length of stay. This study is the first to investigate possible associations between demographic variables and histological evaluation post-appendicectomy. Social deprivation was found to be associated with a greater likelihood of observing necrosis with or without perforation on histological analysis, but did not predict the presence of gangrene. In keeping with previously published datasets, male gender and age remain strong independent predictors of positive histology for appendicitis following appendicectomy. Finally, we report a negative appendicectomy rate of 27.8% over 7 years in this single center analysis, marginally higher than the historically acceptable rate of 15-25% (19).
Role of Ethnicity
The catchment area of the NHS trust investigated in this study included significant areas with large non-Caucasian communities with and without social deprivation. Also included were municipalities with largely Caucasian populations, again with varying levels of deprivation. The sample of patients obtained overall therefore was felt to be sufficiently representative of the UK population.
Diagnostic delay may lead to either or both of increased length of stay in hospital and prolonged postoperative recovery. Cultural differences between ethnicities may affect presentation behaviors and pathways to secondary care and attitudes to emergency surgery (20). Where language barriers exist between patients and healthcare professionals, the reporting of symptoms may be affected sufficiently to lead to delays in action being taken or misunderstandings regarding severity of symptoms leading to potential overtreatment. As these factors can either prolong or shorten admission duration, the data collected and analyses performed in this study are unable to differentiate whether any of them are independently important factors.
Nonetheless, the specific associations between South Asian ethnicity both alone and in combination with deprivation are surprising, particularly given that the center studied is known to have a significant South Asian population. This ethnic category in particular may have certain perioperative risk factors that may affect recovery from appendicectomy, such as diabetes, obesity and cardiovascular disease. However, this may not explain the observations from this study fully, given the major proportion of younger patients included, who would be expected to have minimal co-morbidities. Although not fully elucidated, acute appendicitis may have a more severe etiology in South Asian patients; the exact reasons for this are unclear. However, no significant association was found between South Asian ethnicity and a histological diagnosis of more complicated appendicitis.
Overall, whilst an association between South Asian ethnicity and prolonged length of stay for admissions with appendicectomy has been observed in this study, no definitive conclusions can be made as to reasons why this is the case. Some of these reasons, to name a few, may be a disparity in health-related knowledge between South Asians and other ethnicities, increased prevalence of certain co-morbidities (e.g., type 2 diabetes), cultural differences in attitudes to healthcare in the perioperative period (21), differences in the pharmacokinetics of antibiotics and analgesics (22), measurement of symptoms using methods that are less appropriate for certain ethnicities (e.g., pain scales not being entirely holistic). Furthermore, the causes for South Asians having increased length of stay may be different geographically and vary by local health economy.
Role of Social Deprivation
Associations between social deprivation and longer admission duration have been observed elsewhere in the literature and in different clinical settings (6,7). With a relationship only seen in South Asian patients, this study is the first to suggest an increased length of stay due to deprivation amongst a relatively specific demographic group in a single acute NHS trust. In addition, this study is the first in general to be able to provide potential insights into the influence of social deprivation on length of stay across different ethnic categories following emergency surgery.
There may be numerous reasons for longer stays in hospital amongst the more deprived population. These may include different thresholds for patients to present themselves acutely, differences in routes of presentation either via primary or directly to secondary care and differences in health education and awareness regarding the nature of acute medical issues. Different attitudes to surgery could lead to differences between socioeconomic groups in potential attempts at non-operative management-for example there may be preferences amongst certain groups not to have surgery and therefore trial antibiotics first. Finally, there may be ethnic or cultural barriers to timely discharge from hospital following surgery, such as availability of care at home or in social care (23).
The observation that increased social deprivation raises the likelihood of normal histology following appendicectomy may be due to a vast array of factors. A key factor influencing histology will be the preferences of operating surgeons regarding the removal of the macroscopically normal 'lily-white' appendix.
If there is a preference to perform an appendicectomy even when laparoscopy is negative for acute appendicitis, the proportion of histopathology results reporting a non-inflamed appendix will be increased independent of any patient factors. For the three hospitals investigated in this study, the departmental policy was that if acute appendicitis was deemed the most likely diagnosis preoperatively and no other pathology to explain the patient's symptoms was identified on laparoscopy, appendicectomy was justifiable. No data regarding the macroscopic appearance of the appendix at the time of surgery was collected in this study, which precluded any investigation into surgeon-based influences on the study results. However, the data obtained in this study allowed a pragmatic investigation into histological outcomes following appendicectomy.
Despite the lack of high-level evidence in the form of randomized trials or meta-analyses, there appears to be a growing international consensus that during surgery for suspected acute appendicitis, a macroscopically normal appendix should be removed (24). Nevertheless, heterogeneity in management of the normal appendix means that patient factors affecting histological diagnosis will always remain difficult to investigate from observational studies in particular. Data from this study suggest an increased likelihood of negative histology amongst patients who are more socioeconomically deprived. Any further studies investigating demographic factors and histological outcomes following appendicectomy will need to remain pragmatic in order to accommodate differences in operative management of acute appendicitis.
Finally, many of the possible reasons for the observed influence of social deprivation and histological outcome overlap with previously mentioned factors affecting length of stay such as pathways to the operating theatre, health behaviors and attitudes affecting time to presentation from onset of symptoms and availability of healthcare resources, both perioperatively and outside of hospital.
Hospitals should ideally adapt their services according to local health needs. Where certain socioeconomic groups are more populous and with specific healthcare requirements, appropriate services should be established amongst healthcare providers to ensure patients receive the best possible care.
Role of Age & Sex
Despite the increased incidence of appendicitis in younger male patients, it has previously been reported that female patients undergo twice as many appendicectomies (25). Our analysis reported a higher rate of positive histology for appendicitis in male patients. The differential diagnosis of right-sided lower abdominal pain is much wider in the young female patient, with consideration required towards potential gynecological causes of abdominal pain. Furthermore, in younger patients, there may be a greater likelihood for the operating surgeon to perform a 'prophylactic appendicectomy' in which a normal appendix is removed in order to minimize the risk of appendicitis at a later time (26).
In addition, the uptake of laparoscopy particularly for both the investigation and management of acute right sided abdominal pain continues to increase and is now becoming the standard of care for the operative management of appendicitis. Combined with pressures to increase the throughput of emergency surgical services in the NHS, the threshold for laparoscopy may be decreasing and therefore partially explain higher negative appendicectomy rates seen in some centers such as the setting of this study. However, continuing variations in practice amongst surgeons makes this a difficult conclusion to make definitively.
Also of importance is the common practice in the UK of confirming acute appendicitis radiologically in older patients before proceeding to surgery, irrespective of gender. Although not stipulated in any published UK guidelines, most patients of more advanced age undergo cross sectional imaging in order to rule out cecal pathologies such as right sided colonic malignancy before deciding on operative management. This results in a significantly lower negative appendicectomy rate within the elderly cohort.
The role of age and gender in the management of undifferentiated right lower quadrant pain was investigated as part of the UK Right Iliac Fossa Treatment (RIFT) audit (27). This cross-sectional snapshot audit took place more recently than the period of this observational study and across a more generalizable population. Prior to moving forward with further research into the age and sex associations suggested by this study, there would be utility in establishing whether a more national investigation such as the RIFT study corroborates the findings of this single center longitudinal analysis.
Study Limitations
A key limitation of this study was that data were only collected from patients who underwent appendicectomy. Patients who were successfully treated non-operatively or who presented to a healthcare professional with symptoms suggestive of appendicitis, but never reached the care of an acute surgical team, were not included.
Length of stay was measured in whole calendar days between the admission and discharge date, rounded up where required. There is potential therefore for the artifactual increase in length of stay by one day for patients who underwent surgery overnight. Nonetheless, whilst this may affect the applicability of length of stay as an absolute measure when comparing this study's dataset with that of another study, the associations observed should be unaffected as this potential systematic error would apply to all of the study population.
The reclassification of patients into one of nine predetermined ethnic categories due to such a significant heterogeneity in initial ethnicity coding resulted in a loss of more detailed information about patient's nationalities. Younger patients in particular are more likely to have been born in the UK and therefore potentially not have the same risk factors as those from their family's country of origin.
Although the study period was seven years, only the Index of Multiple Deprivation 2015 was used. New indices of multiple deprivation are not released annually however and although social deprivation is evidently difficult to quantify, the Index of Multiple Deprivation scores are considered the best method available in the UK (16). Nonetheless, measuring socioeconomic deprivation in younger patients is more difficult as some of the variables used in determining their IMD score are either not applicable to them or may not pertain to their primary residence.
A typical example of this would apply to a university student who should not necessarily be classed as unemployed with no income and in addition may not normally reside in the catchment area of the studied hospital.
In this study, a negative appendicectomy was regarded as the absence of inflammation of the appendix as stated on the histopathology report, irrespective of the final diagnosis. Non-inflammatory but nonetheless pathological findings on histology were therefore regarded as normal. These included diagnoses such as neuroendocrine tumors, Meckel's diverticulitis with a non-inflamed appendix and parasitic infection without inflammation. The effects of these pathologies on length of stay in hospital and their different incidences across demographic groups may affect the results of this study. However, these pathologies are all rare and would likely not affect a sufficiently large sample size significantly.
Conclusion
Overall, this retrospective longitudinal analysis is the first to highlight South Asian patients as a particular demographic group at risk of increased length of stay for an admission with appendicectomy. The differential effect of social deprivation across ethnicities observed in this study has not been reported in other studies investigating the patient pathway for acute appendicitis.
Further population-based research is required in order to investigate possible causative factors that cannot be proven conclusively from observational data, both locally to establish potential region-specific inequalities and in other areas to ascertain whether similar associations are seen.
Finally, we report a higher negative appendicectomy rate amongst young female patients compared to their male counterparts, most likely due to differences in diagnostic difficulty between these two groups and surgeon specific preferences on the operative management of a normal appearing appendix.
Does an open access journal about vegetation still make sense in 2020?
The current issue is the first one of the new version of Plant Sociology, the international peer-reviewed journal of the "Società Italiana di Scienza della Vegetazione" (SISV). The technical management of the journal has been entrusted to the editorial platform Pensoft, and the Editorial Board has been largely reshaped, now also including a dedicated Social media team. Plant Sociology is focused on all aspects of vegetation from the phytocoenosis to the landscape level, through time and space, at different geographic and ecological scales; the journal contributes to disseminating issues related to the management and conservation of plant communities and plant diversity. All articles are freely available in Open Access (OA) with an affordable article processing charge (APC). In the present Editorial, we briefly discuss the importance of opening the access to knowledge and data about vegetation. We believe that disseminating plant science might be a precious tool for understanding ecological processes, modelling future trends and supporting decision makers. The introduced technological improvements will hopefully allow larger visibility and circulation for the papers published in Plant Sociology.
Introduction
June 2020: the first issue of Plant Sociology, the international journal of the "Società Italiana di Scienza della Vegetazione" (SISV), sees the light in its new version. It is an important step for our journal, implying many changes and some challenges.
The first, prominent aspect is that the technical management of the journal has been entrusted to Pensoft, an independent and innovative editorial platform, which will take charge of all aspects regarding production, submission system and online publishing. SISV and the Editorial Board of the journal maintain, respectively, the ownership and the scientific management, including the entire peer-review process. This major improvement will hopefully allow greater visibility and circulation for the papers published in Plant Sociology.
The second, equally important point is that, following the journal's policy of recent years, all the articles of Plant Sociology are freely available in Open Access (OA) mode. The Editorial Board deeply believes in the importance of open, free and wide dissemination of scientific results. To maintain this possibility, authors are now asked to contribute a very reasonable article processing charge (APC), further reduced for SISV members. It is perhaps worth saying that, being owned by a scientific society, Plant Sociology is a non-profit journal and all the requested charges cover the technical support provided by the publisher.
The whole Editorial Board has been reorganized, now including also a dedicated Social media team, and keeping some important continuity elements, such as the presence of Edoardo Biondi, now Consultant Editor, who for more than 25 years has worked with incomparable dedication to the management and improvement of the journal.
Besides these crucial points, Plant Sociology maintains its long-lasting vocation to focus on all aspects of vegetation from phytocoenosis to landscape level, through time and space, at different geographic and ecological scales, hosting the results of studies centred on plant communities and habitats modelization, interpretation, assessment, mapping, management, conservation and monitoring.
Open Access evolution and perspectives
Since Swartz (2008) stated that "sharing is a moral imperative", the "open science" topic became more and more central in the scientific community. Reality is that the popularity of sharing tools such as the controversial Sci-Hub, containing more than 47 million pirated research papers (Androcec 2017), or the moderate Research Gate (www.researchgate.net/), is evidently increasing all over the world (Bohannon 2016;Himmelstein et al. 2017;Nicholas et al. 2018), representing a serious threat to editorial companies and posing challenging questions to the whole scientific world (Anderson 2018).
Alexandra Elbakyan's Sci-Hub, in the words of Bohannon (2016) "an awe-inspiring act of altruism or a massive criminal enterprise, depending on whom you ask", certainly inspired many authors and editors to take the road of OA.
Additionally, international research funding sources are increasingly pushing towards open access as the preferred or even mandatory way to publish research results and scientific data. For instance, under Horizon 2020, each beneficiary must ensure open access to all peer-reviewed scientific publications relating to its results (Article 29.2 of the Model Grant Agreement; European Commission 2017).
OA is not always synonymous with easy and equal access to publishing, since unaffordable costs sometimes lie behind this praiseworthy policy, as indicated by Van Noorden (2013), who also demonstrated that OA costs are weakly related to the actual influence of journals and articles.
In a world where scientific publishing has developed into an industry, it has been shown that in the Natural and Medical Sciences (NMS) there is still a moderate level of concentration of scientific papers in the hands of a few big publishers, highlighting a relative independence that has been attributed to the strength of scientific societies (Larivière et al. 2015).
As a matter of fact, SISV decided to support (also economically) the OA mode for its official journal, choosing for "gold open access" i.e. making the journal's content freely available for readers on the publisher's website (at the same time, Plant Sociology still retains a printed version, only for libraries and official repositories). Similar decisions have been taken, e.g., by the Italian Botanical Society (SBI) with the journal Italian Botanist, formerly Informatore Botanico Italiano (Peruzzi and
About the journal
Plant Sociology has succeeded Notiziario della Società Italiana di Fitosociologia (1964-1989, ISSN 1120-4605) and, later, Fitosociologia (1990, ISSN 1125), the historical journals of the SISV. In this long timespan, which started 56 years ago, the journal published a total of 819 scientific papers organized in 58 volumes and 102 issues. The name Plant Sociology is a tribute to the founder of Phytosociology, Josias Braun-Blanquet (1884-1980), who used it as the title of his major monographic work "Pflanzensoziologie: Grundzüge der Vegetationskunde" (Braun-Blanquet 1928).
Plant Sociology is an international, peer-reviewed OA journal. It publishes original research articles dealing with all aspects of vegetation, from plant community to landscape level, including dynamic processes and community ecology. It favours papers focusing on Plant Sociology and vegetation survey for developing ecological models, vegetation interpretation, classification and mapping, environmental quality assessment, plant biodiversity management and conservation, EU Annex I habitats interpretation and monitoring, on the ground of rigorous and quantitative measures of physical and biological components.
The journal is open to territorial studies at different geographic scales and accepts contributions dealing with applied research, provided they offer new methodological perspectives and a robust, up-to-date vegetation analysis. The main subjects are:
• Phanerogamic and cryptogamic vegetation survey and classification
• Vegetation mapping
• Plant ecology and synecology
• Plant community traits
• Plant community conservation and management
• Syntaxonomy and nomenclature
• Biostatistic analysis and data banks
• Habitat directive
• Alien plant invasions
The types of article hosted by the journal include Research articles, Review articles, Short communications, Editorials, and Corrigenda and/or Addenda.
Each issue contains contributions to the column "Habitat Records", a specific section of the journal dedicated to providing data and supporting the implementation of the 92/43/EEC "Habitat" Directive in Europe (Gigante et al. 2019). The journal also gives space to papers presenting the results of collaborative projects (e.g. Viciani et al. 2020).
Since 2012, Plant Sociology has been indexed in the international databases Scopus (Source id: 21100211323) and Web of Science (Biological Abstracts, BIOSIS Preview).
Plant Sociology represents one of the few editorial spaces open to the publication of original research articles on all aspects of Vegetation Science, contributing to the dissemination of issues related to the management and conservation of plant diversity. Its history has been built over the decades through many challenges successfully faced, thanks to the scrupulous work of the many collaborators who have contributed selflessly to its management.
The current European and global editorial scenario sees the role of large publishers expanding more and more at the expense of small editors and scientific communities. This is certainly one of the reasons for a certain drop in the number of articles published in the two annual issues of Plant Sociology, together perhaps with a general sense of disillusionment that pushes more and more young people towards the publishing giants, which have literally transformed the realm of knowledge into a market. This does not mean, however, that we have decided to abandon the field.
The renewal of Plant Sociology is a challenge that we have undertaken with conviction, aware of the difficulties and pitfalls that characterize the life of a scientific journal today. Entrusting the technical management of the journal to a professional company aims to improve its dissemination and attractiveness, but also to allow us to focus our efforts only on scientific content. Until now, the management of the journal has rested on a small editorial staff that took on all the procedures necessary for producing a modern scientific periodical; today this "home-made" approach is no longer sufficient to guarantee adequate circulation of our authors' articles in an editorial scene that has changed deeply.
As a result of the recently started partnership with Pensoft and thanks to the high-tech services provided by the scholarly publishing platform ARPHA, the first 2020 papers are now available on the new website of Plant Sociology. All pre-2020 issues and articles remain available on the former website http://www.scienzadellavegetazione.it/sisv/rivista/rivista_elenco.jsp.
We believe and hope that more authors will want to help support the improvement and growth of Plant Sociology, actively collaborating in the relaunch of the journal, choosing it again and again for the publication of the results of their research.
Conclusive remarks
Going back to the question in the title: does an OA journal about vegetation science still make sense in 2020? Our answer is: definitely yes. Openly and freely disseminating research and knowledge about plant diversity and living systems should be one of the major targets (if not the most prominent) of human societies. At the present time, a frightening epidemic (coronavirus disease 2019, COVID-19) is spreading across our planet, still claiming victims and undermining the foundations of a development system that for decades has neglected the signals coming from the other components of the living world. The harmful consequences of habitat fragmentation and ecosystem disruption have long been predicted and proved, demonstrating the negative impacts of humans on natural systems (Corlett et al. 2020). Scientists are already suggesting how much humans can learn from COVID-19 in order to effectively drive new conservation strategies (Ervin 2020; Pearson et al. 2020).
We are not so naïve as to believe that humankind will emerge improved from this catastrophe. However, we can rely on knowledge in the hope of removing or mitigating the impacts of our species on ecosystems. A journal focusing on all aspects of natural, semi-natural and anthropic plant systems, from basic investigation to their modelization, assessment, mapping, management, conservation and monitoring, is certainly a precious tool to detect environmental imbalances, understand processes and outline predictive scenarios that support decision makers. In this sense, we believe that more and more OA journals focused on biodiversity should find space in the academic editorial world, because only through a deep knowledge of the processes and functions of a complex planet can humankind find a way to survive in good health.
ANALYSIS OF WAVEFORM RETRACKING METHODS IN ANTARCTIC ICE SHEET BASED ON CRYOSAT-2 DATA
Satellite altimetry plays an important role in many geoscientific and environmental studies of the Antarctic ice sheet. The ranging accuracy is degraded near coasts or over non-ocean surfaces due to waveform contamination. A post-processing technique, known as waveform retracking, can be used to retrack the corrupted waveforms and in turn improve the ranging accuracy. In 2010, the CryoSat-2 satellite was launched with the Synthetic aperture Interferometric Radar ALtimeter (SIRAL) onboard. Satellite altimetry waveform retracking methods are discussed in this paper. Six retracking methods, including the OCOG method, the threshold method with 10%, 25% and 50% threshold levels, and the linear and exponential 5-β parametric methods, are used to retrack CryoSat-2 waveforms over the transect from Zhongshan Station to Dome A. The results show that the threshold retracker performs best when both the waveform retracking success rate and the RMS of the retracking distance corrections are considered. The linear 5-β parametric retracker gives the best waveform retracking precision, but cannot make full use of the waveform data.
INTRODUCTION
Satellite altimetry is used to detect the earth and its variation precisely and periodically on a large scale. The satellite altimetry data are widely used to study the earth gravity field model, mean sea level, oceanic tidal models and seabed topography. In Polar Regions, satellite altimetry has proven to be a valuable tool for many geoscientific and environmental studies. It can, for example, be used for ice sheet mapping and mass balance studies (Bamber et al. 2009, Zhang et al. 2015, Li et al. 2016). It is also used to detect sea ice changes in polar areas (Yuan et al. 2016).
However, the echo waveforms of the altimetry pulse are often contaminated by coastal terrain, islands, oceanic tides, geophysical conditions and hardware delay over non-ocean areas. Such waveforms are so irregular that the distance between the altimetry satellite and its nadir point cannot be precisely estimated from them. In order to calculate precise distances, the middle point of the waveform leading edge should be repositioned, and the distance correction should then be re-estimated by comparing the retracked middle point of the leading edge with the pregiven gate; this is called the waveform retracking technique of radar satellite altimetry. In this study, we apply six retracking methods (the OCOG method, the threshold method with 10%, 25% and 50% threshold levels, and the linear and exponential 5-β parametric methods) to the CryoSat-2 return waveforms over the transect from Zhongshan Station to Dome A.
WAVEFORM RETRACKING TECHNIQUE
Methods of waveform retracking can be classified into two categories: one is based on functional fit and the other based on statistics.
OCOG algorithm
The OCOG algorithm was proposed in 1986. It estimates the amplitude and width of a waveform purely statistically from the power samples, without assuming a physical echo model, and is therefore robust even for irregular waveforms.

Threshold retracking algorithm

In the threshold retracking method, the threshold level is determined based on the thermal noise and the amplitude (maximum waveform power) calculated with OCOG, and the retracked point is obtained by linearly interpolating the neighbouring samples close to the intersecting threshold level on the leading edge. The threshold method is a statistical method and not based on physical characteristics, but it can give a more precise retracking gate than OCOG. The corresponding equations are as follows:

T_L = N_y + q (A - N_y),
n_ret = (k - 1) + (T_L - y(k-1)) / (y(k) - y(k-1)),

where y(i) is the power of the i-th gate; N_y is the average power of the first 5 gates; T_L is the threshold level; q is the chosen threshold (10%, 25% or 50%); A is the amplitude; k is the first gate whose power exceeds T_L; and n_ret is the retracked middle point of the leading edge.
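To make the procedure concrete, the following is a minimal Python sketch of the OCOG amplitude estimate and the threshold retracker described above; the function names, the noise-gate count and the guard for waveforms that never exceed the threshold are illustrative choices, not details taken from the authors' processing chain.

```python
import numpy as np

def ocog_amplitude(y):
    """OCOG amplitude estimate from the waveform power samples."""
    y = np.asarray(y, dtype=float)
    return np.sqrt(np.sum(y**4) / np.sum(y**2))

def threshold_retrack(y, q=0.5, noise_gates=5):
    """Retracked gate n_ret for a threshold level q (0.10, 0.25 or 0.50)."""
    y = np.asarray(y, dtype=float)
    noise = y[:noise_gates].mean()                        # thermal noise N_y from the first gates
    t_level = noise + q * (ocog_amplitude(y) - noise)     # threshold level T_L
    above = np.nonzero(y > t_level)[0]
    if above.size == 0:
        raise ValueError("no gate exceeds the threshold level")
    k = max(above[0], 1)                                  # first gate whose power exceeds T_L
    # linear interpolation between gates k-1 and k gives the retracked gate
    return (k - 1) + (t_level - y[k - 1]) / (y[k] - y[k - 1])
```

The retracked gate is then converted into a range correction as described in the "Retracking distance correction" section below.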
β parametric fitting algorithm
The β parametric fitting algorithm was first put forward by Martin et al. in 1983 at the National Aeronautics and Space Administration (NASA). The method uses a parametric function to fit the altimetry waveform based on the Brown mean impulse echo model; the β parameters are estimated iteratively, starting from good initial values, with a least-squares adjustment or a maximum likelihood estimator. The 5-β parametric method is mainly used to process complex waveforms returned from a single reflecting surface, as shown in Figure 2. If the waveform has a spike-like shape, the 5-β parametric algorithm may fail to converge in the iterative procedure and cannot give correct results. The linear 5-β parametric equation is

y(t) = β1 + β2 [1 + β5 Q(t)] P((t - β3)/β4),

and the exponential 5-β parametric equation is

y(t) = β1 + β2 exp(-β5 Q(t)) P((t - β3)/β4),

where y(t) is the sampled power at time t; β1 is the thermal noise of the returned waveform; β2 is the returned pulse power of the leading edge; β3 is the middle point of the leading edge, i.e. the gate at half power between the noise level and the maximum power, whose difference from the pregiven gate gives the retracking distance correction; β4 is the rise-time parameter of the leading edge; β5 is the slope of the trailing (ramping) edge; P(z) is the error function; and Q(t) is a linear function that fits the gradually attenuating echo in the trailing edge. The five β parameters are estimated by fitting these models to the measured waveform.
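As a sketch only, the linear 5-β model above can be fitted with SciPy as follows; the initial-guess heuristics and the use of curve_fit are illustrative assumptions rather than the estimator actually used by the authors.

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

def linear_5beta(t, b1, b2, b3, b4, b5):
    """Linear 5-beta model: y(t) = b1 + b2*(1 + b5*Q(t))*P((t - b3)/b4)."""
    z = (t - b3) / b4
    P = 0.5 * (1.0 + erf(z / np.sqrt(2.0)))                    # error-function leading edge
    Q = np.where(t < b3 + 0.5 * b4, 0.0, t - (b3 + 0.5 * b4))  # trailing-edge ramp
    return b1 + b2 * (1.0 + b5 * Q) * P

def fit_linear_5beta(y):
    """Fit the model to one waveform; returns (b1..b5), with b3 the retracked gate."""
    y = np.asarray(y, dtype=float)
    t = np.arange(y.size, dtype=float)
    noise = y[:5].mean()
    p0 = [noise,                                  # b1: thermal noise from the first gates
          y.max() - noise,                        # b2: leading-edge amplitude
          float(np.argmax(y > 0.5 * y.max())),    # b3: rough half-power gate
          3.0,                                    # b4: rise time in gates
          0.0]                                    # b5: trailing-edge slope
    popt, _ = curve_fit(linear_5beta, t, y, p0=p0, maxfev=5000)
    return popt
```

A non-convergent fit (for example on spike-like waveforms, where curve_fit raises a RuntimeError) can simply be flagged and excluded, which is consistent with the reduced success rates reported for the 5-β retrackers.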
Retracking distance correction
After retracking the waveforms, the middle point of the leading edge can be determined. According to the pregiven gate and the speed of light, the retracking distance correction is

Δd = (n_ret - n_tr) · t_k · c / 2,

where t_k is the time interval for one gate; c is the speed of light, c = 299792458 m/s; n_ret is the retracked middle point of the leading edge; and n_tr is the pregiven gate.
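As a small worked example, assuming a nominal gate interval of 3.125 ns (the value implied by a 320 MHz receiver bandwidth; the interval actually used in the authors' processing is not stated here) and hypothetical gate values:

```python
c = 299792458.0            # speed of light, m/s
t_k = 3.125e-9             # assumed gate interval, s (illustrative)
n_ret, n_tr = 34.6, 32.0   # hypothetical retracked and pregiven gates

delta_d = (n_ret - n_tr) * t_k * c / 2.0   # two-way travel, hence the factor 1/2
print(f"retracking distance correction: {delta_d:.3f} m")   # about 1.22 m
```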
DATA AND METHOD
CryoSat-2 data and study areas
Figure 2. Schematic diagram of the 5-β parametric method.

The CryoSat-2 satellite was launched on April 8, 2010, carrying a newly developed altimeter operating in Ku-band. The SIRAL instrument samples the surface every 300 m along track using three different measurement modes: LRM, SAR and SARIn. The low resolution mode (LRM) is used over oceans and the flat interior of the ice sheets; LRM is similar to the operation of conventional pulse-width-limited altimeters. In the synthetic aperture (SAR) and synthetic aperture interferometric (SARIn) modes, SIRAL samples the surface with a higher pulse repetition frequency (18 181 Hz) than in LRM (1970 Hz). SARIn measures the steep areas at the margins of the ice sheet and ice caps, whereas the SAR mode is used over sea ice to reveal ice freeboard by distinguishing leads and ice floes. In this study, we use the CryoSat-2 L1B product provided by ESA, which contains the precise orbit of the satellite, the back-scattered radar waveforms, the tracker range, and the coherence and phase difference for SARIn mode. The product also contains additional information, such as geophysical and tidal corrections and quality flags.
Figure 3
Figure 3 shows the study areas with the CryoSat-2 ground tracks over the transect from Zhongshan Station to Dome A. The waveform retracking success rates with the two 5-β parametric fitting methods are substantially below 100% (one is 22.1%), while the success rates for the other retracking methods are all 100%. The RMSs of the retracking corrections from the two 5-β parametric methods are small, which indicates that these two methods can give good retracking results. The RMS for the threshold method is smallest when the threshold is 50%. Figure 4 shows the histograms of the retracking results for LRM waveforms; the gray line indicates the onboard tracking point. The retracking results from the two 5-β parametric methods and from the threshold method with a 50% threshold level are close to a normal distribution.
Figure 4 .
Figure 4. Histograms of retracking results for LRM waveforms, the gray line indicates the pregiven gate.
Figure 5 .
Figure 5. Histograms of retracking results for SARIn waveforms, the gray line indicates the pregiven gate.
In this paper, we evaluate waveform retracking methods in Antarctica by retracking CryoSat-2 waveforms. We applied six retracking algorithms, including the Off Center of Gravity (OCOG) method, the threshold retracking method with 10%, 25% and 50% threshold levels, and the linear and exponential 5-β parametric methods, to the return waveforms over the transect from Zhongshan Station to Dome A.
Table 1 .
Statistics of waveform retracking results
Imatinib reduces proliferation of leukemic cells in vitro
Introduction: Philadelphia chromosome is a cytogenetic marker for chronic myeloid leukemia (CML). The main aims of this study were to assess the positive responses, side-effects and survival of CML patients treated with imatinib mesylate. Methods: All recently diagnosed CML patients who were treated with imatinib were recruited to this study. We investigated hematological and cytogenetic parameters by CBC, FISH and RT-PCR individually. Results: Of the 10 cases, 7 (70%) were males and 3 (30%) were females. Four (40%) of the cases were analyzed retrospectively, and 8 cases (80%) exhibited general exhaustion (75%), fever (80%), and splenomegaly (80%). Indications of bleeding and rashes were rarely seen at presentation. The majority of the patients had a generally low risk profile (70%), 30% had intermediate risk, and no subjects exhibited high-risk CML; 9 subjects (90%) were in remission. One patient (10%) had been in remission for 3 years, 4 (40%) had been in remission for 6 years, one was in remission after 7 years and 5 (50%) were in remission after 10 years. Most of the patients (90%) exhibited a major molecular reaction after 6 years of treatment, and 42% of them remained in major molecular reaction after 10 years of treatment. No significant side effects associated with Imatinib treatment were reported by the patients. Imatinib treatment resulted in diminished expansion of CML CFU-GM cells. Conclusion: Imatinib mesylate is indicated for the treatment of Philadelphia chromosome-positive CP-CML with no significant adverse outcomes.

Correspondence to: Faris Q. Alenzi, PhD, Professor of Immunology, College of Applied Medical Sciences, Prince Sattam bin Abdulaziz University, Saudi Arabia. Email: fqalenzi@ksu.edu.sa

1. Naif Abdulla Alanazi, Dept. of Surgery, Prince Mohamed bin Abdulaziz Hospital, Riyadh, Saudi Arabia
2. Naif Enad Alanazi, Dept. of Medicine, King Salman Hospital, Riyadh, Saudi Arabia
3. Faisal Farhan J. Alanazi, Riyadh College for Dentistry, Riyadh, Saudi Arabia
4. Saud Altamimi, 5. Osama S. Alghamdi, 6. Abdulrahman Alanazi, 7. Mohamed Al-Shahrani, Dept. of Med Lab Sci., CAMS, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia
8. Mohamed W. Al-Rabea, College of Medicine, King Abdulaziz University, Jeddah, Saudi Arabia
9. Arfan Arshad, Dar Shefa Hospital, Riyadh, Saudi Arabia
10. Faris Q. Alenzi, Dar Shefa Hospital, Riyadh, Saudi Arabia, and Dept. of Med Lab Sci., CAMS, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia

Bangladesh Journal of Medical Science Vol. 16 No. 02 April 2017, pages 320-324.
Introduction
Philadelphia chromosome (Ph) results from the reciprocal translocation t(9;22)(q34;q11) of truncated chromosome 22 that is a hallmark of chronic myeloid leukemia (CML). This aberrant fusion gene encodes the breakpoint cluster region-proto-oncogene tyrosine-protein kinase (BCR-ABL) oncogenic protein, which leads to persistently enhanced tyrosine kinase activity with consequent cell proliferation, inhibition of differentiation and resistance to cell death [1][2][3][4][5][6]. Cytogenetic investigation of bone marrow samples has shown that 90-95% of CML patients are Ph chromosome positive 2 . CML develops as a clonal hemopoietic stem cell expansion, characterized by a chronic phase (CP) and an accelerated phase, followed by a blast crisis (BC) phase 3 . Each Bcr-Abl mRNA transcript is found in distinct phenotypes of CML, and these are predictive of responses to therapy and overall clinical outcome 7 . Imatinib mesylate (STI-571, Gleevec) is a well-established first-generation ABL tyrosine kinase inhibitor (TKI) whose use has dramatically improved the prognosis of CML, as reflected in decreased Bcr-Abl mRNA levels measured using real-time quantitative PCR (RQ-PCR) in CML patients. In contrast, in hematological responders the platelet counts fall in response to Imatinib treatment with no cytogenetic change evident. This suggests that the relevant CML clone was Imatinib sensitive but that there was insufficient normal haemopoiesis to sustain cytogenetic reactions. This study was performed to examine the clinical presentation, hematological reaction, molecular reaction, survival and unfavorable effects of Imatinib treatment of CML patients at our hospital.
Patients:
Ten patients (7 males, 3 females, aged 22-71 years) were recruited to the study. Each had been diagnosed with chronic-phase chronic myeloid leukemia (CP-CML) at our university hospital. This study was conducted between January and December 2015 at the Leukemia Research Unit, PSAU, Al-Kharj, Saudi Arabia. Age- and sex-matched control samples were acquired from bone marrow donated by volunteers giving cells for allogeneic transplantation or from volunteer blood donors. All patients were assessed by symptoms, peripheral blood smear examination and bone marrow analysis, with biopsies analyzed by quantitative RT-PCR. The oral dosage of Imatinib was 400 mg/day, administered between 2005 and 2015. Over this period, peripheral blood smears, complete blood count (CBC) and clinical examination were performed at regular intervals, with RT-PCR carried out at the last visit. Peripheral blood and bone marrow samples from patients with CP-CML were tested before and after Imatinib treatment for those subjects who were cytogenetic responders. The patients were considered as having a positive clinical reaction if they were free of manifestations and indications of CML.
Preparation of Mononuclear cells for Culture & CFU-GM culture and expansion:
Mononuclear cells (MNCs) were purified by density gradient centrifugation over Ficoll-Hypaque (1.077 g/ml; Nyegaard), washed with HBSS (GibcoBRL), resuspended in the same medium and adjusted to a concentration of 5×10^6 cells in 5 ml of MEM containing 15% FCS. We followed the same protocols previously published by Marley et al 5,8,9 . The susceptibility to Imatinib is expressed as the ratio between the AUC of the control samples and the AUC of the cells cultured following Imatinib treatment.

Statistics: Data were transferred to Microsoft Excel spreadsheets and statistical analysis was performed using the StatView SE+ package. Data distributions were analyzed by the Mann-Whitney U test and the Wilcoxon signed-rank test. Significance was calculated by the Spearman rank test. The AUC was computed using a Microsoft Excel spreadsheet.
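For illustration only, the summary statistics described above could be reproduced in Python roughly as follows; the colony counts below are invented, and the AUC-over-concentration formulation is an assumption about how the dose-response curves were summarised, since the original analysis was done in Excel and StatView.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical CFU-GM colony counts at increasing imatinib concentrations (invented data)
conc           = np.array([0.0, 0.1, 1.0, 5.0, 10.0])           # micromolar
control_counts = np.array([100, 98, 95, 90, 88], dtype=float)   # untreated cultures
treated_counts = np.array([100, 80, 55, 30, 20], dtype=float)   # imatinib-treated cultures

# Area under the dose-response curve (trapezoidal rule) as a summary of proliferation
auc_control = np.trapz(control_counts, conc)
auc_treated = np.trapz(treated_counts, conc)
print(f"susceptibility (AUC treated / AUC control): {auc_treated / auc_control:.2f}")

# Non-parametric comparison, as in the Mann-Whitney U test used by the authors
stat, p = mannwhitneyu(treated_counts, control_counts, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.3f}")
```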
Results
Nine (90%) out of twelve patients with Philadelphia-chromosome-positive end-stage CML were males and 25% were females (Table 1). The most common symptoms and clinical signs at presentation were exhaustion in 9 (90%), fever in 8 (80%), rash in 4 (40%), and splenomegaly in 8 (80%). The majority of the patients had a generally low risk profile (70%) and 30% had intermediate risk, with no subjects exhibiting high-risk CML according to the WHO scoring framework; 9 subjects (90%) were in remission. One patient had been in remission for 3 years, 4 (40%) had been in remission for 6 years, one was in remission after 7 years and 40% were in remission after 10 years. One patient had a deficient major molecular reaction (MMR) to Imatinib treatment 12 years after initial diagnosis. Most of the patients (80%) exhibited MMR after 6 years of treatment and 40% of them were in MMR after 10 years of treatment. Imatinib was well tolerated in these patients, with no adverse symptoms evident. The morphology of the lymphocytes differed in size and shape from normal, with some cells exhibiting a less mature phenotype. During the study period, none of the patients had increased numbers of lymphocytes in their bone marrow, with a mean lymphocyte count of 3%. The mid-range of lymphocytes was 7%; however, there was marked variability between patients. At study conclusion, the CML patients exhibited diminished B cell numbers in their bone marrow compared with normal control values (10% of lymphocytes versus 29%), with no immature or maturing forms identified. Patients with a suboptimal reaction to Imatinib had diminished numbers of B cells in the bone marrow, whereas patients who responded well to Imatinib treatment and had bone marrow lymphocytosis had normal or expanded numbers of B cells. In T cells, the CD4/CD8 ratio was typical and the proportion of regulatory T cells (Tregs) in bone marrow was almost identical in the various settings. The numbers of DC were equivalent to normal values in patients who responded well to Imatinib treatment. We also examined responses to Imatinib treatment in those patients who did not exhibit a strong response to Imatinib (data not shown). We observed a significant reduction in the AUC of cells treated with Imatinib compared with untreated controls (p=0.002, n=10) (Figures 1 and 2). Strikingly, Imatinib had no significant impact on NBM CFU-GM development, in contrast with its clear effects on CML CFU-GM formation. In vitro testing demonstrated clear differences between control and CML cell responses, including cell multiplication, attachment, and reactions to Imatinib and IFN-a. The AUC in CML is higher than in control cells, and was significantly reduced by Imatinib or IFN-a, although we observed wide variations in both responses. It is well established that Imatinib is a tyrosine kinase inhibitor that specifically inhibits the tyrosine kinase activity of ABL and BCR-ABL proteins and the development of CFU-GM colonies from CML patients.

Discussion

Chronic myeloid leukemia (CML) is a clonal myeloproliferative defect of the pluripotent immature stem cells with a rate of 1 per 100,000 in the west 10 . In contrast, the rate of occurrence in Saudi Arabia is not known. CML accounts for 6.2% of all leukemia, with 4.6% and 6.7% due to CML in males and females respectively 1 . The median age of patients diagnosed with CML is 67 years in the west, with a slight male predominance 12 . In contrast, in this study the median age was 39 years with slightly higher rates in males, although this is based on very small numbers. Prior to the introduction of Imatinib, the usual chemotherapy regime for CML was based on Busulphan or hydroxyurea. However, the side-effects were numerous, with a median survival of under 3 years, hematological reduction of less than 80% and no cytogenetic remission 10 . Allogeneic stem cell transplantation was the main curative treatment for CML, but it was challenging, with a high death rate and unfavorable side-effects. Interferon alpha was the treatment of choice before the development of tyrosine kinase inhibitors. The current treatment of choice for CML patients is Imatinib 400 mg daily 13 . The inhibition of BCR-ABL tyrosine kinase is a very successful treatment for Ph+ CML when compared with previously available treatments.
In the present study, we examined the outcomes of Imatinib treatment in 12 patients over a long time period. The major outcome was that none of our patients reported any significant side-effects, despite the fact that the length of treatment ranged from 3 to 10 years, with 90% having been treated for more than 6 years. These outcomes are in agreement with a number of local reports from Saudi Arabia [4][5][6][7][8][9][10][11][12][13][14][15][16] .
A number of studies have demonstrated that following 6 years of treatment with Imatinib, 86-88% of patients with Ph+ CML remained in MMR, which was characterized as a 3-log decrease in the quantity of the Bcr-Abl transcript and has been established as an objective marker for a positive outcome in clinical studies [17][18] . Although this study is based on small numbers, 83% of our patients remained in MMR following 6 years of treatment and 42% remained in MMR after 10 years of treatment, with few adverse effects reported and no discontinuation of treatment with Imatinib. Our study is in agreement with studies which demonstrated an overall survival rate in CML of 95.2% after 8 years [19][20] . We also explored whether the self-replication of CML CFU-GM could be decreased to the levels seen in normal bone marrow CFU-GM following treatment with different concentrations of Imatinib. There was a significant reduction in CML CFU-GM replication to the levels seen in normal bone marrow, with no effect on normal cell proliferation. These outcomes are in agreement with those reported by Gordon and colleagues 9 . These findings demonstrate that Imatinib inhibits progenitor cell multiplication, as shown by the AUC changes. CML-related CFU-GM exhibit p210 expression, improved progenitor cell function and a positive reaction to remedial treatment in CML. The possibility that diverse downstream signaling pathways might be proportionately inactivated by treatment with Imatinib is supported by our preliminary findings on the impacts of pathway inhibitors in individual patients. Our data have targeted the PI3-kinase pathway, and we propose that downstream targets such as ERK and p70 might be differentially expressed in CML patients. These results support the use of orally administered Imatinib as an effective treatment with few adverse effects compared with the alternative option of IFN-a treatment of CP-CML.
Conclusion
Imatinib mesylate is an effective, well-tolerated medication over a 10-year period of treatment and follow-up of patients with Ph+ CML.

Acknowledgement

This project was supported by a research grant from the Deanship of Scientific Research at Prince Sattam bin Abdulaziz University, Saudi Arabia (ref no: RU-2015-101). Special thanks to Professors M. Alrabea (Jeddah) and W. Tamimi (Riyadh) for providing samples.
Table 1. Patient characteristics
Total number of patients: 10
Laparoscopic-Assisted Percutaneous Endoscopic Gastrostomy Reduces Major Complications in High-Risk Pediatric Patients
Purpose Percutaneous endoscopic gastrostomy (PEG) is a safe method to feed patients with feeding difficulty. This study aimed to compare the outcomes of conventional PEG and laparoscopic-assisted PEG (L-PEG) placement in high-risk pediatric patients. Methods In our tertiary pediatric department, 90 PEG insertions were performed between 2014 and 2019. Children with severe thoracoabdominal deformity (TAD), previous abdominal surgery, ventriculoperitoneal (VP) shunt, and abdominal tumors were considered as high-risk patients. Age, sex, diagnosis, operative time, complications, and mortality were compared among patients who underwent conventional PEG placement (first group) and those who underwent L-PEG placement (second group). Results We analyzed the outcomes of conventional PEG placement (first group, n=15; patients with severe TAD [n=7], abdominal tumor [n=6], and VP shunts [n=2]) and L-PEG placement (second group, n=10; patients with VP shunts [n=5], previous abdominal surgery [n=4], and severe TAD [n=1]). Regarding minor complications, 1 (6.6%) patient in the first group underwent unplanned PEG removal and 1 (10%) patient in the second group had peristomal granuloma. We observed three major complications: colon perforation (6.6%) in a patient with VP shunt, gastrocolic fistula (6.6%) in a patient with Fallot-tetralogy and severe TAD, and pneumoperitoneum (6.6%) caused by early tube dislodgement in an autistic patient with severe TAD. All the three complications occurred in the first group (20%). No major complications occurred in the second group. Conclusion In high-risk patients, L-PEG may be safer than conventional PEG. Thus, L-PEG is recommended for high-risk patients.
INTRODUCTION
According to the European Society for Clinical Nutrition and Metabolism guidelines, gastrostomy placement is indicated in all patients requiring supplementary feeding for >2-3 weeks. Enteral tube feeding aids in avoiding further body weight loss, correcting nutritional deficiencies, promoting growth in children, and improving patients' quality of life [1].
Percutaneous endoscopic gastrostomy (PEG) was first described in 1980 by Gauderer [2]. Currently, PEG is widely used worldwide; however, the rate of adverse effects is not low [3]. In the past decades, various technical modifications have been proposed to reduce complications. Techniques such as image-guided gastrostomy, introducer PEG, and singlestage PEG buttons or tubes have the advantage of avoiding the oropharynx and esophagus and thus, prevent the carriage of microorganisms to the peristomal site [3]. These variants of the push technique are useful in the case of esophageal tumors or surgery and can be performed even in smaller children when the internal fixation plate of the PEG is extremely large. A second intervention or anesthesia is not required to replace the tube in the push technique.
Laparoscopic guidance is useful in patients with severe TAD, hepatomegaly, or previous abdominal surgery, because the site of the puncture is under visual control; thus hepatic or colonic interposition and vascular injuries are avoidable, and adhesions can be released easily [4]. In laparoscopic-assisted gastrostomy (LAG), a gastrostomy tube is inserted laparoscopically by a surgeon. This technique is popular and can be used during laparoscopic fundoplication. In laparoscopic-assisted PEG (L-PEG), the original pull-through technique is performed under laparoscopic and endoscopic guidance. In L-PEG, the laparoscopy provides an intra-abdominal view to the endoscopist. This help is crucial in high-risk patients in whom transillumination of the abdominal wall is inadequate.
This study aimed to analyze the outcomes of conventional PEG and L-PEG in high-risk patients in our tertiary pediatric center.
MATERIALS AND METHODS
A total of 90 PEG insertions were performed between January 2014 and December 2019 in our tertiary pediatric gastroenterological and surgical centers. Patients who underwent open, LAG, and one-step gastrostomy placements were excluded from the study. We retrospectively analyzed 25 of 85 high-risk patients (patients with severe thoracoabdominal deformity [TAD], previous abdominal surgery or abdominal tumor, and ventriculoperitoneal [VP] shunt) with respect to age, sex, diagnosis, indication for surgery, operative time, minor and major complications (intraoperative/postoperative), and mortality.
This study was conducted in accordance with the Declaration of Helsinki and the recommendations of the 2015 World Health Organization (WHO) guidelines. The study protocol was approved by the Human Investigation Review Board of the University of Szeged, Albert Szent-Györgyi Clinical Center (Approval No. WHO 4015). Written informed consent was obtained from all patients.
Original pull technique
All PEG procedures were performed under general anesthesia using a flexible gastroscope (Fujinon EG-530WR [outer diameter: 9.4 mm] or Fujinon EG-530N [outer diameter: 5.9 mm]; Fujinon, Wayne, NJ, USA). The stomach was insufflated. After transillumination, a 5-mm skin incision was made by the surgeon at the appropriate site of the anterior abdominal wall. After puncture and air aspiration, a guidewire was passed through the cannula sheath into the stomach and was grasped and pulled out through the oropharynx along with the gastroscope. The loop of the gastrostomy tube was fixed to the guidewire and pulled back through the esophagus into the stomach and out through the puncture site until the internal fixation plate was adjacent to the anterior gastric wall.
L-PEG
An open (Hasson) technique was used to gain infraumbilical access to establish pneumoperitoneum by insufflating carbon dioxide at 1-3 L/min until an intra-abdominal pressure of 8-12 mmHg was achieved. A 5-mm port and 30° optic device were placed and abdominal exploration was performed. If the abdominal cavity was adhesion-free, the conventional PEG procedure was performed under gastroscopic and laparoscopic visual control. However, in the case of adhesions, adhesions were released using 3-mm instruments introduced through separate working ports and thereafter, the gastrostomy tube was inserted using the original pull technique.
RESULTS
A total of 25 high-risk patients underwent PEG tube placement between January 2014 and December 2019. Patients who underwent open, one-step, and LAG were not included in the analysis. This retrospective study included 15 (60%) boys and 10 (40%) girls with a mean age of 70 months (range: 2.5 months to 17.5 years).
These 25 high-risk patients were divided into two groups. The first group comprised 15 (60%) patients who underwent conventional PEG placement with the pull technique only under endoscopic guidance. The second group comprised 10 (40%) patients who underwent L-PEG placement under both endoscopic and laparoscopic guidance.
In the first group, the mean age of the patients was 71 months (range: 2.5 months to 17.5 years) and the boy:girl ratio was 9:6. In the second group, the mean age of the patients was 57 months (range: 10 months to 14 years) and the boy:girl ratio was 6:4 ( Table 1).
Indications for gastrostomy in all cases were feeding difficulties or malnutrition.
Risk factors in the first group were severe TAD (n=7), abdominal tumor (n=6), and VP shunts (n=2). Indications for laparoscopic guidance in the second group were VP shunts (n=5), previous abdominal surgeries (n=4; duodenal atresia, previous gastrostomy, left nephrectomy because of Wilms tumor, and tumor biopsy of rhabdomyosarcoma), and severe TAD (n=1). Adhesions were found in three (30%) patients, and they were released laparoscopically. There was no need for conversion.
The mean operative time for the conventional PEG procedure (first group) was 23 minutes (range: 14-35 minutes), whereas that for the L-PEG procedure (second group) was 46 minutes (range: 32-80 minutes). Welch's two-sample t-test revealed a significant difference between the lengths of the two procedures: the mean operative time of L-PEG was significantly (p=0.001) longer than that of conventional PEG, especially when adhesiolysis was required (60-80 minutes).
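As a brief illustration of the comparison reported above, Welch's two-sample t-test can be run in Python as follows; the per-patient operative times are hypothetical values chosen to fall within the reported ranges, since the individual data are not given in the paper.

```python
import numpy as np
from scipy.stats import ttest_ind

# Hypothetical operative times in minutes (invented, within the reported ranges)
peg_times   = np.array([14, 18, 20, 21, 22, 23, 24, 25, 26, 27, 28, 30, 32, 34, 35], dtype=float)
l_peg_times = np.array([32, 35, 38, 40, 42, 45, 48, 55, 65, 80], dtype=float)

# Welch's t-test does not assume equal variances between the two groups
t_stat, p_value = ttest_ind(l_peg_times, peg_times, equal_var=False)
print(f"Welch t = {t_stat:.2f}, p = {p_value:.4f}")
```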
After PEG placement, refeeding was started with water at 8 hours followed by formula at 24 hours in both the groups. The refeeding time did not significantly differ between the two groups. Hospital stay depended on refeeding time and underlying diseases and not on the operative technique.
Adverse effects were classified as minor or major according to the European Society for Pediatric Gastroenterology, Hepatic and Nutrition guidelines [5]. Minor complications occurred in two (8%) patients. In the first group, one (6.6%) patient underwent unplanned removal of the tube. The skin opening was closed immediately after unplanned removal and the internal fixation plate was emptied with a stool. In the second group, the occurrence of peristomal granuloma was noted in one (10%) patient.
We observed three major complications: transverse colon perforation, gastrocolic fistula, and pneumoperitoneum. All the three complications occurred in the first group (20%). No major complications (0%) were observed in the second group.
Regarding lethal outcome, one patient in the first group with severe comorbidities died because of severe outcomes of his general condition long after the postoperative period. However, no association was found between the fatal outcome and the operation.
DISCUSSION
Tube feeding is the method of choice when enteral nutrition is recommended and oral intake is insufficient. Previously, open gastrostomies were performed by surgeons through laparotomy. A Pezzer catheter was inserted into the stomach and fixed with a double-layer purse-string suture. Thereafter, the tube was brought out through a stab incision in the abdominal wall [2].
After PEG was first described by Gauderer [2] in 1980, this minimally invasive technique became the gold standard. The advantages of PEG are less scarring, shorter operative time, fewer infections, less postoperative pain, and shorter hospital stay [2]. In most cases, when the esophagus is patent and transillumination of the stomach through the abdominal wall is achievable, PEG tube placement is safe. The three principles of safe PEG placement are endoscopic gastric distension, endoscopically visible focal finger invagination, and transillumination [3,4]. However, these criteria are not considered in children with distorted anatomy because of severe scoliosis or intra-abdominal adhesions due to VP shunts, peritoneal dialysis, or previous operations. In these patients, a high risk of bowel or hepatic injury exists. Laparoscopy offers better and direct visualization of the stomach, and any adhesions can be released with this minimally invasive method.
According to a literature review on the complications of PEG insertions, the most common major complications after the conventional PEG procedure are systemic infections (3.5%) and peritonitis, sepsis, or wound dehiscence (1.5%). Pneumoperitoneum occurs in 0.7% of the patients. Asymptomatic pneumoperitoneum can occur without intestinal perforation because of the procedure; however, esophagus or bowel perforations occur in 0.3% of the patients. Gastrocolic fistulas because of the interposition of the splenic flexure between the anterior abdominal and gastric walls occurs in 0.45% of the patients. Buried bumper, intraabdominal bleeding, and ileus are detected in 1% of the patients [3]. Impaired coagulation, severe ascites, peritonitis, and local esophageal and general gastrointestinal obstructions are considered absolute contraindications for PEG placement [6]. Severe kyphoscoliosis with interposed organs and distorted anatomy are relative contraindications [6]. Vervloessem et al. [7] analyzed the potential risk factors for major complications in 449 patients and found that only VP shunts were associated with a significantly high major complication rate. Although PD catheters, hepatomegaly, esophageal stenosis, and coagulopathy had high complication rates, the difference between the two rates was not significant.
In our institute, L-PEG was started in 2014 after a major complication in a patient with a VP shunt. Thereafter, all patients at high risk for intestinal injury (patients with VP shunt, PD catheter, previous abdominal surgery, severe thoracoabdominal deformities, hepatomegaly, or intra-abdominal masses) underwent L-PEG placement. Before selection of patients, conventional PEG placement was performed in 15 high-risk patients, that is patients with severe TAD (n=7), abdominal tumor (n=6), and VP shunts (n=2). Three major complications, namely colon perforation (n=1), gastrocolic fistula (n=1), and pneumoperitoneum (n=1), occurred.
Colonic perforation was found in a patient whose VP shunt had been placed 2 years earlier. The patient developed peritonitis on the first postoperative day. Laparotomy was performed, and two perforation openings were found in the transverse colon, which were closed with a double-layer suture. The distal catheter of the VP shunt was temporarily externalized. The PEG was converted to a gastrostomy tube. A gastrocolic fistula was observed in a 3-year-old boy with Fallot tetralogy, severe TAD, and somatomental retardation. The internal bumper was removed endoscopically and closure of the chronic fistula was planned; however, the patient was lost to follow-up and the chronic fistula was later closed surgically. Pneumoperitoneum caused by dislodgement of the tube in the early postoperative period was observed in an autistic patient with severe TAD. Gastropexy was performed laparoscopically. This complication was independent of the surgical technique as well as of the patient's high-risk status.
After selection of high-risk patients, 10 L-PEG placements were performed and the indications for laparoscopic guidance were VP shunts (n=5), previous abdominal surgeries (n=4; duodenal atresia, previous gastrostomy, left nephrectomy because of Wilms tumor, and tumor biopsy from rhabdomyosarcoma), and severe TAD (n=1). Adhesions were found in three (30%) patients, of whom two had a VP shunt and one had a previous gastrostomy. The advantage of L-PEG is that surgeons and endoscopists perform the same procedures, and therefore, there is no requirement for learning a new technique. The endoscopist performs the original pull technique and the surgeon attains umbilical access as in any laparoscopic procedure for a 5-mm camera port. We recommend the open (Hasson) technique over the Veress needle technique to prevent vessel, hepatic, or bowel injury. Any adhesions can be released laparoscopically before the gastrostomy tube is inserted.
Clues to the Pathogenesis of Melasma from its Histologic Findings
Abstract

Melasma is a common acquired hypermelanosis that affects sun-exposed areas of the skin, especially the face. Its histologic manifestations are evident in the epidermis, extracellular matrix, and dermis. One of the hallmarks of melasma is an increase in the amount of epidermal melanin; however, whether melanocyte numbers increase or not is a topic of debate. Interestingly, basement membrane abnormalities also characterize melasma. Furthermore, solar elastosis is recognized as one of the dermal pathologic findings of melasma. These findings suggest that extracellular matrix abnormalities are consistently found in melasma. In the dermis, increased vascularity and increases in mast cell numbers are observed, indicating that dermal factors have important roles in the pathogenesis of melasma, despite melasma being characterized by epidermal hyperpigmentation. This review discusses these histologic characteristics of melasma, and it considers their implications for the pathogenesis of this skin condition.
Introduction
Melasma is an acquired hypermelanosis characterized by the development of symmetrical, irregular light-to-dark brown macules and patches on sun-exposed areas of the skin, especially on the skin of the face [1]. It is common among Asian and Hispanic women who are in their third or fourth decades of life [2]. Three patterns of melasma are recognized clinically that are based on the distribution of the hyperpigmentation on the face, namely, the centrofacial, malar, and mandibular patterns. However, melasma often presents as a mixture of these patterns. Factors involved in the pathogenesis of melasma include a genetic predisposition, chronic exposure to ultraviolet (UV) radiation, and female sex hormones [3][4][5][6]. However, the pathogenesis of melasma has not yet been fully elucidated.
The histopathologic features of melasma skin might provide clues towards understanding its pathogenesis. This review discusses five histologic characteristics of melasma, namely, epidermal hyperpigmentation, basement membrane disruption, solar elastosis, increased vascularization, and a high prevalence of mast cells, and it considers their implications for the pathogenesis of melasma.
Epidermal hyperpigmentation
The most characteristic histologic feature of melasma is the increase in the amount of melanin in the epidermis. Fontana-Masson staining has shown that the melanin content of melasma skin is higher in all layers of the epidermis, including the stratum corneum, than that in perilesional normal skin [7][8][9]. Image analysis of Fontana-Masson-stained sections of skin from 22 patients with melasma showed a significant difference in the density of melanin between melasma skin (mean ± standard deviation [SD]: 0.37 ± 0.02) and perilesional normal skin (0.34 ± 0.02) (p<0.01) [10]. These findings indicate that the development of melasma involves accelerated melanin synthesis, increased levels of melanin transfer to the keratinocytes, and reduced melanin degradation.
Reports on melanocyte numbers in melasma are inconsistent. Kang et al. [7] found a higher melanin content and increased numbers of melanocytes in melasma skin. Their study involved the quantitative image analysis of 56 Fontana-Masson-stained sections. They showed that, compared with perilesional normal skin, the number of melanocytes per millimeter of epidermal length and the number of melanocytes per millimeter of rete ridge length increased by 24% and 27%, respectively, in melasma skin, while the pigmented area per millimeter of epidermal length and the pigmented area per millimeter of rete ridge length increased by 73% and 39%, respectively. In addition, ultrastructural observations that included clear increases in the numbers of melanosomes and melanocytes in melasma skin were reported. In contrast, a study by Grimes et al. [11] of 22 skin specimens from subjects with Fitzpatrick skin types IV-VI immunostained using Mel-5, did not find a significant increase in melanocyte numbers in melasma skin compared with perilesional normal skin. Moreover, in their study of 44 patients with melasma, Miot et al. [10] did not find any differences in melanocyte numbers between melasma skin and perilesional normal skin sections labeled using a Melan-A antibody.
Electron microscopy has shown higher numbers of mature melanosomes in keratinocytes and melanocytes in melasma skin [10], and a significantly higher number of dendrites per keratinocyte in melasma skin (7.55 ± 2.53 dendrites per keratinocyte) than that in perilesional normal skin (5.28 ± 1.85 dendrites per keratinocyte) (p<0.05) [11]. Furthermore, electron microscopy demonstrated increased levels of activity within melanocytes in melasma skin, which was deduced from the presence of higher organelle numbers, including mitochondria, Golgi apparatuses, rough endoplasmic reticula, and ribosomes [7]. Immunohistochemistry using NKI-beteb, which recognizes the melanocyte lineage-specific pmel-17 antigen, showed a higher staining intensity in melasma skin than in normal skin (Figure 1). Compared with nonlesional skin, increased expression of tyrosinase has been demonstrated [12]. Mel-5 immunostaining, which detects tyrosinase-related protein (TRP)-1, was also more intense in melasma skin than in normal skin, suggesting that levels of TRP-1 are higher in melasma melanocytes [7]. In addition, we have observed the elevated expression of TRP-2 in melasma skin (Figure 1). These findings support the concept of an increased level of melanogenesis in the pathogenesis of melasma.
Solar elastosis

The higher level of solar elastosis in melasma skin implies that chronic sun exposure is a prerequisite for the development of melasma. After UVB irradiation, keratinocytes induce melanocyte proliferation and melanogenesis by secreting stem cell factor (SCF), basic fibroblast growth factor (bFGF), interleukin-1, endothelin-1, inducible nitric oxide synthase, α-melanocyte-stimulating hormone, and adrenocorticotropic hormone [16][17][18][19]. The secretion of prostaglandin E2 after UVB exposure results in larger and more dendritic melanocytes [20]. Furthermore, solar damage of the dermis could induce the secretion of melanogenic cytokines, including SCF and hepatocyte growth factor, from dermal fibroblasts, thereby influencing the development of hyperpigmentation in the overlying epidermis [21][22].
Increased vascularization
Accumulating evidence has shown that the number of blood vessels is higher in melasma lesions than in perilesional normal skin [23][24][25]. An immunohistochemical study of factor VIIIa-related antigen showed a considerable increase in the number of enlarged blood vessels, vessel size, and vessel density in melasma skin compared with perilesional normal skin [23]. The elevated expression of vascular endothelial growth factor (VEGF) in keratinocytes has led to the hypothesis that VEGF may play a role in the behavior of the melanocytes in the skin, because functioning VEGF receptors were demonstrated in melanocytes in vitro [26]. Elevations in the levels of c-kit, SCF, and inducible nitric oxide synthase have also been observed, which could affect vascularization [27,28].
Tranexamic acid (TXA) inhibits plasmin, a key molecule in angiogenesis that converts extracellular matrix-bound VEGF into its free forms [29]. TXA has also been reported to suppress neovascularization induced by bFGF [30]. In a recent clinical trial that evaluated the efficacy of systemic TXA in the treatment of melasma, we demonstrated significant decreases in the lesional melanin index and in the erythema index after the oral administration of 250 mg TXA three times per day for eight weeks [31]. Histologic analysis showed significant reductions in the level of epidermal pigmentation and in vessel numbers (Figure 2A-D). These findings suggest that interactions between the increased vascularization and the melanocytes may contribute to the development of hyperpigmentation within the overlying epidermis.
The role of mast cells in the development of melasma has not been definitively elucidated. Since repetitive UV irradiation induces the production of mast cell tryptase, which degrades type IV collagen, elevated mast cell numbers and tryptase levels could weaken the basement membrane in melasma skin [32]. Mast cells could also trigger solar elastosis by inducing the production of elastin by fibroblasts, either directly or via other cell types or cytokines [33,34]; indeed, solar elastosis did not develop in mast cell-deficient mice that were repeatedly irradiated with UV [35]. Elevated numbers of mast cells, together with the presence of infiltrating leukocytes and dilated blood vessels, might reflect the chronic skin inflammation that underlies the development of melasma [8]. Finally, mast cells can also induce vascular proliferation by secreting angiogenic factors, including VEGF, fibroblast growth factor-2, and transforming growth factor-β [36].
Basement membrane disruption
Several studies have investigated the status of the basement membrane in melasma skin. For example, Sanchez et al. [13] demonstrated vacuolar degeneration of the basal cells and focal vacuolar degeneration of the basement membrane in 3.9% (3/76) of melasma skin specimens. In contrast, Kang et al. [7] did not observe any disruption of the basement membrane in their evaluation of skin samples from 56 Korean patients with melasma using diastase-resistant periodic acid-Schiff (D-PAS) staining and electron microscopy. However, the same authors recently reported that pendulous melanocytes associated with basement membrane abnormalities are a characteristic feature of melasma [14]. Another study of melasma patients with Fitzpatrick skin types IV and V revealed that D-PAS staining and anti-collagen type IV immunohistochemistry indicated damage to the basement membrane in 95.5% and 83% of the skin samples, respectively [9]. Basement membrane disruption could be caused by elevated levels of matrix metalloproteinase (MMP)-2 and MMP-9, which degrade type IV and type VI collagen in the skin during chronic UV exposure [15].
Since free melanin and melanophages are present in the dermis of melasma skin, disruption of the basement membrane could facilitate the descent or migration of melanocytes and melanin into the dermis [7,9]. Histologic findings, such as the protrusion of pigmented basal cells into the dermis in 66% of melasma skin samples compared with 20% of photoprotected nonlesional skin samples, support this hypothesis [9]. Consequently, the treatment of melasma is challenging: it is often recalcitrant to therapy and, even after being successfully cleared, it frequently recurs [13].
Solar elastosis
Solar elastosis is one of the most commonly observed histologic characteristics of melasma skin. Kang et al. [7] reported a moderate-to-severe degree of solar elastosis in 93% of the melasma patients included in their study. Melasma skin also showed a significantly higher degree of solar elastosis than perilesional normal skin (83% vs. 29%, p<0.05) [9].
The higher level of solar elastosis in melasma skin implies that chronic sun exposure is a prerequisite for the development of melasma. After UVB irradiation, keratinocytes induce melanocyte proliferation and melanogenesis by secreting Stem Cell Factor (SCF), basic Fibroblast Growth Factor (bFGF), interleukin-1, endothelin-1, inducible nitric oxide synthase, α-melanocyte-stimulating hormone, and adrenocorticotropic hormone [16][17][18][19]. The secretion of prostaglandin E2 after UVB exposure results in larger and more dendritic melanocytes [20]. Furthermore, solar damage of the dermis could induce the secretion of melanogenic cytokines, including SCF and hepatocyte growth factor, from dermal fibroblasts, thereby influencing the development of hyperpigmentation in the overlying epidermis [21][22].
Numerical simulation of microscale acoustic streaming flow
The efficient mixing of fluid samples in miniaturized total analysis systems is essential for numerous applications including biological screening assays, chemical extraction, polymerization, cell analysis, and protein folding. Miniaturized microfluidic platforms have recently emerged for microscale fluid mixing with their capabilities of high-throughput sample processing and reduced sample depletion. However, microscale fluid mixing is inherently hampered by the characteristics of low Reynolds number flows in diminutive channels: it depends mainly on molecular diffusion and thus requires long processing times. To address this conundrum, a variety of active microfluidic approaches for swift and efficient mixing have been developed using electro-kinetic flow, laser-induced flow, magnetic stirring, and acoustic streaming flow (ASF). Among these techniques with external force fields, acoustofluidic mixing has been acclaimed for its on-demand, controllable, non-invasive and biocompatible nature. In this study, we performed numerical simulations of acoustic streaming flows, induced by highly localized surface acoustic waves (SAWs), in a microscale fluidic channel based on an acoustic wave attenuation model.
Introduction
The efficient mixing of fluid samples in miniaturized total analysis systems is essential for numerous applications 1 including biological screening assays, 2 chemical extraction, 3 polymerization, 4 cell analysis and protein folding. 5 Miniaturized microfluidic platforms have recently emerged for microscale fluid mixing with their capabilities of high-throughput sample processing and reduced sample depletion. 6 However, microscale fluid mixing is inherently hampered by the characteristics of low Reynolds number flows in diminutive channels. Microscale fluid mixing depends mainly on molecular diffusion and thus requires long processing times. To address this conundrum, a variety of active microfluidic approaches for swift and efficient mixing have been developed using electro-kinetic flow, 7-8 laser-induced flow, 9 magnetic stirring, 10 and acoustic streaming flow (ASF). [11][12] Among these techniques with external force fields, acoustofluidic mixing has been acclaimed for its on-demand, controllable, non-invasive and biocompatible nature. In this study, we performed numerical simulations of acoustic streaming flows, induced by highly localized surface acoustic waves (SAWs), in a microscale fluidic channel based on an acoustic wave attenuation model.
Methods
For numerical simulations of 3D microscale flows interacting with acoustic waves, COMSOL Multiphysics was utilized to model the affected fluid streamlines in the proximity of focused SAWs. The non-linear time-averaged body force and the SAW attenuation model for the substrate/fluid interface and the fluid domain were also considered. The attenuation lengths were determined analytically using the parametric relations α⁻¹ ≈ 12.8 λSAW and β⁻¹ ≈ 3×10⁶ λSAW², where α⁻¹ is the attenuation length at the substrate/fluid interface, λSAW is the wavelength of the applied surface acoustic wave, and β⁻¹ is the attenuation length within the fluid itself. 13 The effect of high-frequency SAWs originating from a focused interdigital transducer (FIDT) on the flow streamlines was calculated in our numerical model. Grid independence was confirmed by eliminating the influence of mesh parameters on the simulated acoustic velocities and streaming velocities in the laminar flow frequency domain, to avoid numerical error arising from grid convergence.
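As a quick sanity check on the length scales involved, the two parametric relations above can be evaluated directly. The sketch below is illustrative only; the substrate SAW speed (and hence the wavelength implied for the 129.5 MHz drive frequency mentioned in the results) is an assumed value for a typical piezoelectric substrate, not a parameter taken from the original model.

def attenuation_lengths(wavelength_saw):
    """Attenuation lengths from the parametric relations quoted in the text
    (wavelength in metres): alpha^-1 ~ 12.8*lambda_SAW at the substrate/fluid
    interface, beta^-1 ~ 3e6*lambda_SAW**2 within the fluid."""
    alpha_inv = 12.8 * wavelength_saw
    beta_inv = 3.0e6 * wavelength_saw ** 2
    return alpha_inv, beta_inv

# Assumed substrate SAW speed (~3990 m/s, typical for lithium niobate) and the
# 129.5 MHz frequency used later in the results section.
c_saw = 3990.0
f_saw = 129.5e6
lam = c_saw / f_saw                          # ~31 micrometres
alpha_inv, beta_inv = attenuation_lengths(lam)
print(alpha_inv, beta_inv)                   # ~0.4 mm at the interface, ~2.9 mm in the fluid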
The COMSOL laminar flow model started with the definition of constant and variable parameters in both local and global domains. The significant constant parameters were defined globally for all physics while the problem was simulated through a fully coupled scheme. The remaining variables were defined in the extended definition step of each module separately to make them accessible to the respective model. Here U is the first-order fluid displacement velocity magnitude in the direction orthogonal to the fluid flow; it was determined analytically as U = ξω, where ξ represents the overall attenuation in the substrate and fluid up to the first-order term, while higher-order terms involving the reflection coefficients Ri were truncated to avoid additional complexity.
Here f(y,z) is the attenuation function, which accounts for the attenuation in both the substrate and the fluid as well as for the angle of refraction θR = sin⁻¹(cf/cs) at which a surface acoustic wave enters the fluid. ξ0 is the initial displacement of the SAWs, which was determined experimentally by laser Doppler velocimetry and particle image velocimetry. The first-order reduced attenuation function follows from these relations together with the incompressibility of the fluid. Along with the fluid flow, the trajectories of submicron particles under the strong acoustic streaming effect were also simulated using the particle tracing module.
Results and discussion
The magnitude of the time-averaged body force was directly related to the square of the acoustic velocity and to the attenuation coefficients. The numerical results show more effective streaming in the fluid region than at the fluid/substrate interface, because the attenuation in the fluid scales with the square of the SAW frequency whereas in the substrate it scales only linearly with frequency. As shown in Fig. 1, the disturbance in the flow streamlines was clearly observed as the ASFs became dominant over the lateral flow velocity. The simulated streamlines were plotted for increasing body force within the proximity of 11.4 wavelengths, with 129.5 MHz acoustic waves acting orthogonally to a 3 mm/s (averaged) lateral flow in a 550 wide microchannel. A dimensionless parameter v* was introduced to represent the ratio of the time-averaged second-order streaming velocity (v2) to the lateral inflow velocity (vf), i.e., v* = v2,max/vf; it indicates the effectiveness of acoustic streaming in the fluid for the applied SAWs. Increasing the body force and streaming in the microchannel, through gradually elevated input power or acoustic wave amplitude, increased the value of v* relative to the constant inflow velocity. In Fig. 1(a), the effect of ASFs on the fluid streamlines was insignificant due to the small substrate velocity (v* = 1.99) of the orthogonally applied acoustic beam. In Fig. 1(b-d), in contrast, prominent ASF-induced vortices gradually developed due to the strong body force with increasing v*. We found that the deflection of the flow streamlines by the ASF-induced microscale vortices, and the resultant flow mixing, were directly dependent on the amplitude of the acoustic field. It was also confirmed that the wave attenuation in the fluid as well as in the substrate, which in turn determined the size of the microscale vortices, was strongly correlated with the acoustic wavelength.
Conclusion
In the present study, we conducted numerical simulations of ASF-induced flow mixing by highly localized SAWs. We observed efficient ASFs in the microchannel, enabling effective on-chip flow mixing. The results of the numerical simulations support effective mixing at the microscale through acoustic attenuation of high-frequency SAWs generated from a FIDT.
Generation-Augmented Query Expansion For Code Retrieval
Pre-trained language models have achieved promising success in code retrieval tasks, where a natural language documentation query is given and the most relevant existing code snippet must be found. However, existing models focus only on optimizing over documentation-code pairs by embedding them into a latent space, without incorporating external knowledge. In this paper, we propose a generation-augmented query expansion framework. Inspired by the human retrieval process of sketching an answer before searching, we utilize a powerful code generation model to benefit the code retrieval task. Specifically, we demonstrate that rather than merely retrieving the target code snippet according to the documentation query, it is helpful to augment the documentation query with its generation counterpart: code snippets generated from the documentation by the code generation model. To the best of our knowledge, this is the first attempt to leverage a code generation model to enhance the code retrieval task. We achieve new state-of-the-art results on the CodeSearchNet benchmark and surpass the baselines significantly.
Introduction
Benefiting from the development of transformers (Vaswani et al., 2017; Feng et al., 2020b; Guo et al., 2021) as well as pre-training techniques (Li et al., 2022a), there has been a great amount of progress in code-related tasks, including code generation (Brown et al., 2020), code search (Husain et al., 2020), and code auto-completion (Lu et al., 2022). Among these, code search or code retrieval (Husain et al., 2020) is an essential problem in software engineering, as it enables the efficient finding and reuse of existing code snippets, boosting developers' productivity. In general, it aims to retrieve function-level code snippets given a natural language documentation query.* Recently, CodeBERT (Feng et al., 2020b) has achieved strong performance in code-related tasks by pre-training a bimodal model to learn a general-purpose representation for both programming language (PL) and natural language (NL). By considering the inherent code structure (data flow), GraphCodeBERT (Guo et al., 2021) further boosts the performance on downstream tasks. Meanwhile, CodeRetriever (Li et al., 2022a) adopts a contrastive-training approach to learn semantic representations of code on a large-scale pretraining corpus. Despite the success of all these code-specific frameworks (Feng et al., 2020b; Guo et al., 2021; Li et al., 2022a; Guo et al., 2022; Husain et al., 2020), they focus only on learning information within the code-documentation context (either by optimizing specific downstream tasks or by contrastively learning general-purpose representations), without considering external knowledge, which limits the expressiveness of the representations.
* Work was done while Dong Li was interning at Microsoft.
Figure 2: An example in which generated code helps the documentation query find the ground truth. Given an NL "Documentation", the primary goal is to retrieve the "Ground Truth" code snippet from a candidate pool, while the "Code Retrieved by GraphCodeBERT" is the one that the retrieval model (with doc query) (Guo et al., 2021) actually finds. After incorporating "Generated Code", the retrieval model proposed in this paper retrieves the "Ground Truth" code as its first choice.
Specifically for code retrieval tasks, due to the intrinsic difference between NL and PL, current end-to-end code search models are insufficient to retrieve the most semantically similar code snippet. Since documentation queries and code snippets are typically embedded into the same latent space, the model tends to find an "efficient" shortcut: matching two vectors once certain keywords are triggered. For example, in Figure 2, the "Documentation" at the top left is a natural language query; an end-to-end model (Guo et al., 2021) finds the "Code Retrieved by GraphCodeBERT" at the top right because they share a few common keywords. This misleading result illustrates the drawbacks of these models.
To tackle this issue, we propose GACR, Generation-Augmented Code Retrieval, a two-stage framework for the code retrieval task, illustrated in Figure 1. First, the natural language (NL) documentation is used to generate code snippets with a code generation model (for example, GPT-3). Then the NL documentation, augmented with the generated code, serves as an expanded query for retrieval. The benefit of GACR is that it can leverage extra domain knowledge, specifically the ability to generate code snippets from a natural-language documentation prompt. This bridges the gap between NL and PL that has been overlooked in current end-to-end code retrieval models. Considering the same example in Figure 2, by leveraging the power of the generation model, the "Documentation" can be used to generate a code snippet, the "Generated Code", which is much closer semantically to the "Ground Truth". Thus, our proposed model GACR is able to find the desired code.
Though some works (Parvez et al., 2021b; Lu et al., 2022) leverage retrieval models to help generation, enhancing code retrieval with a generation model has received little attention. The main challenges are how to generate code snippets of good quality and how to fuse them with documentation queries. Thanks to the promising achievements of GPT (Brown et al., 2020), we select Codex, a generation model fine-tuned ad hoc on publicly available code datasets. For the utilization, we design a dual-representation attention paradigm that learns information from the NL documentation and the PL generated code snippet both separately and mutually. The expanded queries fuse content from NL and PL, leading to informative and expressive representations.
We evaluate the performance of GACR on the CodeSearchNet benchmark (Husain et al., 2020) with 6 programming languages. The empirical results show that the proposed generation-augmented code retrieval models achieve significant improvements compared to the baseline models (Guo et al., 2021; Feng et al., 2020a). To this end, we summarize our contributions as follows:
• We propose a generation-augmented framework for code retrieval tasks. To the best of our knowledge, this is the first attempt that leverages the power of a generation model to help with code retrieval.
• We design generation-augmentation frameworks, spanning from the utilization of a single generated code snippet to multiple distinct code snippets to suit various deployments, and investigate the fusion pattern of the NL documentation and the PL generated code, as well as different pre-trained models.
• Extensive empirical experiments on the CodeSearchNet benchmark validate the superiority of the proposed augmentation paradigm.
The rest of the paper is organized as follows: in Section 2, we introduce the background and formalize the notation. In Section 3, we detail how we incorporate the generated code for retrieval. In Section 4, we conduct experiments and analyze the empirical results. In Section 5, we summarize related work, and in Section 6 we conclude the paper. Further, in Appendix A, we show specific cases of how the generated code snippets boost or depress performance on retrieval tasks.
Dense Retrieval
Encoder-based dense representation frameworks have achieved great success in retrieval tasks (Shen et al., 2014; Husain et al., 2020; Zhang et al., 2022). In short, queries and documents are mapped into a latent space, and similarity metrics are computed according to their distance in that space. Specifically for code retrieval, the training set consists of documentation-code snippet pairs (D, C). Denote the sequence of natural language documentation tokens as D = {d_1, d_2, ..., d_m} with length m, where each d_i (i ∈ {1, 2, ..., m}) is a token. Similarly, the sequence of code snippet tokens is denoted as C = {c_1, c_2, ..., c_n}. The encoder takes the pairs as input and maps them into vector representations, such that V_d = Encoder(D) and V_c = Encoder(C). The semantic similarity between documentation and code can then simply be defined as the dot product s(D, C) = V_d · V_c (Equation (1)). In the training stage, the encoder is optimized to maximize the scores in Equation (1) for related documentation-code pairs, which is analogous to minimizing the distance between them in the latent space. In the inference stage, the similarity scores between the query vector (obtained by feeding D into the encoder) and all the candidate code vectors are computed according to Equation (1), and the retrieved code snippets are the ones with the largest scores.
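A minimal sketch of this scoring and ranking step is given below. It assumes the encoder outputs are already available as dense vectors (random arrays stand in for Encoder(D) and Encoder(C)); it is illustrative only, not the paper's implementation.

import numpy as np

def retrieve(query_vec, candidate_vecs, top_k=5):
    """Rank candidate code vectors by dot-product similarity with the query,
    i.e. s(D, C) = V_d . V_c as in Equation (1)."""
    scores = candidate_vecs @ query_vec      # one score per candidate
    order = np.argsort(-scores)              # highest similarity first
    return order[:top_k], scores[order[:top_k]]

# Toy usage: 768-dimensional embeddings for one query and 1000 candidates.
rng = np.random.default_rng(0)
v_d = rng.normal(size=768)                   # stands in for Encoder(D)
v_cs = rng.normal(size=(1000, 768))          # stands in for Encoder(C) per candidate
top_idx, top_scores = retrieve(v_d, v_cs)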
Prompt-based Code Generation
Large pre-trained language models have demonstrated an impressive ability to generate code (Chen et al., 2021; Brown et al., 2020; Wang and Komatsuzaki, 2021; Nijkamp et al., 2022; Li et al., 2022b). A prompt-based code generation or completion model (Chen et al., 2021) takes a natural language prompt as input and outputs the corresponding code snippet. In this paper, we mainly study the Codex (Chen et al., 2021) model, a GPT-based (Brown et al., 2020) language model that is fine-tuned on publicly available code collected from GitHub, and in particular generates functional code snippets given NL documentation strings. In our setting, the documentation D is the prompt. After feeding it into the generation model (Codex), we obtain the generated code snippet G = Gen(D), presented as a token sequence G = {g_1, g_2, ..., g_p}.
Methodology
In this section we propose to enhance the documentation query with its generation counterpart for code retrieval. At a high level, we first feed the given query D, which serves as a prompt, into a code generation model powered by Codex, which generates a code snippet G accordingly. Afterward, the documentation together with its generated auxiliary code is encoded jointly as the query vector V_query = Encoder(D, G), which is used to retrieve the most correlated code snippets from the candidate pool.
Query Augmentation with Single Generated Code
One strategy is to append the generated auxiliary code tokens to the end of the documentation query and then feed the result into the encoder model to obtain the vector representation. Note, however, that the generated code and the original documentation come from distinct semantic domains, natural language (NL) and programming language (PL), so using one single representation for both is a drawback. We therefore design a dual representation attention paradigm: the two semantic input sequences (documentation and generated code) each possess their own vector representation, while the attention mechanism ensures that information is learned mutually. The overall model architecture is shown in Figure 3.
Figure 3: An illustration of the generation-augmented code retrieval framework.
Given a sequence of documentation tokens D, we feed it into a well-trained code generation model, Codex (aka GPT-3), and obtain the generated code token sequence G as output. The two sequences are then concatenated, with special tokens, into a single input sequence X = {[CLS], d_1, ..., d_m, [SEP], [CLS], g_1, ..., g_p, [SEP]}, whose length is m + p + 4. The input sequence X, consisting of fused tokens from the documentation and the generated code, is then converted into a vector representation Y. Denoting the transformer operation as T, after feeding the input X into the K-layer multi-head self-attention transformer model (Vaswani et al., 2017; Guo et al., 2021) we obtain a vector sequence Y = T(X) = {y_1, y_2, ..., y_{m+p+4}} with the same length as the input sequence. The vectors at indices 1 and m + 3 of Y, corresponding to the CLS tokens of the documentation and the generated code, are extracted and concatenated as the final query vector V_query = [y_1, y_{m+3}]. Correspondingly, the target vector is replicated along the dimension: V_target = [z_1, z_1].
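The sketch below illustrates this dual-CLS query construction, assuming a HuggingFace-style tokenized input and encoder. The special-token ids, the encoder call, and the helper name are assumptions for illustration; they are not the paper's released code.

import torch

def build_query_vector(doc_ids, gen_ids, encoder, cls_id=101, sep_id=102):
    """Fuse documentation token ids (D) and generated-code token ids (G) into
    X = [CLS] d1..dm [SEP] [CLS] g1..gp [SEP], encode X, and concatenate the
    two [CLS] output vectors (indices 1 and m+3 in the paper's 1-based
    notation) as V_query."""
    x = [cls_id] + doc_ids + [sep_id] + [cls_id] + gen_ids + [sep_id]
    input_ids = torch.tensor([x])
    y = encoder(input_ids=input_ids).last_hidden_state[0]   # shape (m+p+4, hidden)
    v_doc = y[0]                       # first [CLS] vector
    v_gen = y[len(doc_ids) + 2]        # second [CLS] vector (0-based index m+2)
    return torch.cat([v_doc, v_gen], dim=-1)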
Augmentation with Multi-Generated Codes
In Section 3.1, we proposed a framework that incorporates a single generated code snippet into the documentation query. Here we extend the framework to include multiple distinct generated code snippets to further generalize and boost the model. In short, multiple generated code snippets are integrated before they are fed into the encoder. To enable a diverse expansion, we limit the length of each code snippet. Given k unique generated code snippets, the input token sequence can be formulated as X = {D, G_1, ..., G_k}, where D represents the sequence of documentation tokens and G_i, i ∈ {1, ..., k}, is the i-th generated code token sequence. As in Section 3.1, we still extract two special vectors, corresponding to the first and second CLS tokens (ahead of D and G_1), from the output vector sequence Y. In this case, all generated codes are mixed together before they are fed into the encoder, a scheme we call "pre-attention": fusion before the attention. This pre-attention mechanism allows all distinct generated code snippets to learn from each other mutually.
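A small sketch of this pre-attention input construction is shown below. The per-snippet truncation length and the separator layout for the snippets after G_1 are assumptions (the text only specifies the two leading CLS tokens), so this is one plausible realization rather than the exact format used in the paper.

def build_multi_input(doc_ids, gen_ids_list, max_code_len=64, cls_id=101, sep_id=102):
    """Pre-attention fusion X = {D, G1, ..., Gk}: each generated snippet is
    truncated to max_code_len tokens and appended after the documentation,
    so every snippet can attend to the others inside one encoder pass."""
    x = [cls_id] + doc_ids + [sep_id]
    for gen_ids in gen_ids_list:
        x += [cls_id] + gen_ids[:max_code_len] + [sep_id]
    return x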
Optimization and Inference
In the training stage (for each batch B), we are given |B| related/positive documentation-code pairs (D_b, C_b), b ∈ {1, 2, ..., |B|}. With either single-generated or multi-generated code augmentation, we obtain the corresponding vector pairs (V_query^b, V_target^b) (see Figure 3). Similar to (Husain et al., 2020; Guo et al., 2021; Li et al., 2022a), an in-batch optimization is utilized, in which each positive pair is contrasted against the other pairs in the batch. In the inference stage, the similarity score is calculated by Equation (1) for each query D with respect to all the candidate code snippets, and the desired code snippets are obtained by ranking the scores accordingly.
Table 1: Results on the CodeSearchNet (Husain et al., 2020) benchmark (further crafted by (Guo et al., 2021)). We highlight in bold the best model and underline the second best in each column. We also list the relative improvements in percent with respect to the initialization base models (the reproduced GraphCodeBERT and CodeRetriever, respectively).
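For concreteness, the sketch below shows the standard in-batch contrastive objective used by the cited retrievers; the exact loss in the paper is not reproduced here, so the temperature term and this particular cross-entropy formulation are assumptions.

import torch
import torch.nn.functional as F

def in_batch_loss(v_query, v_target, temperature=1.0):
    """In-batch objective over |B| (query, target) pairs: the matching target
    is the positive for its query and the other |B|-1 in-batch targets act as
    negatives.  v_query and v_target have shape (|B|, dim)."""
    scores = v_query @ v_target.t() / temperature             # (|B|, |B|) similarity matrix
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)                    # maximize the diagonal scores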
Experiments
We compare the performance of the proposed framework GACR with a number of state-of-the-art baseline models. The experiments are carried out on the CodeSearchNet code corpus, initially released by (Husain et al., 2020) and further curated by (Guo et al., 2021) for code quality reasons. Here, we adopt the same settings as (Guo et al., 2021). Overall, all models (both baselines and the proposed ones) derive vector representations for the query and the candidates, then compute their dot product as a score for ranking and retrieval. Performance is measured in terms of Mean Reciprocal Rank (MRR).
RQ1. How does the generation-augmented framework GACR perform compared to the baselines?
Table 1 shows the main results of all models. GACR is our proposed model, where 'S' denotes expanding the documentation query with a single generated code snippet (Section 3.1) and 'M' denotes multiple generated snippets (Section 3.2). We highlight the best results for each coding language (each column). Overall, our proposed models outperform all the baselines, validating the efficacy of the generation-augmented query expansion framework. Moreover, the GACR-M model exhibits a significant improvement of up to 27.8% on the PHP data compared to GraphCodeBERT. Figure 5 displays the superiority of the proposed generation-augmented frameworks from the perspective of counting relative ranks. Specifically, we compare the rank of the ground truth code snippet for the same documentation query with and without augmentation, and then count how often each obtains a smaller value (higher rank). From Figure 5, we observe that the augmented query consistently attains higher ranks across all datasets, especially for the PHP language.
Pre-train Initialization
For the different pre-training initializations, GraphCodeBERT and CodeRetriever, GACR models consistently obtain better results, and the CodeRetriever-based initialization outperforms the GraphCodeBERT-based one, owing to its large-scale contrastive learning strategy.
RQ2. How could generated codes help retrieval tasks?
In this subsection, we investigate how the generated code snippets help the retrieval task. We discuss two aspects: the effect of each component of the generated code function, and a specific case of why generation-augmentation works.
Function-level generated code performance
In general, the generated code snippet is presented at the function level. A function can be divided into two parts in both semantic and spatial terms: the function name (typically the first line of a piece of code, including the input arguments) and the remaining part, called the function body. Here we empirically investigate how these different components help the downstream code retrieval task. We separately treat the documentation, the generated function (the entire generated code), the generated function name, and the generated function body as queries to retrieve the code; the results are presented in Figure 4. In all cases, we find that the generated function name outperforms the generated function body, which indicates that the name of a function typically contains as much or even more information than the function body. Comparing the documentation with the generated function, solely treating the generated code as a query achieves comparable performance to the documentation and is sometimes even better. This confirms the benefit of combining these two pieces of information (from the generated code and from the documentation).
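The name/body split used for this comparison can be done with a simple heuristic, sketched below; splitting on the first line is an assumption that works for typical single-function snippets and is not necessarily the exact rule used in the paper.

def split_generated_function(code):
    """Split a function-level snippet into its name line (the signature,
    including input arguments) and the remaining function body."""
    lines = code.strip().splitlines()
    name_line = lines[0] if lines else ""
    body = "\n".join(lines[1:])
    return name_line, body

# Example:
name, body = split_generated_function(
    "def invert_dict(d):\n    return {v: k for k, v in d.items()}"
)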
Figure 7 gives a specific example of how the generated code helps the documentation query to find the right code snippet. In the base model (GraphCodeBERT), the "Documentation" in Figure 7 serves as a query aiming to find the "Ground Truth", yet the model actually retrieves the "Code Retrieved by GraphCodeBERT". This can be interpreted as follows: without an understanding of the description ("inverse keys and values of a dictionary"), the model finds a code snippet by looking up and matching some keywords, for example "merge" and "values". On the contrary, the generation model obtains a better understanding of the description in the "Documentation". It indeed creates semantically meaningful functions ("Generated Code", Samples 1 and 2) that share almost the same function name as the "Ground Truth", as well as its real functionality. By leveraging the power of the generation side, a better interpretation of the documentation leads to more accurate retrieval.
Study of the Attention Mechanism
In this subsection, we study the effect of the attention mechanism in the proposed framework. In short, we design four types of masks, shown in Figure 6, for fusing documentation and code.
The corresponding results are presented in Table 2.
Numbers of Augmented Code
In Table 1, we observe that GACR-M generally achieves better results than GACR-S, indicating that incorporating more generated code snippets yields better retrieval results because more information is available. Due to model limitations, the total length of the input sequence (the fusion of documentation and generated code tokens) is limited (typically to fewer than 256 tokens). We can append more generated code snippets by limiting the maximum length of each snippet. In Table 3, we list three different code length limits (from 32 to 128); the results indicate that 64 is a good balance.
Generation Hidden Representation
A generation model takes a prompt (the NL documentation) as input, turns it into a hidden representation, and further interprets that representation into code. For this comparison we use the model of (Wang and Komatsuzaki, 2021), pre-trained on the Python language.¹ Table 4 indicates that the generated code plays a better role in retrieval tasks than the hidden representation does, which might be owed to the interpretability of the decoding stage of the generation model.
¹ https://huggingface.co/NovelAI/genji-python-6B
Code Retrieval Tasks
Code retrieval or code search, i.e., retrieving a relevant code snippet from a candidate pool given a natural language query, has been widely studied for decades, spanning from traditional techniques (Vinayakarao et al., 2017) to deep learning methods (Li et al., 2022a; Feng et al., 2020b; Guo et al., 2021). CODEnn (Gu et al., 2018) is an early deep learning model that jointly learns embedding vectors for both code snippets and natural language descriptions and then calculates their similarity in the embedding space. CodeSearchNet (Husain et al., 2020) provides benchmark code search tasks in different programming languages. On top of this benchmark, CodeBERT is the first large bimodal pre-trained model, trained with Masked Language Modeling (MLM) and replaced token detection objectives. GraphCodeBERT (Guo et al., 2021) further improves performance by utilizing semantic-level code structure when designing the attention pattern. UniXcoder (Guo et al., 2022) is a unified cross-modal pre-trained model which enhances the code representation by leveraging information from code comments and ASTs. CodeRetriever (Li et al., 2022a) adopts unimodal and bimodal contrastive learning schemes and achieves the state of the art in the code search task.
Retrieval and Generation as Augmentation
Code retrieval or other information retrieval models can also serve as auxiliary units to enhance related tasks, such as code generation, code auto-completion, or code summarization (Xia et al., 2017). Hayati et al. (2018) propose a model named RECODE that explicitly refers to existing code snippets when generating code, relying on subtree retrieval. Hashimoto et al. (2018) propose a retrieve-and-edit framework for code generation and completion by optimizing a joint objective. Other work fine-tunes retrieval-augmented generation (RAG) models end-to-end. REDCODER (Parvez et al., 2021a) is a retrieval augmentation framework that supplies and enhances a code generation or summarization model by searching for relevant code or summaries in a candidate database. ReACC (Lu et al., 2022) is a code completion framework augmented by leveraging external context, i.e., retrieving semantically and lexically similar code snippets. GAR (Mao et al., 2021) is a generation-augmented retrieval model for open-domain question answering.
Conclusion
In this paper, we propose a generation-augmented query expansion framework for the code retrieval task.
To the best of our knowledge, this is the first work that leverages the generation paradigm to help code retrieval. Specifically, we design patterns that fuse the documentation and its generated code function into an expanded query to search for related code. We show through case studies that code generated from the natural language documentation enhances the semantic similarity to its ground truth code snippet. Extensive empirical experiments validate the achievement of new state-of-the-art results on the CodeSearchNet benchmark.
Termite-Susceptible Species of Wood for Inclusion as a Reference in Indonesian Standardized Laboratory Testing
Standardized laboratory testing of wood and wood-based products against subterranean termites in Indonesia (SNI 01.7207-2006) (SNI) has no requirement for the inclusion of a comparative reference species of wood (reference control). This is considered a weakness of the Indonesian standard. Consequently, a study was undertaken to identify a suitable Indonesian species of community wood that could be used as a reference control. Four candidate species of community woods: Acacia mangium, Hevea brasiliensis, Paraserianthes falcataria and Pinus merkusii were selected for testing their susceptibility to feeding by Coptotermes formosanus. Two testing methods (SNI and the Japanese standard method JIS K 1571-2004) were used to compare the susceptibility of each species of wood. Included in the study was Cryptomeria japonica, the reference control specified in the Japanese standard. The results of the study indicated that P. merkusii is a suitable reference species of wood for inclusion in laboratory tests against subterranean termites, conducted in accordance with the Indonesian standard (SNI 01.7207-2006).
Introduction
Indonesia has extensive community forests which produce a diverse range of wood species. In these forests, four species of wood are particularly abundant in both quantity and future supply: Acacia mangium, Paraserianthes falcataria, Hevea brasiliensis and Pinus merkusii. However, most of the wood species from the community forests are highly vulnerable to wood-destroying organisms such as insects and fungal decay. Of the more than 4,000 wood species in Indonesia, most of them (80-85%) are regarded as poor quality due to their low natural durability. Furthermore, there is a profound lack of knowledge on their characteristics and uses [1].
Determination of the resistance of wood and wood-based materials to damage by termites is largely dependent on well-designed and well-executed laboratory evaluations. It is therefore important that the experimental design of such laboratory tests include a suitable reference material (control). This enables the researcher to compare the performance of the candidate material/s under test with that of a known reference control. Furthermore, a reference control can often be used to monitor the viability and vigor of the test termites used in the laboratory test. The inclusion of a suitable reference control in the laboratory test can also be used to compare mortality of termites with those that are exposed to the candidate materials. Consequently, the choice of an appropriate wood species to serve as a reference control is most important for the conduct of comparative laboratory tests on the termite-resistance of both untreated and preservative-treated woods.
Unfortunately, the Indonesian standard SNI 01.7207-2006 does not specify the use of a selected reference control that will enable comparison of data obtained by different researchers. Thus, the aim of our study is to identify a suitable wood species for inclusion as a reference control in the standard.
Wood Species
The candidate wood species used in this study were Acacia mangium, Paraserianthes falcataria, Hevea brasiliensis and Pinus merkusii. All four species are rated as durability classes III and IV (based on SNI 01.7207-2006) [2]. Cryptomeria japonica was selected as the comparative reference control species because it is used as such in the Japanese Standard JIS K1571-2004 [3]. All five wood species were subjected to bioassay against termites according to the test methods described in SNI 01.7207-2006 and Japanese Standard JIS K1571-2004 (recently revised as JIS K1571-2010).
Test Method According to SNI 01. 7207-2006
This standard describes a 4-week, no-choice laboratory test using 200 g of sand matrix, 50 mL of distilled water and 200 workers of Coptotermes formosanus Shiraki. Test specimens (25 × 25 × 5 mm) were placed in an upright position within a glass jar, with one of the 25 mm edges resting on the inside wall of the jar. The test was performed with five replicates. Full details of the test method are given in SNI 01.7207-2006 and [4]. A diagram of the test method according to SNI 01.7207-2006 is provided in Figure 1.
Test Method According to JIS K1571-2004 (Recently Revised as JIS K1571-2010)
This laboratory test method is similar to that described in the Indonesian standard; the Japanese method is also a no-choice test. Sugi (Cryptomeria japonica) sapwood specimens are used as the untreated reference wood species (control), as implied by the no-choice design (Figure 2). The test specimen is placed on a plastic net to avoid direct contact with the moistened layer of Plaster of Paris on the base of a cylindrical acrylic container. C. formosanus (150 workers and 15 soldiers) were added to each test container. Full details of the test method are given in JIS K1571-2004, and a diagram of the test method is provided in Figure 2. Test containers containing termites were maintained at 28 ± 2 °C and 80% RH for three weeks in the dark.
Calculation of Results
Percent mass loss of the individual wood specimen is calculated from the difference between the before and after weights according to the following equation: Percent mass loss = (W1 − W2)/W1 × 100, where W1 = weight of the oven-dried wood specimen before the test (g) and W2 = weight of the oven-dried wood specimen after the test (g). When the mean percent mass loss of the five untreated wood specimens is <15%, the test is not valid and should be repeated.
In addition to the percent mass loss, termite mortality was calculated according to the following equations: Termite mortality (%) for SNI = (number of dead workers)/200 × 100 Termite mortality (%) for JIS = (number of dead workers)/150 × 100 The feeding (wood consumption) rates are most useful for comparing test results obtained with wood species of different densities. In order to calculate the feeding rate, an assumption is made that termites die linearly with time. This statement was also confirmed by Su and LaFarge 1984 [5].
On the basis of the above assumption, feeding rates were calculated according to the following equation: Feeding rate (mg/termite/week) = (weight of wood eaten by termites)/(number of termites × test period in weeks).
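These three metrics are simple arithmetic on the recorded weights and counts; the sketch below implements them directly. The numeric values in the usage example are made-up illustrations, not data from Tables 1 or 2.

def percent_mass_loss(w1, w2):
    """Percent mass loss = (W1 - W2) / W1 * 100, with oven-dried weights in grams."""
    return (w1 - w2) / w1 * 100.0

def termite_mortality(dead_workers, workers_at_start):
    """Mortality (%) relative to the workers introduced (200 for SNI, 150 for JIS)."""
    return dead_workers / workers_at_start * 100.0

def feeding_rate(mass_eaten_mg, n_termites, weeks):
    """Feeding rate (mg/termite/week) following the equation above."""
    return mass_eaten_mg / (n_termites * weeks)

# Hypothetical SNI specimen: 2.10 g before, 1.57 g after a 4-week exposure.
loss = percent_mass_loss(2.10, 1.57)                    # ~25.2% mass loss
rate = feeding_rate((2.10 - 1.57) * 1000.0, 200, 4)     # ~0.66 mg/termite/week
mort = termite_mortality(58, 200)                       # 29% mortality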
Results of Laboratory Test According to SNI 01.7207-2006
Data on mean mass loss, mean termite mortality and wood consumption obtained from the laboratory test conducted according to the Indonesian Standard (SNI) are presented in Table 1. Mean mass losses ranged from 11.6% (A. mangium) to 42.1% (C. japonica). One of the requirements of the Japanese standard method is that the untreated control shall sustain a mean mass loss of more than 15% for the test to be considered valid. Given this requirement, the results of the Indonesian SNI test suggest that H. brasiliensis, P. falcataria and P. merkusii (mean mass losses of 21.0%, 24.5%, and 25.4%, respectively) could be suitable candidates for reference controls. A. mangium, with a mean mass loss of 11.6%, failed to qualify as a suitable reference control. It appears that under the conditions of the SNI test, A. mangium displayed some resistance to attack by C. formosanus, probably due to its inherent extractive content.
Results of Laboratory Test According to JIS K 1571-2004
Data on mean mass loss (%), mean mortality (%) and wood consumption obtained from the laboratory test conducted according to the Japanese Standard (JIS) are presented in Table 2. The mortality results obtained from the JIS and SNI termite tests showed similar trends.
Table 2. Data on mean mass loss, mortality and wood consumption rates at the conclusion of the JIS laboratory test.
SPATIAL AND TEMPORAL VARIABILITY OF WATER-FILLED CREVASSE HYDROLOGIC STATES ALONG THE SHEAR MARGINS OF JAKOBSHAVN ISBRAE, GREENLAND
The impact of melt water injection into ice streams over the Greenland Ice Sheet is not well understood. Water-filled crevasses along the shear margins of Jakobshavn Isbræ are known to fill and drain, resulting in weakening of the shear margins due to reduced basal friction. Seasonal variability in the hydrologic dynamics of these features has not been quantified. In this work, we characterize the spatial and temporal variability in the hydrological state (filled or drained) of these water-filled crevasse systems. A fusion of multi-sensor optical satellite imagery was used to examine hydrologic states from 2000 to 2015. The monthly distribution of crevasse systems observed as water filled is unimodal, with a peak number of filled days during the month of July at 329 days, while May has the least at 15. Over the study period the occurrence of drainage within a given season increases. Inter-seasonal drain frequencies over these systems ranged from 0 to 5. The frequency of multi-drainage events is correlated with warmer seasons and large strain rates. Over the study period, summer temperatures averaged from -1 to 2 °C and tensile strain rates have increased to as high as ~1.2 a⁻¹. Intermittent melt water input during hydrofracture drainage, responsible for transporting surface water to the bed, is largely facilitated by high local tensile stresses. Drainage due to fracture propagation may be increasingly modulated by ocean-induced calving dynamics for the lower elevation ponds. Water-filled crevasses could expand in extent and volume as temperatures increase, resulting in regional amplification of ice mass flux into the ice stream system.
Abstract
The impact of melt water injection into ice streams over the Greenland Ice Sheet is not well understood. Water-filled crevasses along the shear margins of Jakobshavn Isbrae are known to fill and drain, resulting in weakening of the shear margins due to reduced basal friction. Additionally, seasonal variability in the hydrologic dynamics of these features has not been quantified. In this work, we characterize the spatial and temporal variability in the hydrological state (filled or drained) of 7 groups of crevasse (CV) systems. A fusion of multi-sensor optical satellite imagery was used to examine hydrologic states during the melt season (May to September) from 2000 to 2015. The peak number of days in the monthly distribution of filled crevasse systems was during the month of July at 329 days, while May had the least at 15. Over the study period the occurrence of drainage within a given season increased. The number of drainages per crevasse group in a season ranged from 0 to 5. The frequency of multi-drainage events was correlated with large strain rates. Over the study period, average summer temperatures ranged from -1 to 2 °C, and tensile strain rates have increased to as high as ~1.2 a⁻¹. Drainage due to fracture propagation may be increasingly modulated by ocean-induced calving for lower elevation systems. Overall, water-filled crevasses could expand in extent and volume as temperatures increase, resulting in regional amplification of ice mass flux through Jakobshavn Isbrae.
Chapter 1 Introduction
Motivation and Prior Work
The Greenland Ice Sheet (GrIS) has experienced considerable mass loss over the last few decades (Alley, Clark, et al., 2005; Alley, Dupont, et al., 2005; Hanna et al. 2008; Joughin et al. 2004; Krabill et al. 2004; Luthcke et al. 2006), resulting in negative mass balance and a substantive contribution to sea level rise (Rignot et al. 2008; Shepherd et al. 2012; van den Broeke et al. 2009). Commensurate with these changes has been the documented impact of surface meltwater on ice sheet velocity during the summer within the ablation zone (Bartholomew et al. 2010; Hoffman et al. 2011; Joughin et al. 2008; Palmer et al. 2011; Shepherd et al. 2009; Sundal et al. 2011; van de Wal et al. 2008; Zwally et al. 2002), via supraglacial lakes, channels, and moulins largely beyond regions of fast flow (Box & Ski, 2007; Das et al. 2008; Echelmeyer et al. 1991; Howat et al. 2013; Joughin et al. 1996; Koenig et al. 2015; Lampkin 2011; McMillan et al. 2007; Selmes et al. 2011; Sneed & Hamilton 2007; Sundal et al. 2009; Tedesco & Steiner 2011). However, the presence of ponded water within regions of fast flow has received little attention. Lampkin et al. (2013) evaluated the spatial and temporal variability of water-filled crevasse filling and drainage dynamics during the 2007 melt season within the shear margins of Jakobshavn Isbrae. Crevasses at elevations less than ~500 m start to fill around June 6, with a total area of ~0.15 km². A peak total area of ~1.8 km² was reached in early July, with most groups still maintaining some water on August 9, 2007.
Water-filled crevasse systems filled and drained at rates as large as 0.03 km² d⁻¹ and 0.012 km² d⁻¹, respectively (Lampkin et al. 2013). These systems likely drain as a result of the vertical propagation of fractures to the bedrock via hydrofracture (Alley, Dupont, et al. 2005; Benn et al. 2007; Das et al. 2008; Krawczynski et al. 2009; Nye 1955, 1957; Van der Veen 1998, 1999). A fracture will propagate to the depth where the stress intensity factor is equal to the fracture toughness, and can even reach the bedrock if a region is under tension (Van der Veen 1998, 2007). Lampkin et al. (2013) establish that local strain rates are sufficient to drive fractures through to the bed for most crevasse (CV) groups. These features have the capacity to inject substantial volumes of water into the shear margins, equivalent to the largest supraglacial lakes found outside of the ice stream (Lampkin et al. 2013). However, we do not understand how these crevasse systems have changed over time, their specific impact on regional ice dynamics, or the mechanisms through which melt water from these features can be delivered to the bedrock.
Objective
This investigation performs the most comprehensive assessment of the spatial and temporal variability of water-filled crevasses along the shear margins of Jakobshavn Isbrae (Figure 1). We seek to characterize variability in drainage dynamics over water-filled crevasse systems at annual and interannual time scales using a fusion of multi-sensor data from several optical satellite systems acquired over a 16-year period from 2000 to 2015, each season during the period from May to September. We restrict our assessment to characterizing the 'hydrologic state' (filled or drained) of these 7 water-filled crevasse systems over the analysis period. We do not quantify changes in the volume or areal extent of these systems. We characterize temporal patterns in the drain state of water-filled crevasse systems responsible for hydrologic weakening of Jakobshavn Isbrae (Lampkin et al. 2013). We also explore first-order controls on observed drainage behavior. This work provides an important benchmark from which future changes in this component of supraglacial hydrology and the role of meltwater in fast flowing ice streams will be evaluated. The results from this study have implications for understanding processes driving mass discharge from marine-terminating outlet glaciers throughout the GrIS.
Satellite Imagery
Satellite imagery was acquired from several optical satellite systems spanning a range of performance capabilities. Cloud-free images from seven imaging systems were used to quantify the hydrologic state of each water-filled crevasse system. The presence of ponded water is easily identified in imagery acquired over the visible part of the electromagnetic spectrum because of the propensity of water to absorb incoming solar radiation more effectively than the surrounding ice and firn (Lampkin & VanderBerg 2011). Data from Landsat-7 ETM+, Landsat-8 OLI, Quickbird-1/2, Geo-Eye, Worldview-1/2, EO-1 ALI, SPOT-5 and ASTER were used in this analysis. The combination of data from these systems increases the frequency of sampling, resulting in enhanced temporal resolution, which offsets the impact of cloud cover. The number of cloud-free images varies for each satellite system, resulting in a non-periodic sampling interval; the overall temporal resolution was improved, though the sampling rate was inconsistent. For more details on imagery and data sources see Table 1.
Surface Temperature Data
Near-surface temperature data were acquired from the Greenland Climate Network (GC-Net) near Jakobshavn Isbrae from the Cooperative Institute for Research in Environmental Sciences (CIRES) (Steffen et al. 1996). Hourly 2 m surface temperatures sampled at the JAR 1 and Swiss Camp GC-Net stations were used to create a composite daily average temperature. This time series was used to evaluate patterns in the filling and drainage variability of water-filled crevasse systems.
Velocity Data
Velocity data used in this analysis were derived from the National Snow and Ice Data Center (NSIDC) MEaSUREs data archive on Greenland Ice Velocity: Select Glaciers InSAR (release v1.1) (Joughin et al. 2016) (http://nsidc.org/data/nsidc-0481/). Surface velocity fields were derived from TerraSAR-X (TSX) image pairs based on speckle tracking and interferometric techniques (Joughin et al. 2010, 2016). Available transverse and longitudinal component surface velocity grids were acquired from this archive over the Jakobshavn Isbrae study area from 2009 to 2015 during the months of May through August. The nominal spatial resolution of these grids is 100 m.
Elevation Data
Elevation data used in this analysis were derived from the NSIDC MEaSUREs Greenland Ice Mapping Project (GIMP) Digital Elevation Model (Howat et al. 2015a, 2015b) (http://nsidc.org/data/docs/measures/nsidc-0645/). This DEM was created using a combination of ASTER and SPOT-5 DEMs over the ice sheet periphery and margin south of 82.5° N, and Advanced Very High Resolution Radiometer (AVHRR) photoclinometry for the ice sheet interior and far north. Land elevations were calibrated to the 2003-2009 average ICESat Geoscience Laser Altimeter System (GLAS) elevations (Howat et al. 2015a).
Calving front positions were delineated manually through visual interpretation and digitization of satellite imagery on an annual basis. ERS and ENVISAT Synthetic Aperture Radar (SAR) imagery, and optical data from the Landsat 5, 7, and 8 platforms, were used to build the terminus location archive. Terminus positions were assessed for 22 of the GrIS's 28 major marine-terminating outlet glaciers. We acquired calving front location data over Jakobshavn Isbrae from 2002 to 2015.
Determination of Hydrologic State of Water-filled Crevasses
The hydrologic state of each water-filled crevasse system (ψ) was quantified through visual interpretation of imagery. The occurrence of water within a water-filled crevasse group indicates a filled (ψ=1) hydrologic state, while its absence defines a drained (ψ=0) state. A crevasse group was assumed to remain filled until a subsequent image indicated that the group was devoid of water. We assumed a given crevasse group remained water-filled during intervening periods when conditions prevented direct observation. This scenario can occur when an initial cloud-free image displays water in a system and is followed by a period of cloud cover or a lack of available images. If the subsequent image after such a period no longer showed water present, we assumed drainage occurred during the intervening interval. In general, once a crevasse was observed to be filled, we assumed it remained filled until we either observed it to drain or the study period for that year ended. We did not document the areal extent of ponds and did not record partial drainage events. If water was present at all, regardless of pond size, we designated the pond as 'filled'; otherwise it was classified as 'drained'.
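The carry-forward rule described above can be expressed compactly. The sketch below is a minimal illustration, assuming hypothetical observation dates and binary water flags as inputs; the variable names and data layout are ours, not those of the original archive:

```python
from datetime import date

def interpolate_states(observations):
    """Carry a 'filled' (1) state forward between observations.

    observations: list of (date, state) tuples from cloud-free images,
    where state is 1 if any water is visible and 0 otherwise.
    Returns a list of (start_date, end_date, state) intervals.
    """
    obs = sorted(observations)
    intervals = []
    for (d0, s0), (d1, _s1) in zip(obs[:-1], obs[1:]):
        # The state seen at d0 is assumed to persist until the next image at d1.
        intervals.append((d0, d1, s0))
    return intervals

# Hypothetical example: filled in June, observed drained in late July.
obs = [(date(2010, 6, 5), 1), (date(2010, 7, 2), 1), (date(2010, 7, 28), 0)]
for start, end, state in interpolate_states(obs):
    print(start, "->", end, "filled" if state else "drained")
```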
Derivation of Strain Rate
The surface velocity field was decomposed into components consisting of a vector (u) oriented in the prevailing direction of ice flow (x) within the main trough of the ice stream, and an orthogonal component (v) perpendicular to ice flow (y). Horizontal strain rates were estimated from the measured surface velocity through differentiation of the component velocity grids, where the strain rate tensor is given by ε̇_ij = (1/2)(∂u_i/∂x_j + ∂u_j/∂x_i). The strain rate component fields were used to calculate the magnitudes of the principal strains in the horizontal plane, where the minimum (ε̇_1) and maximum (ε̇_3) tensile strain rates are given by ε̇_{1,3} = (ε̇_xx + ε̇_yy)/2 ∓ [((ε̇_xx − ε̇_yy)/2)² + ε̇_xy²]^(1/2). In this analysis we were specifically interested in the ε̇_3 field, as we wanted to examine the spatial and temporal variability in the tensile strain field, which controls fracture propagation within the shear margins. Given this, we did not compute the angle between ε̇_1 and ε̇_3. All grids of estimated ε̇_3, based on velocity image pairs between May and August of each season, were averaged. The seasonal average values were sampled within the maximum areal extent of the water-filled crevasse systems delineated from Landsat ETM+ imagery during the 2007 melt season (Lampkin et al., 2013) and spatially averaged.
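As a rough illustration of how ε̇_3 can be obtained from gridded velocities, the sketch below uses numpy to differentiate hypothetical u and v grids (100 m posting, as in the MEaSUREs product) and form the maximum principal strain rate; the array contents and grid spacing are assumptions for illustration, not the original processing code:

```python
import numpy as np

def max_principal_strain_rate(u, v, dx=100.0, dy=100.0):
    """Maximum principal (tensile) strain rate from velocity component grids.

    u, v: 2-D arrays of along-flow and across-flow velocity (m/yr),
          sampled on a regular grid with spacing dx, dy (m).
    Returns a 2-D array of e3 (1/yr).
    """
    # np.gradient returns derivatives along axis 0 (rows, y) and axis 1 (cols, x).
    du_dy, du_dx = np.gradient(u, dy, dx)
    dv_dy, dv_dx = np.gradient(v, dy, dx)

    exx = du_dx
    eyy = dv_dy
    exy = 0.5 * (du_dy + dv_dx)

    mean = 0.5 * (exx + eyy)
    radius = np.sqrt((0.5 * (exx - eyy)) ** 2 + exy ** 2)
    return mean + radius  # e3; the minimum principal value is mean - radius

# Hypothetical 5 km x 5 km velocity grids (m/yr) with simple shear across y.
y, x = np.mgrid[0:5000:100.0, 0:5000:100.0]
u = 2000.0 + 0.2 * y   # along-flow speed increasing across the margin
v = np.zeros_like(u)
print(max_principal_strain_rate(u, v).mean())  # ~0.1 a^-1 for this toy field
```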
Logistic Regression Modeling
We implemented a logistic (logit) regression model to assess temporal changes in the hydrologic state of water-filled crevasses over the analysis period. The use of logit regression was appropriate because our response variable (yi) is binary (0 or 1). Here yi is considered the realization of a random variable Yi that takes the values 1 and 0 with probabilities πi and 1 − πi, respectively. The distribution of Yi is binomial, and the observations yi are assumed independent. Given this, we defined a model in which a transform of πi is a linear function of the covariates, with β a vector of regression coefficients. The ordinary least squares formulation was adapted to a binary dependent variable through the logit transform, expressing the probabilities as odds, πi/(1 − πi), and modeling the log-odds (Eq. (7)) as logit(πi) = log[πi/(1 − πi)] = xi'β. Fitting the model in Eq. (7) by maximum likelihood estimation yields β, which represents the change in the logit of the probability associated with a unit change in the independent variable, all else held constant. Solving for the probabilities gives the transformed model πi = exp(xi'β)/[1 + exp(xi'β)]. Our model was built such that the dependent variable was the observed hydrologic state and the independent variable was time (tn), using only days on which a direct observation was recorded (non-interpolated). In this analysis we were primarily interested in the change in the probability of filled states over time, which was assessed through hypothesis testing on the modeled β coefficient. We employed the Wald test of the null hypothesis (H0: β = 0) at the 5% significance level, calculating the statistic z = (β̂/SE(β̂))², which follows a χ² distribution when H0 is true, where SE(β̂) is the standard error of the estimated coefficient.
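A minimal sketch of this fit in Python is given below, assuming a hypothetical record of observation days and binary hydrologic states for one crevasse group; statsmodels reports the coefficient, its standard error, and the Wald p-value directly:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical direct observations for one crevasse group:
# elapsed time (days since the start of the study) and observed state (1=filled, 0=drained).
t = np.array([10, 45, 90, 160, 220, 300, 410, 520, 640, 780], dtype=float)
psi = np.array([1, 1, 1, 0, 1, 0, 1, 0, 0, 0])

X = sm.add_constant(t)            # intercept + time covariate
model = sm.Logit(psi, X).fit(disp=0)

beta = model.params[1]            # change in log-odds per day
se = model.bse[1]                 # standard error of beta
wald_z2 = (beta / se) ** 2        # Wald statistic, ~chi-square(1) under H0: beta = 0
p_value = model.pvalues[1]

print(f"beta = {beta:.4f}, Wald z^2 = {wald_z2:.2f}, p = {p_value:.3f}")
print("P(filled) at t=400 days:", model.predict([1.0, 400.0])[0])
```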
Spatial and Temporal Variability of Hydrologic State
The availability of cloud-free images varied across the study period, with some years having more samples than others. The total number of samples per season increased with time (Figure 2a). The maximum number of scenes available for any given month in the study period was 87, in June 2010. There were 17 months in the study period for which cloud-free imagery was not available (Figure 2a). The total number of images across the seven CV groups ranged from 163 to 228 scenes, and the number of cloud-free images varied for each crevasse system throughout the study period. The month of May had the fewest clear scenes at 55, while July had the most at 433 (Figure 2b). The monthly distribution of filled crevasses over the 16 year study period was unimodal, with a peak in the number of filled days during July at 329 days, while May had the fewest at 15 days (Figure 3). All CV groups had a minimum of 1 day between observations. The maximum number of days between observations was 140 for CV7, 107 for CV2 and CV3, 108 for CV4, CV5, and CV6, and 111 for CV1. The total percentage of days filled (interpolated) over the study period ranged from roughly 36-55% (Figure 5). CV2 was filled for the longest duration, while CV1 was filled for the shortest duration over the entire study period.
Drain Frequency
Throughout the study period, water-filled crevasse systems were observed to fill and drain within each season (Figure 3). Some systems were observed to refill and drain multiple times during a season; this will be referred to as a multi-drain event for the remainder of the paper. Over the 16 year study period there were 9 seasons in which at least one crevasse group exhibited a multi-drain event. In the 2011 season 13 drainage events were observed, with CV1 draining 5 times, both of which were maximums for the study. In 2003 we observed only 5 drainage events, indicating that 2 crevasse groups were never observed to hold water. Temporal patterns in seasonal hydrologic state over all crevasse groups were also examined. A time series of the cumulative days over which crevasse systems were filled with water (Nfill) demonstrated five distinctive multi-year patterns (Figure 6a). These patterns generally spanned 3-4 years and were consistent across all groups.
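Counting drainage events per season from the interpolated state record amounts to counting filled-to-drained transitions; a multi-drain season is simply one with more than one such transition. The snippet below is a schematic illustration using made-up state sequences:

```python
def count_drain_events(states):
    """Number of filled(1) -> drained(0) transitions in a seasonal state sequence."""
    return sum(1 for a, b in zip(states[:-1], states[1:]) if a == 1 and b == 0)

# Hypothetical daily state sequences for one crevasse group in two seasons.
season_2003 = [0, 0, 1, 1, 1, 1, 0, 0, 0]          # single drainage event
season_2011 = [0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0]    # multi-drain season (3 events)

for year, seq in [(2003, season_2003), (2011, season_2011)]:
    n = count_drain_events(seq)
    print(year, n, "multi-drain" if n > 1 else "single/no drain")
```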
The mean number of filled days per month was also evaluated (Figure 6b) and varied across the seven crevasse groups, although all seven groups maintained a similar range of filled days per month over the study period. CV1 had the lowest mean at 11.21 days, while CV2 had the highest at 16.75 filled days per month. Mean filled days across all groups were within 1σ of each other, indicating that most groups did not demonstrate significant differences in the duration of filled states.
Relationships between Drain Frequency and Near-Surface Temperature
Relationships between drain frequency and near-surface atmospheric temperature were explored (Figure 7). Average summer temperature for each season ranged between −1 and 2 °C. From 2000 to 2006, average summer temperatures varied within only about a 1 °C range, and water-filled crevasse groups exhibited only single drainage events. After 2006, temperatures increased to as high as 2 °C and varied over a larger range, with differences as large as ~3 °C between successive seasons. During this period, the number of drainage events per season increased across all CV groups (except CV7), with some year-to-year variation in drainage count.
Relationships between Strain Rate and Drainage
Variations in the spatially averaged maximum tensile strain rate (ε̇_3) can be indicative of conditions that drive the fracture propagation responsible for drainage of water-filled crevasses. Unfortunately, we were not able to correlate changes in strain rate directly with observed drain occurrence, because velocity data were not available for every crevasse group in every season. However, we were still able to identify some relationships between strain rate and drainage. There were only 2 multi-drain events across all crevasse groups before 2009 (Figure 7); from 2009-2015 there were 20. Generally, strain rates increased over most groups from 2009 to 2015. CV4 and CV5 showed the largest increases in strain over this period, of ~1.2 a⁻¹ and 0.9 a⁻¹ respectively. The range in tensile strain rate magnitude varied across the water-filled crevasse groups: CV1, 4, and 5 demonstrated the largest magnitudes, while CV3, 6, and 7 had the lowest. Specifically, the CV1 group experienced an increase in multi-drain events in 2010 and 2011 with a commensurate increase in tensile strain, which reached a peak of ~1.2 a⁻¹ with no further changes afterwards. Interestingly, CV2 had only one season with multiple drainage events (2011). From 2009 to 2015, strain rates over CV2 increased slightly from 2009 to 2010 but decreased by 0.1 a⁻¹ in 2011; after 2011, strain rates rose dramatically but CV2 returned to draining only once per season. The CV3 group experienced four multiple-drainage seasons throughout the study period. Strain rates for CV3 were only available for 2009 to 2010 and 2012 to 2013, with values ranging from ~0.06 to 0.1 a⁻¹; over these four seasons, CV3 had three multiple-drainage seasons, with 2010 being the exception. CV4 showed four seasons in which multiple drainage events occurred. Strain data for this group were available from 2009-2013, showing strain increasing from ~0.8 to 2 a⁻¹; during this period, CV4 showed an increase in the occurrence of multiple drainage events. CV5 was similar and also experienced an increase in multiple drainage events. CV6 was observed to experience four multi-drain years.
CV6 and CV7 had limited data available from which to estimate tensile strain rates; therefore only the 2009, 2010, 2012, and 2013 seasons are shown. Strain rates over CV6 ranged from ~0.12 to 0.18 a⁻¹, occurring during a period with a higher frequency of multiple drainage events than the period before 2009. Lastly, CV7 had only one multiple-drainage event throughout the entire study period, during the 2009 season, which did not correlate with the observed periods of strain increase.
Relationship between Terminus Location and Drain Occurrence
Fluctuations in local strain rates in the vicinity of each CV group could be induced by downstream calving events at the glacier terminus. We tracked seasonal and inter-seasonal changes in calving front location from 2002 to 2015 (Figure 8). The lower panel of Figure 8 displays the inter-seasonal changes in front location (ΔDf), where markers connected by a bar indicate the effective period over which the terminus retreated based on satellite observations. The top panel shows the magnitude of observed frontal change (ΔDCV), displayed in 0.5 km categories, corresponding to the drainage events observed in our multi-sensor archive for each crevasse group.
Over this period the terminus of Jakobshavn Isbrae retreated inland substantially (Figure 8). We tracked the mean seasonal distance (D̄CV1) from the lowest elevation water-filled crevasse system (CV1) to the terminus. From 2002 to 2013 the terminus retreated ~8 km towards CV1, and by 2015 the glacier front was ~2 km from CV1. Inter-seasonal changes in front location (ΔDf) showed large variations over the analysis period; the shorter the interval over which the front locations were observed, the smaller the magnitude of front movement. The 2004 and 2008 summers showed the largest magnitudes of front retreat, at ~2.5 km. Generally, the magnitude of change in frontal position decreased over the analysis period.
Additionally, we examined relationships between front changes and the timing of drainage from each CV group over the study period (Figure 8, top panel). We documented the magnitude of observed frontal change (ΔDCV) corresponding to the time period over which drainage was observed to occur from the water-filled crevasse groups.
During the 2003 season, all CV groups drained during a period when 1.5 < ΔDCV < 2 km.
In 2004, the CV1, 2, and 3 drainage events corresponded to 0.5 < ΔDCV < 1 km, while the others corresponded to 0 < ΔDCV < 0.5 km. The 2005, 2006, and 2007 seasons mainly demonstrated ΔDCV values between 1 and 1.5 km for most CV groups. From 2009 to 2015, most drain occurrences tended to correspond to the smallest range in ΔDCV (0 to 0.5 km), commensurate with an increase in multiple drainage events (boxes) (Figure 8, top panel).
Logit Model
A logistic regression analysis was conducted to evaluate the temporal variability in the hydrologic state of water-filled crevasses. The time series for each CV system was fit with a logit model, providing an estimate of the probability of the crevasse group being water-filled, P(ψ=1). We were strictly concerned with evaluating the trend in the occurrence of ψ, not with using the models to predict or interpolate hydrologic states. Generally, CV1, 5, and 6 showed a decrease in P(ψ=1) with time, while CV3 and 7 increased and CV4 was invariant (Figure 9). Significance tests on the β̂ coefficients indicated that the trends in the temporal change in P(ψ=1) were largely insignificant (p > 0.1), not sufficient to reject H0 for most groups; the exception is CV1, which was weakly significant (Table 2).

Chapter 5. Discussion
Impact of Sampling Bias
Cloud cover and varying temporal resolution have a significant impact on how we interpret the drain dynamics of the water-filled crevasses using our archive. These issues result in variation in the number of observations over the crevasse groups for each season. The number of samples used to characterize the hydrologic state of water-filled crevasses was smaller before 2009 than after, raising the possibility that drainage events earlier in the study period were missed. Furthermore, the lack of daily sampling over all seasons means our estimates of drainage dates may not be exact. Drainage events were identified from the observed absence of water over a given CV group.
Images were not necessarily obtained on the same day a drainage occurred. However, in most cases we were able to bound the time interval over which a particular CV group drained, since most observations occurred at an interval of 10 days or less. Additional data from new sources, such as visible imagery from CubeSats, will be considered to improve sampling resolution in future studies.
Factors Influencing Drain Behavior
Everett et al. (2016) hypothesize that drainage and filling downstream of Helheim Glacier may be the result of a high pressure wave passing down glacier following a lake drainage.
We have not observed coordination in drain and fill behavior among adjacent pond groups. There is no relationship between supraglacial lake drainage and water-filled crevasse drainage within the shear margins of Jakobshavn, as the closest lake to many of our CV groups is more than 15 km away in the extra-marginal ice field. Lastly, it is not feasible for drainage of crevasse groups within the northern margin to impact the filling and drainage behavior of those within the southern margin, and vice versa: the margins are separated by a deep trough, with no evidence for connected subglacial hydrology transverse to the main direction of ice flow.
For our study we considered that seasonal variability in the surface temperatures responsible for melt production and runoff may be an important driver of the observed patterns in hydrologic state. During seasons with relatively warmer temperatures, water-filled crevasses can undergo multiple drain and refill cycles; this process is facilitated by the capacity for short-term, localized melt production and runoff, which can rapidly refill crevasses after a drainage event. In cooler seasons, by contrast, there would be insufficient energy to drive rapid refilling after an initial drainage event.
Surface temperatures have demonstrated a positive trend of 0.47 ± 0.55 °C per decade over central-west Greenland (Hall et al. 2013). Specifically, average near-surface temperatures near Jakobshavn over the study period varied between −1 and 2 °C. High frequencies of multi-drainage events corresponded to warmer seasons, while cooler periods had few if any multi-drainage events. It is not clear, however, that the occurrence of filled states over time is proportional to changes in regional temperature. This is consistent with the logit model results: if temperature controlled the propensity to be filled, we might expect statistically significant trends in the probability of filled states commensurate with the increase in temperatures. The insignificance of the trend in the probability of filled states, regardless of its sign, suggests that temperature is not a control on whether a given crevasse group will be more or less likely to be filled. It is likely that temperature is more influential in determining the seasonal areal extent of these water-filled crevasse groups. Since we did not assess the areal extent of these systems in this analysis, additional work is necessary to evaluate these relationships.
The water-filled crevasses examined in this analysis lie in a field of closely-spaced fractures. An air-filled fracture in a field of other crevasses maintains a lower net stress intensity factor at the fracture tip, which requires a larger tensile stress to propagate the fracture to the bed (van der Veen 1998). The mean distance between fractures within the boundaries of the CV groups ranges from ~78 m (CV7) to 110 m (CV3), which corresponds to stress intensity factors between ~0.5 and 0.7 MPa m^1/2 (van der Veen 1998). These values exceed the fracture toughness of ice (0.1-0.4 MPa m^1/2) (van der Veen 1998), but only for the case where the fractures are water-filled. A water-filled crevasse can readily penetrate to the bed because the density of water is greater than that of ice, such that if the fracture remains water-filled, the resulting hydrostatic pressure is sufficient to overcome the lithostatic pressure (van der Veen 1998, 2007). Therefore, the filling rate is the most important factor controlling fracture propagation (van der Veen 2007). Filling rates estimated during the 2007 melt season ranged from 0.04 to 1.25 m h⁻¹ (Lampkin et al. 2013). Given these rates, for a tensile stress of ~300 kPa, a single fracture could penetrate between ~400 and 1100 m, which is equivalent to the ice thickness in the vicinity of the water-filled crevasses. Meltwater production and runoff responsible for filling crevasses are variable both intra- and inter-seasonally; this would induce intermittent hydrofracture crack propagation and may not allow a fracture to penetrate to the bed.
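The sensitivity of penetration depth to water supply can also be illustrated with the simpler Nye zero-stress approximation, rather than the full LEFM treatment of van der Veen cited above; the sketch below uses that approximation with assumed densities and the ~300 kPa tensile stress discussed in the text, so the numbers are illustrative only:

```python
RHO_ICE = 917.0     # kg m^-3, assumed
RHO_WATER = 1000.0  # kg m^-3
G = 9.81            # m s^-2

def crevasse_depth_nye(tensile_stress_pa, water_depth_m=0.0):
    """Zero-stress (Nye) estimate of crevasse penetration depth (m).

    Dry crevasse: depth where the tensile stress balances the ice overburden.
    Water-filled: each metre of water column adds (rho_w / rho_i) metres of depth.
    """
    dry = tensile_stress_pa / (RHO_ICE * G)
    return dry + (RHO_WATER / RHO_ICE) * water_depth_m

stress = 300e3  # Pa
print("dry crevasse:", round(crevasse_depth_nye(stress), 1), "m")
for dw in (100, 300, 600):
    print(f"water depth {dw} m:", round(crevasse_depth_nye(stress, dw), 1), "m")
```

Even under this simplified view, deep penetration requires a sustained water supply, consistent with the role of filling rate emphasized above.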
Under these circumstances, delivery of meltwater to the bed would only be possible if local strain rates are sufficiently large to overcome the reduced stress intensity in the closely-spaced crevasse fields. Estimated strain rates over the water-filled crevasse systems during the 2007 season were sufficiently large (Lampkin et al. 2013).
In this analysis, maximum tensile strain rates increased over the last 16 years and were correlated with an increase in multi-drain events. These events are likely driven by both an increase in melt production and an increase in local tensile stress. This is consistent with laser altimetry estimates of surface roughness, which show substantial variability within the shear margins relative to the main trough from 2003 to 2009 (Herzfeld et al. 2014), although there was no expansion of roughness within the shear margins during this period of rapid thinning (10-15 m a⁻¹) (Herzfeld et al. 2014).
Terminus Perturbations and Crevasse Drainage
Ocean-induced terminal perturbations have been implicated in the observed acceleration and thinning in the lower trunk of the ice stream (Aschwanden et al. 2016; Holland et al. 2008; Joughin et al. 2008). Bondzio et al. (2015) assert that ice acceleration within the main trough of Jakobshavn increases strain along the shear margins while amplifying rheological softening, so that the fracture toughness of the ice would be reduced enough to readily facilitate fracture propagation. Bondzio et al. (2017) established that the impact of calving is limited to within 10 km of the terminus. The impact of terminal perturbations on strain rates in the vicinity of the water-filled crevasse groups was therefore negligible over much of the analysis period. Only during recent seasons has the front reached a position such that the lower elevation systems (CV1, 2, and 4) are within the 10-15 km range (Joughin et al. 2012) over which longitudinal coupling from calving events could be influential. This may change as the terminus of Jakobshavn continues its rapid retreat.
Chapter 6. Conclusions
Controls on drainage and filling comprise a complex set of interacting factors that include surface melt production and runoff, and local and regional ice dynamics driving fracture propagation.
Crevasse systems within the Jakobshavn shear margins fill and drain annually; the frequency of the occurrence of water alone is therefore not sensitive to the observed increases in temperature. Pond extent and depth, however, are likely to be sensitive to regional warming over the study period, but were not examined in this work. Regardless of the limitations of our archive, each water-filled crevasse system experienced considerable variability in filling as a result of local variability in meltwater production, accumulation, and storage capacity. The size and configuration of the subglacial morphology is probably a first-order control on storage capacity and indirectly influences local strain rates, fracture propagation, and drainage. More work is required to assess the factors that control the magnitude and rate of filling, such as components of the surface energy balance (i.e. solar insolation and turbulent heat flux) or the potential for localized inter-fracture percolation.
The major findings of this study indicate that water-filled crevasse groups demonstrate differences in both the spatial and the temporal variability of hydrologic state, reflecting local conditions in melt production, runoff, and drain behavior. Drainage frequency is not sensitive to the increasing temperatures over the study period. The frequency of drainage increases with increasing strain rates, though the factors driving inter-seasonal changes in strain along the margins have not been evaluated. The impact of calving on drain behavior was negligible until the end of the study period, when the lower elevation crevasse systems came within ~10 km of the terminus; this may become an important influence on water-filled crevasse systems in the future as the terminus continues to retreat. We also documented an increased occurrence of multiple inter-seasonal drainage events, which may be related to the dynamics of the subglacial hydrologic environment. Additional work is required to understand how englacial and subglacial systems impact drain propensity.
Enhanced mass flux from Jakobshavn Isbrae over the last couple of decades is driven by a combination of factors. In particular, hydrologic weakening of the shear margins could increasingly become a major factor both in enhancing extra-marginal ice flow and in amplifying the impact of ocean-induced terminal perturbations. Current trends and projections indicate a warmer Arctic. Under prognostic scenarios, we could expect expansion of ponded water in the shear margins both to higher elevations and in areal extent. Large volumes of water would become available for infiltration, driving regional changes in ice dynamics. Hydrologic weakening of the shear margins could play a critical role in the future stability of not only
Figure 1. Study area showing the location of water-filled crevasse systems (CV) (white) within the shear margins of Jakobshavn Isbrae, west-central Greenland. The spatial extent is a composite based on observed areal extent from cloud-free Landsat-7 panchromatic imagery only, from 2000-2013. Contours of elevation in meters are superimposed.
Figure 2. (a) Time series of total monthly cloud-free optical images from various sensors (Table 1) over the study period from 2000 to 2015. (b) Cumulative number of cloud-free optical images over the 16 year study period for each water-filled crevasse group per month during the ablation season (May to September).
Figure 3. Cumulative monthly distribution of the total number of days over the 16 year study period on which each water-filled crevasse system was observed from optical imagery to occupy the 'filled' hydrologic state.
Figure 4: Histogram showing the days between subsequent observations for each CV group.
Figure 5: Cumulative percentage of days that each crevasse group was designated the filled state (interpolated) for the entire study period.
Figure 6. (a) Time series of cumulative monthly days that each crevasse group was filled over the study period. (b) Histogram showing the distribution in duration of the filled hydrologic state (days) over the entire study period for each crevasse system. The plot shows the mean (marker), minimum and maximum (whiskers), 1σ (box edges), and the 50th percentile (line).
Figure 7. Time series of the number of drainage events per year per crevasse group (ζ, gray bars), with the average temperature (<T>) derived from Greenland Climate Network (GC-Net) 2 m surface temperatures (°C) sampled at the JAR 1 and Swiss Camp stations superimposed on the lower panel. Also shown is the mean maximum tensile strain rate for each CV group estimated from the measured surface velocity.
Figure 8. Relationships between the temporal evolution of Jakobshavn Isbrae terminal retreat and drain occurrence over each water-filled crevasse group from 2002 to 2015. The lower panel depicts the seasonal change in distance between the lowest elevation crevasse system (CV1) and the terminus (red line) (D̄CV1), together with the inter-seasonal changes in front location (ΔDf), where markers connected by a bar indicate the effective period over which the terminus retreated based on satellite observations. The top panel shows the magnitude of observed frontal change (ΔDCV), displayed in 0.5 km categories, corresponding to the drainage events observed in our multi-sensor archive for each crevasse group.
Figure 9. Logit regression analysis of the probability of the filled state P(ψ=1) vs. time for all crevasse groups over the study period. Circles are hydrologic states such that ψ=1 is filled and ψ=0 is drained. The solid line is the logit model fit.
Table 2. Logit regression model parameters for all water-filled crevasse groups.
|
2018-07-25T11:02:06.927Z
|
2017-05-24T00:00:00.000
|
{
"year": 2017,
"sha1": "7de61f69267f2d06f42b1070ebf3e79ae3874386",
"oa_license": "CCBY",
"oa_url": "https://www.the-cryosphere-discuss.net/tc-2017-86/tc-2017-86.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "cc0eceb96ad70541983cf31770f6493f4e18976d",
"s2fieldsofstudy": [
"Environmental Science",
"Geology"
],
"extfieldsofstudy": [
"Geology"
]
}
|
118571087
|
pes2o/s2orc
|
v3-fos-license
|
Auxiliary-boson and DMFT studies of bond ordering instabilities of t-J-V models on the square lattice
We examine the influence of strong on-site Coulomb interactions on instabilities of the metallic state on the square lattice to general forms of bond order. The Mott correlations are accounted for by the auxiliary-boson method, and by dynamical mean field theory calculations, complementing our recent work (arXiv:1402.4807) using Gutzwiller-projected variational wavefunctions. By the present methods, we find that the on-site Mott correlations do not significantly modify the structure of the bond ordering instabilities which preserve time-reversal symmetry, but they do enhance the instability towards time-reversal symmetry breaking "staggered flux" states.
I. INTRODUCTION
In a recent paper,1 we examined instabilities of t-J-V models on the square lattice to arbitrary orderings in the spin-singlet, particle-hole channel, and accounted for the on-site Coulomb interactions by a variational wavefunction which projected out sites with double occupancy. In the present paper we will examine essentially the same models, but will account for the on-site interactions by the auxiliary-boson method (also called the "slave-boson" method) and dynamical mean field theory (DMFT) calculations. As in the previous work,1 our analysis allows for charged stripes,2 checkerboard and bond density waves,3-5 Ising-nematic order,6-8 staggered flux states,14-18 and states with spontaneous currents.10 In our works,1,11,12 ordering wavevectors associated with hot spots on the Fermi surface play a special role (see Fig. 1). In Section II, we will introduce the instabilities in the simpler context of a 'generalized RPA' analysis of a model which includes an on-site repulsion, U, between the electrons. Our main results are in Section III, where we will take the limit U → ∞ using the large-N limit of a model with SU(2N) spin rotation symmetry. In Section IV we perform an alternative calculation where the effect of the large repulsion is included via a DMFT self-energy.
II. RPA ANALYSIS
This section carries out a computation similar to that in Ref. 11, but with a more general Hamiltonian and a slightly different formalism. We consider electrons c_iα on the sites, i, of a square lattice, with α = ↑, ↓ the spin index; repeated spin indices, α, β, ..., are implicitly summed over. We work with the Hamiltonian in Eq. (1), in which σ^a are the Pauli matrices with a = x, y, z. We consider first, second, and third neighbor hopping t_1, t_2, t_3, and, similarly, first, second, and third neighbor exchange and Coulomb couplings. We now introduce our generalized order parameters, P_Q(k), at wavevector Q in the particle-hole channel, by parameterizing the bond expectation values as in Eq. (2). A conventional charge density wave at wavevector Q has P_Q(k) independent of k, so that Eq. (2) is non-zero only for i = j. However, optimization of the bond energies requires that we allow P_Q(k) to be an arbitrary function of k in the first Brillouin zone. Here we find it useful to expand P_Q(k) in terms of a set of orthonormal basis functions φ_ℓ(k), Eq. (3), and the coefficients P_ℓ(Q) become our order parameters. A key step is to rewrite the interaction terms of Eq. (1) in the form of Eq. (4), where the φ_ℓ(k) are the 13 orthonormal basis functions in Table I, and J_ℓ and V_ℓ are the corresponding couplings shown in Table I. The appearance of a finite set of basis functions in Eq. (4) is the reason we are able to truncate the expansion in Eq. (3).
We can now use the basis φ_ℓ(k) to decompose the Bethe-Salpeter equation in the spin-singlet, particle-hole channel, as shown in Fig. 2. The eigenmodes of the resulting T-matrix, T_ℓm(Q), determine the structure of the ordering P_ℓ(Q) at the wavevector Q.
Summing ladder diagrams for both the direct and the exchange interactions, we obtain the T-matrix of Eq. (5), which involves the direct interaction and Π_ℓm(Q), a 13 × 13 matrix giving the particle-hole polarizability (Eq. (7)) computed with the single-particle dispersion ε(k). We choose the dispersion ε(k) to have hot spots which intersect the magnetic Brillouin zone boundary, as shown in Fig. 1. The hot spots for this dispersion are separated by the wavevectors shown, with Q_0 = 4π/11. Note that Q_0 is simply a geometric property of the Fermi surface and plays no special role in the Hamiltonian.
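The specific form of ε(k) is given in the paper's equations; for illustration only, the snippet below assumes a generic square-lattice dispersion with first, second, and third neighbor hopping and locates hot spots numerically as the points where the Fermi surface crosses the magnetic Brillouin zone boundary (the hopping values and chemical potential are placeholders, not the parameters used in the paper):

```python
import numpy as np

def eps(kx, ky, t1=1.0, t2=-0.3, t3=0.1, mu=-1.0):
    """Generic first/second/third-neighbor tight-binding dispersion on the square lattice."""
    return (-2 * t1 * (np.cos(kx) + np.cos(ky))
            - 4 * t2 * np.cos(kx) * np.cos(ky)
            - 2 * t3 * (np.cos(2 * kx) + np.cos(2 * ky))
            - mu)

# Hot spots: points on the magnetic zone boundary |kx| + |ky| = pi where eps = 0.
# Parameterize the boundary segment ky = pi - kx for kx in (0, pi) and find sign changes.
kx = np.linspace(1e-3, np.pi - 1e-3, 20001)
e = eps(kx, np.pi - kx)
crossings = kx[np.where(np.diff(np.sign(e)) != 0)[0]]
for k in crossings:
    print(f"hot spot near k = ({k:.3f}, {np.pi - k:.3f})")
```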
By rearranging terms in Eq. (5), we see that the charge-ordering instability is determined by the lowest eigenvalues, λ_Q, of the matrix in Eq. (9). In Fig. 3 we consider a case with vanishing on-site interactions, as in Ref. 11. As found previously, the lowest eigenvalue is at Q ≈ (Q_0, Q_0) and the corresponding eigenvector is purely d-wave.
We turn on Coulomb interactions in Fig. 4, while keeping other parameters the same.
The main change is that the eigenvalues near Q = (π, π) become significantly smaller. The Fermi surface is as in Fig. 1, with the interaction couplings as given in Fig. 4. Minimized over Q, the lowest eigenvalue is at Q = (0.38, 0.38)π; this is very close to the value Q_0 = 0.36π determined from the Fermi surface in Fig. 1. The eigenvector at Q = (0.38, 0.38)π is P_Q(k) = 0.9996 (cos(k_x) − cos(k_y)) + 0.0275 (cos(2k_x) − cos(2k_y)).
The eigenvectors in this region of Q break time-reversal,11 and the eigenvector at Q = (π, π) is P_Q(k) = sin(k_x) − sin(k_y). Some intuition about which wavevector is favored, with the corresponding eigenvector, can be gained from plots of the relevant integrand in the instability equation.
Thus the largest component at this Q remains a d-wave on the nearest-neighbor bonds, but there are also small eigenvalues, only slightly larger, near Q = (π, π) with eigenvectors which break time-reversal.
now there is a significant on-site density wave.
There is also a local minimum in Fig. 4 at Q = (π, π). Here the eigenvector is that given in Eq. (12). This represents the "staggered flux" state of Refs. 14-18. This state was called a "d-density wave" in Ref. 16, which is an unfortunate terminology from our perspective. With our identification of the bond expectation values in Eq. (2), this state is actually a p-density wave,11 as is evident from Eq. (12).
In the auxiliary-boson approach the electron operator is written as c_iα = b†_i f_iα, where b_i is a canonical boson and f_iα is a canonical fermion, along with the constraint b†_i b_i + f†_iα f_iα = 1 on each site. Here we allow the index α = 1 ... 2N, so that the model has SU(2N) symmetry; the constraint can then be systematically implemented in the large-N limit.14,19 We write the SU(2N) Lagrangian as in Eq. (15), where we have decoupled the exchange interaction by a Hubbard-Stratonovich variable P_ij residing on the bonds, and absorbed a contribution of −J_ij/4 into the definition of V_ij. Also, we have written the fermion hopping as t^0 because it undergoes a renormalization before determining the fermion dispersion.
The mean-field equations for the P_ℓ's are obtained from the N = ∞ saddle-point condition. Together with the constraint equation obtained from the saddle point of λ_i, and the saddle-point equation for b, these determine the mean-field solution.
B. 1/N fluctuations
It is useful to manipulate the exchange interactions into a form in which the sum over a extends over first, second, and third neighbors, and the J_a and the φ_ℓ are the same as in Table I. Note that in this section the index ℓ extends from ℓ = 1 to ℓ = 12 (implicitly, where not noted), the ℓ = 0 basis state of Table I being treated separately, and that P_ℓ(−Q) = P_ℓ*(Q). The P_ℓ(Q) introduced here are similar to the order parameters of Eq. (3), but they now refer to the fermions f_α rather than the electrons c_α; these differ by a factor of b in the large-N limit, and so the corresponding P_ℓ(Q) differ by a factor of b². Starting from the mean-field values of the P_ℓ(Q), we consider fluctuations about mean field, fix the unitary gauge, and work at zero frequency for all bosonic fields. Parameterizing the fluctuations in this way, the Lagrangian (15) can be written in a form involving γ_V(k) = 2V_1(cos(k_x) + cos(k_y)) + 4V_2 cos(k_x) cos(k_y) + 2V_3(cos(2k_x) + cos(2k_y)).
We integrate out the fermions to obtain an effective action for the bosonic fields, with Π_ℓm(Q) defined as in Eq. (7).
We now perform the Gaussian integrals over the fields λ(Q) and b(Q), and then diagonalize the resulting quadratic form for the fields p_ℓ(Q). This step is the analog of our solution of the Bethe-Salpeter equation in Section II. Note that the quadratic form for the p_ℓ(Q) in Eq. (32) begins with 2δ_ℓm, which is to be compared with the δ_ℓm in Eq. (9); consequently, the present eigenvalues λ_Q are to be compared with twice the eigenvalues of Section II. We also note that a related computation was carried out in a different gauge in the early work of Ref. 20, but they did not consider Fermi surfaces with hot spots.
Our results for the λ_Q are shown in Fig. 6, with the same set of parameters as in Fig. 4 of Section II, but with the U = ∞ limit taken in the large-N method.
IV. DMFT APPROACH FOR LARGE U
In this section we present results of an alternative approach to describing the strong local repulsion. We first perform a dynamical mean field theory (DMFT) calculation21 for the tight-binding model with dispersion ε_k at a given filling factor and value of the interaction U.
We use the resulting k-independent self-energy Σ(iω_n) to compute the instability matrix [cf. Eq. (9)], similar to the auxiliary-boson calculation. It additionally accounts for damping effects of the excitations away from the Fermi surface and for the splitting into a low-energy dispersion and Hubbard bands. In order to project out double occupancy completely, one should perform the DMFT calculation at U → ∞. However, this leads to very small renormalization factors z,25 at odds with experimental observations.23,24 We therefore prefer to perform the calculation for values of U ∼ 1 − 1.5W, where W is the bandwidth of the tight-binding model. Double occupancy is reduced to less than 0.05 in such calculations. There is no problem of double counting in this procedure, since the J-interaction is absent in paramagnetic DMFT calculations.21 The DMFT self-consistency problem is solved with the numerical renormalization group22 at low temperature. The results of such a calculation for J_1 = 0.5 and filling factor n = 0.85 are displayed in Fig. 8.
As before, the dominant instability is at (Q_0, Q_0), with subdominant instabilities at (Q_0, 0) and (π, π), and the eigenfunctions are as discussed above. The value Q_0 ≈ 0.44π is a bit larger than what is expected from the Fermi surface geometry (see Fig. 1), where for these parameters Q_0 ≈ 0.39π. We have restricted the analysis here to finite J_1 only, such that the relevant basis functions are φ_n(k) with n = 1, 2, 7, 8. Note that the strength of the instability is reduced by the renormalization factor z ≈ 0.25, which also acts like a quasiparticle weight.
For other filling factors and interactions U ∼ 1.5W we find results similar to those in Fig. 8. It is worth noting that at higher temperatures the global minimum can shift to (π, π). We conclude that the structure of the dominant charge/bond ordering instabilities obtained by treating the Mott correlations with DMFT is very similar to the results of Section III.
V. CONCLUSIONS
Our main conclusion is that Mott correlations, as implemented by the auxiliary-boson and DMFT methods, do not significantly modify the conclusions of Ref. 11. As long as the metallic state has "hot spots" on its Fermi surface, its dominant instability in the spin-singlet, particle-hole channel is towards a bond-ordered state near wavevectors (±Q_0, ±Q_0) with a local d-wave symmetry of the bond ordering; such a state has also been called an "incommensurate nematic". However, our present computations do show an enhanced instability towards a time-reversal symmetry breaking state with spontaneous currents: the "staggered flux" state.
The experimentally observed charge ordering at (±Q_0, 0) and (0, ±Q_0) remained subdominant to ordering at (±Q_0, ±Q_0). Nevertheless, our computations do predict a predominantly d-wave form for the order parameter P_Q(k) at Q = (±Q_0, 0) and (0, ±Q_0), as shown in Eqs. (11) and (36). We note that the variational computations in Ref. 1, using a wavefunction with double occupancy projected out, did find a regime in which the dominant charge ordering was at (±Q_0, 0) and (0, ±Q_0). Other mechanisms for selecting the observed wavevector have also been proposed.26,27 Finally, we mention a recent experimental report28 concluding that the charge order at (Q_0, 0) is predominantly d-wave, i.e. the ℓ = 1 coefficient of the basis functions φ_ℓ(k) in Table I is significantly larger than all others. This is just as in Eqs. (11) and (36).
|
2014-04-02T21:05:46.000Z
|
2014-02-25T00:00:00.000
|
{
"year": 2014,
"sha1": "bac50fbaa9872bcbd0fbd3124e40fba912fdc961",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1402.6311",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "66f8c3b91d763c6a6a52eaaf39ff8246576e95c0",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
252217679
|
pes2o/s2orc
|
v3-fos-license
|
An artificial neural network model based on standing lateral radiographs for predicting sitting pelvic tilt in healthy adults
Background Spinopelvic motion, the cornerstone of the sagittal balance of the human body, is pivotal in patient-specific total hip arthroplasty. Purpose This study aims to develop a novel model using a back propagation neural network (BPNN) to predict pelvic changes when one sits down, based on standing lateral spinopelvic radiographs. Methods Young healthy volunteers were included in the study, and 18 spinopelvic parameters, such as pelvic incidence (PI), were measured. First, standing parameters correlated with sitting pelvic tilt (PT) and sacral slope (SS) were identified via Pearson correlation. Then, with these parameters as inputs and sitting PT and SS as outputs, the BPNN prediction network was established. Finally, the prediction results were evaluated by relative error (RE), prediction accuracy (PA), and normalized root mean squared error (NRMSE). Results The study included 145 volunteers aged 23.1 ± 2.3 years (M:F = 51:94). Pearson analysis revealed that sitting PT was correlated with six standing measurements and sitting SS with five. The best BPNN model achieved 78.48% and 77.54% accuracy in predicting PT and SS, respectively; as for PI, a constant reflecting pelvic morphology, it was 95.99%. Discussion In this study, the BPNN model yielded desirable accuracy in predicting sitting spinopelvic parameters, which provides new insights and tools for characterizing spinopelvic changes throughout the motion cycle.
Introduction
As shown in Figure 1, spinopelvic coordination maintains the sagittal balance of body posture. In normal physiology, it enables a straightened lumbar spine and posterior pelvic tilt in the sitting position to accommodate flexion and internal rotation of the femur and to prevent anterior impingement and posterior dislocation. In the standing posture, in contrast, it allows increased lumbar lordosis and anterior pelvic tilt to increase acetabular coverage, thus preventing posterior impingement and anterior dislocation (1).
The traditional safe zone for cup position in total hip arthroplasty (THA), the definitive treatment for advanced hip arthritis, has been based on a vertically oriented anterior pelvic plane (2). It therefore does not account for the spinopelvic balance of each individual or for the change of pelvic tilt in various body postures (3). As a result, some patients may engage in more aggressive hip motions to maintain sagittal balance when they change position from sitting to standing. This abnormality may lead to secondary dislocation and impingement, and the resulting edge loading compromises prosthesis survivorship; the condition becomes more severe with concomitant lumbar spine disease (4). Thus, surgeons should consider the sagittal spinopelvic balance when planning THA (5). Unfortunately, although this is an increasingly accepted concept, studies on solutions are scanty (6).
An artificial neural network (ANN), which investigates correlations among subjects, has been applied to prognostication and drug discovery with demonstrated accuracy and robustness (7). Composed of musculoskeletal and ligamentous structures and controlled by neuromuscular interaction, the spinopelvic system engages in coordinated sagittal motion, with an essential correlation of spinopelvic features between standing and sitting positions. Using a back propagation neural network (BPNN) and standing lateral spinopelvic radiographs of healthy volunteers, this study aims to predict how the pelvis tilts when the human body changes position from standing to sitting. The results of this study will pave the way for characterizing the dynamics of the spinopelvic system at various positions along the motion cycle.
Study type
This prospective study has been approved by the Ethics Committee of our institution (project number IRB00006761-2012066). All volunteers provided written informed consent.
Eligibility criteria
The inclusion criterion was that participants should be between 18 and 30 years old. The exclusion criteria were as follows: (1) chronic lower back and leg pain, spinal deformity, or a history of disease or surgery of the spine, pelvis, hip joint, or lower limbs; and (2) spondylolisthesis, scoliosis with a Cobb angle >10°, or kyphosis on spinopelvic frontal and lateral radiographs.
Radiographs
Radiographs in standard standing and sitting positions of the whole spine and pelvis, including bilateral hip joints, were obtained from all research subjects. Participants were asked to stand as straight as possible in the standard standing position without leaning forward or backward. In the standard sitting position, they were asked to remain seated as straight as possible, without leaning forward or backward, and with both knees and hips flexed at 90°. For improved quality of the x-ray film, the elbow joints were flexed fully, and the fists rested on the ipsilateral clavicle. After continuous exposure, the image was automatically spliced.
FIGURE 1 As the body posture changes from standing to sitting, the spinopelvic coordination maintains the sagittal balance of the body posture. See the "Parameter measurement" section for the description of spinopelvic parameters (PI, PT, SS, TK, LL, LT, TLK, TPA, and T1PA).
Parameter measurement
Pelvic and spinal parameters, as shown in Figure 1, were measured in a Picture Archiving and Communication System (Centricity RIS/PACS, GE Healthcare: https://www.gehealthcare.com/). All parameters were measured independently by two senior radiologists. They produced two readings from every image, then compared the results within (intraobserver) and between themselves (interobserver), and took the average value as the final result. The following parameters were measured in both standing and sitting position radiographs: (1) pelvic incidence (PI): the angle between the line perpendicular to the sacral plate at its midpoint and the line connecting the same point to the center of the bicoxofemoral axis;
Statistical analysis
Statistical analyses were performed using SPSS software (version 18.0). Measurement data were expressed as mean ± SD (min-max), and Pearson correlation coefficient (8) was used for the correlation analysis. Values of p < 0.05 were considered to indicate a statistically significant difference.
Back propagation neural network
Input and output. Inputs were the correlated parameters identified in Pearson correlation analysis, as shown in Table 1. The sitting PT (PT in sitting position) was correlated to PI, PT, SS, LL, LT, and T1PA in standing position (as shown in the input layer of Figure 2). The sitting SS (SS in sitting position) was correlated to PI, SS, LL, LT, and SVA in standing position. Likewise, the sitting PI (PI in sitting position) was correlated to PI, PT, SS, LL, LT, TLK, and T1PA in standing position. Outputs were sitting PT, SS, and PI.
Model theory. The BPNN (9) was used to construct the nonlinear regression between input and output. As shown in Figure 2, taking the prediction of sitting PT as an example, the BPNN used standing PI, PT, SS, LL, LT, and T1PA as the input layer. To avoid overfitting due to the limited amount of training data (10), a single hidden layer with seven units was selected. Sitting PT was the output layer.
Volunteer grouping. A total of 145 volunteers with standing and sitting pelvic and spinal parameters were randomly divided into training, validation, and test sets according to an a priori ratio of 8:1:1. The training set was used to train the model and determine the model parameters. The validation set was used to adjust the model's hyperparameters and to preliminarily evaluate the model's ability. The test set was used to evaluate the generalizability of the final model.
Training of the BPNN. The training sample data were normalized and then input into the network. The activation functions of the hidden and output layers were set as tansig (hyperbolic tangent S type) and purelin (linear) functions, respectively. The training function of the network was trainlm, the performance function was MSE (11), and the number of neurons in the hidden layer was initially set to 7. The number of network iterations was 5,000, with an expected error of 0.0000001 and a learning rate of 0.01. After setting the parameters, training of the network was started; the experimental platform was Matlab (12) (2017a) + Windows 10.
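For readers who do not use MATLAB, the setup described above can be approximated in Python. The sketch below is only a rough analogue (scikit-learn's MLPRegressor does not offer the Levenberg-Marquardt trainlm optimizer, so a quasi-Newton solver is used instead, and the arrays X_stand and y_sit_pt are placeholders for the standing parameters and sitting PT, not the study data):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

# Placeholder data: 145 subjects x 6 standing parameters (PI, PT, SS, LL, LT, T1PA).
rng = np.random.default_rng(0)
X_stand = rng.normal(size=(145, 6))
y_sit_pt = rng.normal(size=145)

# 8:1:1 split: first carve off the 20% that becomes the validation and test sets.
X_train, X_tmp, y_train, y_tmp = train_test_split(X_stand, y_sit_pt, test_size=0.2, random_state=1)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=1)

# One hidden layer of 7 units with tanh activation (cf. tansig); the output is linear (cf. purelin).
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(7,), activation="tanh",
                 solver="lbfgs", max_iter=5000, random_state=1),
)
model.fit(X_train, y_train)
print("validation R^2:", model.score(X_val, y_val))
```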
Verification of the BPNN. Training 10 times in the same way, the model with the best performance on the validation set was taken as the functional BPNN. After the functional BPNN was obtained, we verified it on the test set. The evaluation indicators were as follows: relative error (RE) = |predicted value − actual value| / actual value, prediction accuracy (PA) = 1 − RE, and the normalized root mean squared error (NRMSE) (13).
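These indicators are straightforward to compute. The snippet below is a small illustration with made-up predicted and actual angle values (the names are ours); it follows the RE and PA definitions given above, with NRMSE normalized by the range of the actual values, which is one common convention and may differ from the paper's exact normalization:

```python
import numpy as np

def evaluate(actual, predicted):
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    re = np.abs(predicted - actual) / np.abs(actual)    # relative error per subject
    pa = 1.0 - re                                       # prediction accuracy per subject
    rmse = np.sqrt(np.mean((predicted - actual) ** 2))
    nrmse = rmse / (actual.max() - actual.min())        # normalized by range (assumed convention)
    return re.mean(), pa.mean(), nrmse

# Hypothetical sitting PT values (degrees) for a few test subjects.
actual = [22.0, 30.5, 18.2, 27.4]
predicted = [20.1, 33.0, 17.5, 25.9]
mean_re, mean_pa, nrmse = evaluate(actual, predicted)
print(f"RE = {mean_re:.2%}, PA = {mean_pa:.2%}, NRMSE = {nrmse:.2%}")
```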
Results
General information. A total of 145 volunteers (51 men, 94 women), aged 23.1 ± 2.3 (range 19-29) years, were recruited. Spinopelvic parameters in standing and sitting positions are described in Table 2.
Correlation analysis. As shown in Table 1, sitting PT was correlated with six standing measurements and sitting SS with five. For sitting PT, the PA of the BPNN model was 78.48% (Figure 3; Table 3). For sitting SS, the PA of the BPNN model was 71.17% (RE = 28.83%, NRMSE = 11.76%) (Figure 4; Table 3). For sitting PI, the PA of the BPNN model was 95.99% (RE = 4.01%, NRMSE = 4.09%) (Figure 5; Table 3). Compared with simpler models such as multi-linear regression (14), elastic net regression (15), and support vector regression (SVR) (16), the BPNN is better at dealing with complex nonlinear relationships in prediction. As outlined in Table 4, the BPNN achieves the best results by a clear margin. This indicates that a BPNN based on standing lateral radiographs for predicting sitting pelvic tilt in healthy adults is feasible and superior.
Discussion
The spine and pelvis are characterized by close relations in the sagittal view (17). The spinopelvic relations at various body postures have therefore been examined in the context of THA planning. One study (18) investigated two structural issues of spinopelvic balance, spinal stiffness and hypermobility, and developed a classification system and THA solution for each class. Nevertheless, an elevated risk of impingement was present after surgery in nine patients with malpositioned cups and seven with pathological imbalance. The authors cited ignoring clinical conditions while emphasizing radiological data as the critical limitation of the study. Tang et al. (23) developed an algorithm for an individualized safe zone for prosthetic placement with mathematical modeling developed from a small cohort. This algorithm, however, is of limited value in clinical use, as the range of motion criteria of the standing position were also adopted for sitting, and the dynamic motion of the spine and pelvis during position change was not delineated. Therefore, the spinopelvic dynamics has yet to be clarified, and the answer to accurate surgical planning remains elusive. Robust in predicting nonlinear relationships, the ANN may reveal underlying correlations among research subjects (24). For example, Galloway et al. predicted hypokalemia with an analytic model based on artificial intelligence, achieving 91% sensitivity and 72% specificity (25). Likewise, Fei et al. obtained 87.5% sensitivity and 84.43% specificity in predicting acute lung injury through an ANN model built upon 217 patients with severe acute pancreatitis (26). Recently, DeepMind's AlphaFold2 has been reported with remarkable accuracy, with a potential role in forecasting the structure of almost any protein that human cells express and in searching for drug targets (27).
FIGURE 2 The BPNN framework for predicting the sitting PT with the standing parameters related to the sitting PT as input.
FIGURE 3 Comparison between the actual and the predicted values of sitting PT.
FIGURE 4 Comparison between the actual and the predicted values of sitting SS.
FIGURE 5 Comparison between the actual and the predicted values of sitting PI.
Composed of musculoskeletal and ligamentous structures and commanded by neuromuscular interaction, the spinopelvic system moves within the limits of anatomy and biomechanics regardless of the health status of individuals (28). It is thus logical that the spinopelvic mechanism is characterized by an essential correlation of measurements between standing and sitting positions. This concept is corroborated by the Pearson analysis of this study, where sitting measurements were found to be correlated with standing LL and LT. Interestingly, the lumbar spine, adjacent to the pelvis and more adaptable than the thoracic spine, has been acknowledged in many studies to play a pivotal role in spinopelvic balance. Therefore, future research designs should lay more emphasis on lumbar lordosis.
The outcomes of the prediction model in this study were PT, the angle between the line connecting the midpoint of the sacral plate to the center of the bifemoral heads and the plumb line, and SS, the angle between the sacral plate and the horizontal. Both measures are acute angles and describe pelvic motion tied to spinal motion, as PT increases and SS decreases when the pelvis tilts posteriorly. The best model from this study achieved 78.48% and 77.54% accuracy for sitting PT and SS, which is robust given the small sample utilized in the ANN. Meanwhile, the PT and SS test sets showed a disparity between the projected and the actual measurements, which could be ascribed to the small sample size or to inherent error in manual measurement, even though senior radiologists obtained the measurements. The manual error might be overcome in future studies using the results of Weng et al. (29), where computerized measurement with AI technology achieved an absolute error of 1.18 mm at a speed of 0.2 s per film for 990 patients. In addition, PI, another outcome of prediction in the study, represents the sagittal pelvic profile and has been proved constant and independent of pelvic position after skeletal maturity (30). Built upon a small cohort, our best model still reached 95.99% accuracy in predicting sitting PI, suggesting high reliability of the model. This article presents an innovative method for predicting changes in the sagittal parameters of the spinopelvic structure in various pelvic positions, using a model built upon standing lateral radiographs of the entire spine, pelvis, and lower extremities. In particular, the model yields unprecedented accuracy in describing how the pelvic tilt changes as the pelvis moves, providing grounds for future studies of incremental depth. This study, however, was not immune to limitations; for example, it observed only a small number of healthy volunteers, which might not reflect the conditions of older patients undergoing THA. To tailor the model to clinical practice, the research team will refine the model with a larger pool of data, computerized measurement technology, higher modeling complexity, and diminished overfitting. The model can also be expanded to include bidirectional change of the spinopelvic structure between standing and sitting positions and the dynamics of the entire motion cycle using motion capture systems.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author/s.
Ethics statement
Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
Author contributions
MZ was responsible for conceptualization, methodology, writing the original draft, and writing-review and editing; YH performed data curation, prepared the methodology, supplied the software, undertook visualization, and wrote the original draft; SL was responsible for conceptualization, formal analysis, project administration, supervision, and writing-review and editing; HC conducted the formal analysis and visualization and wrote the original draft; WL and HT were in charge of project administration, supervision, and writing-review and editing. All authors contributed to the article and approved the submitted version.
Funding
This study was supported by the Peking University Medicine Fund of Fostering Young Scholars' Scientific & Technological Innovation (BMU2021PYB034).
Static and dynamic models for spiral bevel gears
This paper synthesizes a body of work on spiral bevel gears from the viewpoint of quasi-static and dynamic models. A sophisticated quasi-static model makes it possible to calculate the tooth loads, the pressures, the instantaneous mesh stiffness, and the deflections on the flanks of spiral bevel gears. Based on these results, two three-dimensional lumped-parameter dynamic models are presented. The mechanical system under consideration comprises a spiral bevel pinion and gear connected by a time-varying non-linear mesh stiffness function and mounted on two shafts modeled as Timoshenko beams supported by bearings. Two variants are considered which rely on different contact stiffness simulations: (a) an averaged mesh stiffness function acting at the centroid of the loaded areas on the tooth flanks, and (b) a more local approach based on a discrete distribution of local mesh stiffness elements over the contact areas. A number of results are presented and discussed which illustrate the interest of these dynamic models.
Introduction
The dynamic behaviour of gears has been extensively studied over the last decades with two main objectives [1]: improving service life and reducing mesh noise. Because dynamic tooth loads can be significantly higher than those in quasi-static conditions, the life of mechanical transmissions can be strongly affected. In this context, it is essential to predict the possible overloads on tooth contacts and the associated critical frequencies. From an acoustical point of view, gears are considered as potentially significant noise sources, mainly because of the vibration transfer from the meshing to the casing through the bearing-shaft assembly. In addition to experimental methods, theoretical analyses of the dynamic behaviour of such systems have been widely used. A number of dynamic models have been proposed for spiral bevel and hypoid gears [2][3][4]. In particular, Lim and Cheng [4] introduced a three-dimensional dynamic model of hypoid gears, later extended to account for misalignments [5] and non-linear time-varying mesh stiffness [6,7]. In the continuation of these works, Peng and Lim [8] analysed the influence of four misalignment errors on the dynamic behaviour of spiral bevel gears. Li and Hu [2] developed a 47-degree-of-freedom model for spiral bevel gears, based on their initial model of bevel gears [9]. Finally, Gao et al. [3] proposed a model dedicated to the study of shocks in spiral bevel gear systems. In this paper, two dynamic models are presented which make it possible to obtain instantaneous displacements and tooth loads over a broad speed range. In addition, the second model provides dynamic tooth loads and contact pressures on tooth flanks. The two models incorporate actual spiral bevel gear geometries and some of the results delivered by an accurate quasi-static contact model (ASLAN, software developed by the LaMCoS laboratory of INSA Lyon). The fundamentals of the quasi-static modelling are summarized in the first section, and it is shown how the instantaneous mesh stiffness and kinematic error obtained with ASLAN are used as input data in the dynamic models.
Static model
The load-sharing computation is decomposed into the following steps:
- definition of the gear geometry by simulating the gear manufacturing process,
- simulation of the no-load kinematics (taking misalignments into account),
- load calculations: contact pressure distribution, transmission error, mesh stiffness, etc.
Definition of the geometry
The proposed method enables the definition of spiral bevel gear geometry based on the Gleason-type generation process. In this study, not all of the UMC (Universal Motion Concept) motions are considered and a simplified case is analyzed. The presented method is based on Litvin's work [10].
Tool characteristics
The envelopes of the tool cutters are considered as the union of a conical surface generating the active tooth surfaces and a toroidal surface cutting the tooth root shapes.
Once the tool characteristics are defined, the cutting process is simulated to obtain the pinion and gear geometry. The process is illustrated in Figure 2, where the different reference frames are defined as follows:
- R_m2 (x_m2, y_m2, z_m2): linked to the machine,
- R_f (x_f, y_f, z_f): linked to the machine,
- R_s (x_s, y_s, z_s): linked to the tool,
- R_2 (x_2, y_2, z_2): linked to the gear.
Gear positioning during the cutting process
Generation is limited to two generative movements which, in the cutting-machine frame, correspond to the rotation of the cradle (tool support) and the rotation of the cut pinion. These two rotations are linked by a constant ratio m_P2. Compared with the UMC model, the hypotheses are:
- the two gear flanks are manufactured simultaneously,
- the tool axis remains parallel to the principal machine-tool axis (no tilt angle, no swivel),
- the ratio m_P2 is constant (no Modified Roll motion),
- the pitch cone tip of the gear lies on the principal axis of the cutting machine (no radial or axial offset).
From a general point of view, the equation of an envelope is obtained in a parametric form in which α_o and β_o are two parameters describing the location of a point on the tool whose envelope is sought, γ_o is a tool-positioning parameter, and f_e is the equation linking these parameters, deduced from the meshing condition. The subscripts e, m and o refer respectively to the reference axes linked to the envelope, the cutting machine and the tool; V and n denote respectively the velocity and normal vectors at any point.
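A hedged reconstruction of the standard Litvin-type envelope equations, written in LaTeX with the notation above; this is the usual form of the generation model, not necessarily the paper's exact expression:

\vec{r}_e = \vec{r}_e(\alpha_o, \beta_o, \gamma_o), \qquad
f_e(\alpha_o, \beta_o, \gamma_o) \equiv \vec{n}\cdot\vec{V} = 0

That is, the generated surface is parameterized by the tool coordinates and the positioning parameter, and the equation of meshing requires the relative velocity between tool and workpiece to be tangent to the tool surface at the contact point.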
Loaded analysis
A number of contributions on the calculation of tooth load sharing can be found in the literature. Sentoku [11] and Bruyere [12] considered bevel gears, while Gosselin [13], Icard [14], and Simon [15,16] studied spiral bevel and hypoid gears. In the majority of these works, approximate elastic models are employed which rely on curve-fitted experimental results and/or simplified tooth representations (for example, tooth structural deflections deduced from cantilever beam theory). The theory of Hertz is used to calculate the contact characteristics such as pressure distribution, contact area and deflections between the mating teeth. The main advantage of these models is that the computational times are limited and extensive parameter analyses can be performed; on the other hand, the accuracy of the simulations can be questionable in some cases. Alternative methods such as the Finite Prism Method (Olakorede [17]) or the Finite Band Method (Gosselin [18]) have been proposed which are still effective from a computational-time viewpoint but more precise in terms of deflections under load, although they are limited to standard cases. In this paper, the load sharing between the teeth is determined by combining the compatibility conditions in terms of displacements with the static moment balance. The procedure makes it possible to obtain the instantaneous contact pressures, the transmission error under load, as well as the mesh stiffness and the root stresses. The compatibility conditions take into account both the bending and the contact deflections, which are each characterized by influence coefficients. Structural and contact effects are separated (Icard [14]): the structural deflections of the parts are calculated using a standard finite element model, whereas the local contact compliance is computed via a local approach based on a discrete formulation of the results of Boussinesq [19] for elastic half-spaces.
Assumptions on the contact area
Since the pinion and the gear have the same elastic properties, they are quasi-identical solids in contact. For most practical applications, the contacts are fully lubricated and the coupling between the normal force and the tangential displacements, and between the tangential stresses and the normal displacement, can be neglected, so that the zone of contact under load depends on the normal loads only.

The potential contact area under load is supposed to lie in a plane parallel to the tangent plane at the unloaded contact point; it is discretized into N rectangular cells of constant size, on each of which the pressure distribution is considered constant.
Algorithm of the load sharing computation
On the potential contact zone (taking several tooth pairs into account), the load sharing is determined such that the compatibility conditions in terms of displacement (no inter-penetration of the parts) are satisfied, i.e.

U_1i + U_2i + ei_i = α, p_i > 0 inside the contact area (3)
U_1i + U_2i + ei_i > α, p_i = 0 outside the contact area (4)

where U_1i and U_2i are the normal displacements of bodies 1 and 2 at point i, ei_i is the initial gap at point i, α is the global normal approach of the contacting surfaces, and p_i is the pressure at point i. The distance between the two bodies at point i after loading is denoted y_i.
Equations (3) and (4) can be synthesized into a single complementarity condition between the pressures p_i and the residual distances y_i. Assuming that the pressure on a small rectangular cell of area S_i is constant (S_i is attached to the potential point of contact i and lies in the tangent plane), the global torque equilibrium reads

Σ_{i=1..N} p_i S_i R_i = T

where N is the number of rectangular surfaces considered in the model, p_i is the pressure on element i of surface S_i, R_i is the associated lever arm and T is the transmitted torque.
The elasticity of the contacting solids is accounted for by using influence coefficients such that the displacement at point i reads

U_i = Σ_j C_ij p_j

where C_ij are the influence coefficients (effect of point j on point i).
The problem can then be formulated by combining all these equations, from which the pressure field can be calculated. Based on this formulation, the system is solved iteratively by using a fixed-point method which ensures uniform, fast and accurate convergence. The parameters U_1i, U_2i, ei_i, ef_i and α are represented in Figure 4. Since the surfaces in contact are supposed to be planar and all parallel, convergence is achieved when the final gaps ef_i at all the contact points are identical and equal to the normal approach α.
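A compact sketch (Python) of this discretized load-sharing solution follows: cell pressures and the normal approach are solved so that the compatibility and torque-balance conditions hold with non-negative pressures. It uses an active-set iteration rather than the paper's fixed-point scheme, and the influence matrix, initial gaps, cell areas, lever arms and transmitted torque are assumed given; it illustrates the formulation, not the ASLAN implementation.

# Active-set sketch of the discretized contact problem:
#   C @ p + ei = alpha on cells in contact (p > 0),
#   C @ p + ei > alpha elsewhere (p = 0),
#   sum_i p_i * S_i * R_i = T (static torque balance).
import numpy as np

def solve_load_sharing(C, ei, S, R, T, max_iter=100):
    N = len(ei)
    active = np.ones(N, dtype=bool)          # start with every cell in contact
    p, alpha = np.zeros(N), 0.0
    for _ in range(max_iter):
        idx = np.flatnonzero(active)
        k = len(idx)
        A = np.zeros((k + 1, k + 1))
        b = np.zeros(k + 1)
        A[:k, :k] = C[np.ix_(idx, idx)]      # compatibility rows: C p - alpha = -ei
        A[:k, -1] = -1.0
        b[:k] = -ei[idx]
        A[-1, :k] = S[idx] * R[idx]          # torque-balance row
        b[-1] = T
        sol = np.linalg.solve(A, b)
        p = np.zeros(N)
        p[idx] = sol[:k]
        alpha = sol[-1]
        if np.any(sol[:k] < -1e-12):         # release cells carrying tensile pressure
            active[idx[sol[:k] < -1e-12]] = False
            continue
        gaps = C @ p + ei - alpha            # residual gaps on inactive cells
        if np.all(gaps[~active] >= -1e-9):
            return p, alpha                  # admissible solution found
        active |= gaps < -1e-9               # bring penetrating cells into contact
    return p, alpha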
Definition of the influence coefficients
The matrix of influence coefficients takes into account both bending and contact compliances. However, these two effects are supposed to be independent, so that three elemental matrices can be separated: one for contact deformations (C^s_ij) and two for the pinion (C^Pf_ij) and gear (C^Rf_ij) bending deflections respectively, such that

C_ij = C^s_ij + C^Pf_ij + C^Rf_ij

A - Influence coefficients for bending

The coefficients of influence for bending are calculated using a standard finite element model because analytical formulations are not possible for complex rims or webs, particular assemblies, etc. However, full FE simulations including the modeling of rims, shafts, casings, etc., can be time-consuming and/or require significant computational resources that are not compatible with parameter analyses, for example. In order to significantly reduce computational times, the coefficients of influence are not determined for each kinematic position, nor for each potential contact point, but from results at a limited number of points and for a single position, as described by Teixeira Alves in [20].
B -Influence coefficients for contact
The pinion and gear teeth in contact are approximated by two elastic half-spaces, and the displacements under normal forces are deduced by using the potential functions of Boussinesq [19]. A general expression is derived which depends on the elastic constants of the two half-spaces, on the dimensions of the discrete cells, and on the coordinates of the different points of contact in the tangent plane.
Unloaded gaps
In order to deal with the compatibility conditions (Eqs. (3) and (4)), the initial separations ei_i between the mating surfaces must be known for each tooth pair potentially in contact. Projecting the meshing points (for example point I in Fig. 5) in the normal direction with respect to the tangent plane, and denoting P the projection on the pinion surface and R that on the gear surface, the initial separation, or gap, at that point is expressed as the distance RP.

Once all the initial separations for all the possible tooth pairs in contact are known, the computation of the load sharing can be undertaken. As specified earlier in the paper, the fixed-point method is used to solve the equations of displacement compatibility, which leads to the pressure distributions, the transmission errors and the global mesh stiffness.
Mounting the gear and pinion
Assembly errors can be introduced which are linked to the change of basis described below (Figs. 6 and 7).
r_a and r_b represent the coordinates of a point in the pinion reference frame Ra (O_a, x_a, y_a, z_a) and in the gear reference frame Rb (O_b, x_b, y_b, z_b) respectively. P, G and E are the axial error for the pinion, the axial error for the gear, and the offset; the last positioning parameter is the shaft angle.
Examples
In this section, two kinds of results obtained with the numerical model developed in this part are presented: load sharing and instantaneous contact pressures. Other results, such as contact patterns under load, transmission errors or mesh stiffness, are not shown here. These results cannot be directly compared with those found in the literature since the simulated cutting process is simplified.
A -Load sharing
The first result deals with load sharing (Fig. 8). The number of teeth in contact is obtained for each kinematic position, as well as the global load supported by each tooth pair in contact. The total force is also shown.
B -Instantaneous contact pressures
Figure 9 presents, for a given kinematic position, the contacts on each loaded tooth flank (the analysis considers 5 teeth potentially in contact), for both pinion and gear teeth. The pressure on the mean contact line is also indicated for all the teeth.
Dynamic models
Based on the results of the quasi-static model presented in Section 2, two dynamic models are developed and presented in this section.
Model
The spiral bevel gear model is presented in Figure 10. The coordinate system R(O, X, Y, Z) has its origin O at the apex of the pitch cone of the pinion. The origins of the local coordinate systems attached to the pinion and the gear are their respective centers. The pinion-gear pair is discretized into 6 nodes denoted E, 1, 2, 3, 4, S (Fig. 10), with six degrees of freedom each: bending displacements and rotations (such as w_S, ϕ_S, ψ_S at node S) and torsional rotations θ_E, θ_1, θ_2, θ_3, θ_4, θ_S.
The pinion and the gear are treated as rigid bodies and a simple Winkler foundation model is used for the tooth contacts [21]. By so doing, the deflections at any potential point of contact M_i can be expressed in terms of the degrees of freedom at the centers of the pinion and the gear [22]. The local mesh forces are derived by multiplying the local stiffness by the mesh deflection. Lagrange's equations lead to a non-linear time-varying mesh stiffness matrix and a forcing term vector generated by the initial separations between the mating flanks. The shafts are simulated by two-node finite elements, including secondary shear effects, whose mass and stiffness matrices are conventional [23].
Equations of motion
After assembling all the elementary mass and stiffness matrices, the equations of motion of the system are derived from Lagrange's equations. The kinetic energy associated with the spiral bevel gear is given by Equation (13); it generates a constant mass matrix [M_E] and a time-dependent forcing term [F_2] due to the inertial effects when rotational speeds are not constant (produced by the unloaded transmission error).
where:
- I_1, I_4: moments of inertia of the pinion and the gear,
- J_1, J_4: polar moments of inertia of the pinion and the gear.
Global model
In a first version of the dynamic model (referred to as model 1), the potential energy is calculated by assuming that the mesh stiffness can be reduced to a unique stiffness element acting in the normal direction and located at the centroid of the contact area. The deflection at this point is equal to the normal approach with respect to the rigid-body positions and is expressed by projecting the contributions of the degrees of freedom in the normal direction, from which the mesh strain energy follows, with:
- the projection (or structural) vector in the normal direction,
- j: location in the meshing,
- B_m: centre of the contact area,
- n_1, n_2: directions of the action lines of the meshing loads.
The global mesh stiffness is given by the numerical code ASLAN and is estimated as

k = F / α

where F is the global mesh force and α is the normal approach.
Discretized model
For the second version of the model (model 2), a discrete distribution of local stiffness elements over the contact areas is taken into account, leading to a formulation in which the mesh strain energy is summed over the contact area A, with i denoting the points in the contact area.
In this case, it should be noted that an additional forcing term F_1 appears, caused by the initial separations between the tooth flanks (as opposed to what is obtained with model 1). The elemental stiffness elements k_i are approximated by dividing the elemental load at point M_i by the local deflection at that point, deduced from the final and initial gaps ef_i and ei_i delivered by ASLAN with the method presented in Section 2 (see Fig. 4 for their definitions).
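A brief sketch (Python) of this local stiffness distribution; the deflection convention (final minus initial gap) and the variable names are assumptions made for illustration, not ASLAN's internal definitions.

# Model-2 local mesh stiffness: each loaded cell gets a stiffness equal to its
# elemental load divided by its local deflection. Convention and names are assumed.
import numpy as np

def local_mesh_stiffness(p, S, ei, ef, min_deflection=1e-12):
    """p: cell pressures, S: cell areas, ei/ef: initial and final gaps per cell."""
    deflection = ef - ei                      # assumed local normal deflection
    k = np.zeros_like(p)
    loaded = p > 0
    k[loaded] = (p[loaded] * S[loaded]) / np.maximum(deflection[loaded], min_deflection)
    return k                                  # one stiffness element per contact cell

# For comparison, the model-1 global mesh stiffness is the total mesh force
# divided by the normal approach: k_global = (p * S).sum() / alpha.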
Energy dissipation
In what follows, energy dissipation is accounted for by introducing a global Rayleigh viscous damping matrix [C] (a linear combination of the global mass and the averaged stiffness matrices).
The equations of motion resulting from Equations (14), (15) and/or (17) lead to a parametrically excited, non-linear differential system of the form

[M] d²q/dt² + [C] dq/dt + [K(t, q)] q = F(t, q)

where q represents the vector of global generalized displacements.
The equations are solved by a time-step integration scheme combined with a unilateral contact algorithm, as described in [22].
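A minimal sketch (Python) of such a time-stepping solution, using a Newmark-beta scheme with a crude unilateral-contact correction on a single representative mesh degree of freedom; the matrices, the stiffness law K(t, q) and the contact handling are illustrative assumptions, not the solver of [22].

# Newmark-beta integration of M q'' + C q' + K(t, q) q = F(t), with a simple
# unilateral correction that removes the mesh stiffness when the mesh deflection
# becomes negative (tooth separation). Everything here is an illustration.
import numpy as np

def newmark_unilateral(M, C_damp, K_of_t, F_of_t, q0, v0, dt, n_steps,
                       mesh_dof, beta=0.25, gamma=0.5):
    q, v = q0.copy(), v0.copy()
    a = np.linalg.solve(M, F_of_t(0.0) - C_damp @ v - K_of_t(0.0, q) @ q)
    history = []
    for step in range(1, n_steps + 1):
        t = step * dt
        K = K_of_t(t, q)                      # explicit treatment of K(t, q)
        K_eff = K + gamma / (beta * dt) * C_damp + M / (beta * dt**2)
        rhs = (F_of_t(t)
               + M @ (q / (beta * dt**2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a)
               + C_damp @ (gamma / (beta * dt) * q + (gamma / beta - 1) * v
                           + dt * (gamma / (2 * beta) - 1) * a))
        q_new = np.linalg.solve(K_eff, rhs)
        if q_new[mesh_dof] < 0.0:             # separation: drop the mesh stiffness
            K_sep = K.copy()
            K_sep[mesh_dof, mesh_dof] = 0.0
            K_eff = K_sep + gamma / (beta * dt) * C_damp + M / (beta * dt**2)
            q_new = np.linalg.solve(K_eff, rhs)
        a_new = ((q_new - q) / (beta * dt**2) - v / (beta * dt)
                 - (1 / (2 * beta) - 1) * a)
        v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
        q, v, a = q_new, v_new, a_new
        history.append(q.copy())
    return np.array(history)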
Dynamical loads
Considering model 1, Figure 11 shows an example of dynamic tooth load time variation (the abscissa represents the number of mesh periods) for a pinion speed of 17 200 rpm. The numerical transients, observed for the first mesh periods, are due to the initial conditions (here, the static solution with averaged mesh stiffness), which can be substantially different from the actual dynamic solution. However, it is observed that steady-state conditions are obtained very rapidly. The corresponding spectrum (calculated by FFT once steady motion is established) exhibits the classic discrete peaks associated with the mesh frequency and its harmonics.
Pressure distribution
By using model 2, the spatial distribution of mesh forces on the flanks of the teeth can be estimated (Fig. 12). At low speed (1 rpm on the pinion), the results are expressed in terms of contact pressure and compare very well with the results given by ASLAN in quasi-static conditions. It is to be noted in this example that the pressure peaks are located in the central part of the tooth flank. The dynamic contact patterns at higher speed (17 600 rpm) are represented in Figure 13, where significant fluctuations of the pressure field can be observed. Compared with the classic quasi-static analyses in the literature, the proposed model makes it possible to have access to dynamic pressures (and stresses), thus analyzing more precisely the actual behaviour of high-speed spiral bevel gears.
Dynamics factors
The preceding results can be generalized over a broad range of speeds by introducing the dynamic tooth load factor defined as

DF = F_d / F_s

where F_d and F_s are the dynamic and static loads. The maximum dynamic tooth load results from the integration of the dynamic pressures, and care was taken to ensure that the maximum value at every speed was sought after the system had reached steady-state conditions. The results in Figure 14 reveal a structure of dynamic response close to that obtained for spur and helical gears. For both models 1 and 2, a major tooth critical speed emerges around 17 000-18 000 rpm on the pinion, whereas secondary response peaks excited by the higher harmonics of the mesh frequency are present in the lower speed region. Some differences can be reported between the results of models 1 and 2: the amplifications are not the same and the critical frequencies are slightly shifted. In both cases, significant dynamic effects are observed (maximum DF between 1.5 and 1.6), which certainly reduce the reliability of the system and must be accounted for in any realistic strength analysis.
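A short sketch (Python) of the corresponding speed sweep; simulate_mesh_force is an assumed placeholder standing in for the dynamic model above.

# Dynamic-factor sweep: for each pinion speed, run the time simulation to steady
# state and take the maximum dynamic mesh force over the static mesh force.
import numpy as np

def dynamic_factor_curve(speeds_rpm, simulate_mesh_force, F_static,
                         settle_fraction=0.5):
    DF = []
    for speed in speeds_rpm:
        F_t = simulate_mesh_force(speed)                 # mesh-force time history
        steady = F_t[int(len(F_t) * settle_fraction):]   # discard the transient
        DF.append(steady.max() / F_static)
    return np.array(DF)

speeds = np.linspace(1_000, 25_000, 50)   # illustrative pinion speeds in rpm
# df = dynamic_factor_curve(speeds, simulate_mesh_force, F_static)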
Conclusion
The definition of the geometry of spiral bevel gears has been presented, along with a static model aimed at calculating tooth load distributions and transmission errors. Two lumped-parameter dynamic models with several degrees of freedom have then been introduced, which use accurate descriptions of the pinion and gear geometries but employ different mesh stiffness models. The first model is similar to the classic approaches in the literature, with a single time-varying mesh stiffness connecting the pinion and the gear. The second model relies on a local description of the instantaneous contact conditions via a distribution of mesh stiffness elements (Winkler elastic foundation) over the potential contact area between the pinion and the gear. One of its advantages is that dynamic pressure distributions on the flanks can be estimated. Whatever the model used, the dynamic response curve presents similar trends, with (i) significant tooth force amplifications at the major critical speed and (ii) several secondary resonances generated by the harmonics of the mesh excitations.

In order to validate the numerical results presented in this article, a test bench is currently being built.
Figure 3 corresponds to a pinion and gear set obtained with the developed numerical model, based on the simulation of a simplified Gleason-type generation process.

Figure and table captions:
Fig. 1. Envelope definition of the cutter for the numerical model.
Fig. 5. Projections of the meshing points on the pinion and the gear.
Fig. 9. Contact lines and instantaneous pressures for a given kinematic position.
Fig. 13. Dynamic contact pattern at high speed in 3 dimensions (A), and projected on the tooth flank (B).
Fig. 14. Dynamic coefficients by the two models versus pinion rotational speed.
Table 1. Geometrical data of the spiral bevel gear.
Table 2. Geometrical data of the shafts.
Artificial intelligence in diagnosis of knee osteoarthritis and prediction of arthroplasty outcomes: a review
Background Artificial intelligence is an emerging technology with rapid growth and increasing applications in orthopaedics. This study aimed to summarize the existing evidence and recent developments of artificial intelligence in diagnosing knee osteoarthritis and predicting outcomes of total knee arthroplasty. Methods PubMed and EMBASE databases were searched for articles published in peer-reviewed journals between January 1, 2010 and May 31, 2021. The terms included: ‘artificial intelligence’, ‘machine learning’, ‘knee’, ‘osteoarthritis’, and ‘arthroplasty’. We selected studies focusing on the use of AI in diagnosis of knee osteoarthritis, prediction of the need for total knee arthroplasty, and prediction of outcomes of total knee arthroplasty. Non-English language articles and articles with no English translation were excluded. A reviewer screened the articles for the relevance to the research questions and strength of evidence. Results Machine learning models demonstrated promising results for automatic grading of knee radiographs and predicting the need for total knee arthroplasty. The artificial intelligence algorithms could predict postoperative outcomes regarding patient-reported outcome measures, patient satisfaction and short-term complications. Important weaknesses of current artificial intelligence algorithms included the lack of external validation, the limitations of inherent biases in clinical data, the requirement of large datasets in training, and significant research gaps in the literature. Conclusions Artificial intelligence offers a promising solution to improve detection and management of knee osteoarthritis. Further research to overcome the weaknesses of machine learning models may enhance reliability and allow for future use in routine healthcare settings.
machine learning. ML is a branch of AI involving algorithms that automatically "learn" from data, with incremental optimization and improvements in accuracy during the training process [2,4]. Deep learning is a form of ML that does not require a labelled or structured dataset [4,5]. One example is the use of artificial neural networks (which utilize layers of increasing complexity and abstraction for information processing) to "learn" the important features of a model without human input [4].
AI can handle very large, complex datasets and generate predictions to improve the accuracy and efficiency of healthcare decisions, such as those concerning KOA and TKA [1]. ML algorithms have also been used to develop models to assist with pre-TKA planning and predict the value metrics of TKA, such as predicting implant size [6], reconstructing three-dimensional CT data of the lower limb to facilitate robotic-assisted TKA [7], and assisting with component positioning and alignment [8]. ML potentially improves surgical precision and reduces the cost of manual labor. Regarding value metrics, ML methods have been used to predict the length of hospital stay, hospitalization charges, and discharge disposition. These predictions bear on the economic burden of TKA and thus potentially affect decisions on payment models in healthcare settings [9][10][11].
This review aimed to summarize the existing evidence and highlight recent developments of AI and ML in diagnosis of KOA, prediction of the need for and outcomes of TKA.
Materials and methods
We searched PubMed and EMBASE databases for articles published in peer-reviewed journals between January 1, 2010 and May 31, 2021. We searched for the following terms: 'AI', 'machine learning', 'knee', 'osteoarthritis', and 'arthroplasty'. We selected studies focusing on the use of AI in diagnosis of KOA, predicting the need for TKA, and predicting outcomes of TKA. We excluded non-English language articles and articles with no English translation. A reviewer screened the articles for relevance to the research questions and strength of evidence.
Results
The search produced 136 individual results, among which a total of 22 papers were included in the narrative synthesis following screening against inclusion/exclusion criteria (Table 1). Only one study was externally validated by testing the model on a dataset not used during model training to assess model performance and generalizability. The most commonly reported metric among the published articles was the area under the receiver operating characteristic curve (AUC), which evaluates the ability of an algorithm to discriminate between individuals who experienced the outcome and those who did not, immediately after surgery and thereafter. AUC values range from 0.5 (indicating performance equal to a random predictor) to 1 (indicating a perfect predictor).
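For concreteness, a small sketch (Python, scikit-learn) of how the AUC described above is computed; the labels and scores are synthetic, not data from any of the reviewed studies.

# AUC on synthetic predictions: 0.5 corresponds to a random predictor, 1.0 to a
# perfect one. Labels and scores below are made up for illustration.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=200)            # 1 = experienced the outcome
y_score = np.clip(0.3 * y_true + rng.normal(0.4, 0.25, size=200), 0.0, 1.0)

auc = roc_auc_score(y_true, y_score)
print(f"AUC = {auc:.2f}")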
Other reported metrics included sensitivity, specificity, Kappa coefficient (a measure of inter-rater reliability, where a value of 0 indicates no agreement while a value of 1 indicates perfect agreement), and positive and negative predictive values. The characteristics, performance, strengths, and weaknesses of AI algorithms are summarized in Table 2. AI algorithms used to predict the outcomes of TKA are shown in Table 3.
Diagnosis and predicting the need for TKA
Multiple machine learning models have been developed for radiological diagnosis and severity grading of KOA, based on the most widely used Kellgren-Lawrence classification system (Table 2). Tiulpin et al. [19] developed an automatic grading model based on a Deep Siamese Convolutional Neural Network. The model was first trained using 18,376 knee radiographs from the Multicenter Osteoarthritis Study (a longitudinal, prospective, observational study of KOA in older Americans), further tuned for hyperparameters using 2,957 KOA radiographs from the Osteoarthritis Initiative (a multicenter, longitudinal, prospective observational study of knee osteoarthritis), and finally tested on 5,960 randomly selected KOA radiographs from the Osteoarthritis Initiative that were unseen during the training process. The model achieved a kappa coefficient of 0.83 and an average multiclass accuracy of 67%, indicating excellent agreement (comparable to intra- and inter-rater reliability by arthroplasty surgeons) [34,35]. The key benefit of this model is the provision of probability distributions for each Kellgren-Lawrence grade prediction. In clinical practice, the model may be used to select the closest Kellgren-Lawrence grade in ambiguous cases. Similarly, Norman et al. [18] used DenseNet neural network architectures to develop an automatic Kellgren-Lawrence grading model. Saliency maps revealed important radiographic features in the algorithm's decision-making, such as osteophytes and joint space narrowing. For detecting Kellgren-Lawrence grades, the sensitivity and specificity of the model were 69-86% and 84-99%, respectively. The kappa coefficient was 0.83, the same as for the model proposed by Tiulpin et al. [19]. Most existing algorithms focus on the radiographic diagnosis of KOA or rely heavily on radiographic information as candidate predictors of TKA. This may be due to substantially increased imaging data availability following the recent creation of public datasets such as the Osteoarthritis Initiative.
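A hedged sketch (Python, PyTorch) of a radiograph classifier over the five Kellgren-Lawrence grades, loosely in the spirit of the CNN models cited above; the backbone choice, input size and head are illustrative assumptions, not the published architectures or their training protocols.

# Five-class Kellgren-Lawrence grading head on a standard DenseNet backbone.
# Backbone, input size and head are assumptions for illustration only.
import torch
import torch.nn as nn
from torchvision import models

class KLGrader(nn.Module):
    def __init__(self, n_grades: int = 5):
        super().__init__()
        self.backbone = models.densenet121(weights=None)     # no pretrained weights
        in_features = self.backbone.classifier.in_features
        self.backbone.classifier = nn.Linear(in_features, n_grades)

    def forward(self, x):
        return self.backbone(x)                  # logits over KL grades 0-4

model = KLGrader()
dummy_batch = torch.randn(2, 3, 224, 224)        # two fake knee radiographs
probs = torch.softmax(model(dummy_batch), dim=1)
print(probs.shape)                               # (2, 5): one probability per grade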
In a recent study, Leung et al. [15] developed a deep learning model that directly predicted the need for TKA based on knee radiographs. This model demonstrated superior performance in predicting TKA compared with conventional binary outcome models based on the Kellgren-Lawrence or Osteoarthritis Research Society International grades. The deep learning model used additional image-based information that might not be captured by simple numerical grading systems [36].
The discrepancies between radiologic and clinical severity of KOA have been widely reported [37][38][39][40]. Clinical diagnosis is typically made according to American College of Rheumatology criteria, taking into account patient age, symptoms, physical examination, and radiographic assessments [41]. The decision for surgery is driven primarily by symptom severity rather than radiological findings. Thus, ML algorithms that automate Kellgren-Lawrence grading or predict TKA using imaging data alone are of limited use in clinical decision-making. Nevertheless, the ML-based studies mentioned above offer insight into the development of radiograph-based prediction models using different machine learning approaches and may serve as a stepping stone to future studies that include additional clinical parameters, which may be more suitable for clinical decision-making support.
In 2020, Heisinger et al. [13] first designed an ML prediction model by investigating knee symptomatology (e.g., pain, function, and quality of life), Kellgren-Lawrence grading, and socioeconomic and demographic factors four years before TKA. The longitudinal analyses showed that significant worsening in knee symptomatology before TKA was the most important factor in decision making for TKA, compared to the radiographic progression of KOA. The artificial neural network can predict patients who may undergo TKA in the next two years with an accuracy of 80%, with a positive predictive value of 84%, and a negative predictive value of 73%.
El-Galaly et al. [12] were the first to attempt to develop a clinical ML algorithm to predict early revision TKA using preoperative data. The models were trained on the Danish Knee Arthroplasty Registry. Patient age, postfracture osteoarthritis, and weight were statistically significant preoperative factors. Nevertheless, the authors were unable to develop a clinically useful model based on preoperative information [12]. Hence, further study is needed to identify clinically useful predictors of revision TKA.
Predicting postoperative outcomes of TKA
The improvement following TKA is commonly assessed using patient-reported outcome measures, with or without an accompanying "minimally clinically important improvement", i.e., the minimum benefit assessed with the patient-reported outcome measures [42,43]. Huber et al. [28] used ML algorithms to predict postoperative improvement in the patient-reported outcome measures.
The models were trained and tested using the National Health Service data (130,945 observations), and the area under the receiver operating characteristic curve of the best performing models was approximately 0.86 (visual analogue scale) and 0.70 (Q score, i.e., sum of the Oxford Hip Score and Oxford Knee Score) for TKA. The results showed that preoperative visual analogue scale, Q score, and specific Q score dimensions were the most important predictors of postoperative patient-reported outcome measures [28]. Harris et al. [20] developed another model to predict post-TKA 1-year achievement of MCID and demonstrated fair discriminative ability for the prediction of some, but not all, PROMs included. Further development of similar machine learning algorithms for routine patient care could potentially assist postoperative outcome prediction. AI can be used to predict post-TKA patient dissatisfaction. Kunze et al. [25] developed a random forest algorithm which demonstrated an AUC of 0.77 in identifying patients most likely to experience dissatisfaction. Farooq et al. [22] found that models built using ML achieved significantly higher AUC than using binary logistic regression on the same dataset (0.81 vs. 0.60). Given that a significant 20% of patients are dissatisfied following TKA and that existing statistical models cannot fully explain the reason for dissatisfaction [22], supervised machine learning models offer an alternative approach to automate the search for predictors of patient dissatisfaction.
The major complications of TKA are bleeding, thromboembolism, vascular injury, etc. [44] Many risk prediction calculators exist, such as the American College of Surgeons-National Surgical Quality Improvement Program universal surgical risk calculator and other arthroplasty-specific calculators [45,46]. These conventional calculators have substantial weaknesses, such as poor accuracy, limited generalizability to external datasets, and preoperative use restrictions due to requiring intraoperative data as input variables [47,48]. ML models offer an alternative approach to predict postoperative complications. Harris et al. [27] developed prediction models for 30-day mortality and major complications following elective arthroplasty. The models were trained on the American College of Surgeons National Surgical Quality Improvement data and externally validated using Veterans Affairs Surgical Quality Improvement Program data which had different patient demographics and clinical characteristics compared to the training data. The models showed acceptable performance in predicting mortality (AUC: 0.69) and cardiac complications (AUC: 0.72) (but not renal complications -AUC: 0.60) during external validation using the Veterans Affairs Surgical Quality Improvement Program data [27]. One important limitation of this study design is that the training dataset does not contain complete patient medical data (e.g., comorbidities) and only includes the patients from a small number of hospitals, limiting its generalizability [27]. Overall, ML has not been extensively applied in predicting post-TKA complications, and further efforts in model development with rigorous internal and external validation are warranted.
Discussion
We find that AI and ML models improve automatic grading of knee radiographs, patient selection for TKA, and prediction of postoperative outcomes in terms of patient-reported outcome measures, patient satisfaction, and short-term complications. The weaknesses of current AI algorithms include the lack of external validation, inherent biases of clinical data, the need for large datasets for training, and significant research and regulatory gaps.
Weaknesses of AI in arthroplasty
The current use of artificial intelligence algorithms has its limitations. First, accuracy and generalizability are key obstacles, as very few models have been externally validated, and high AUC values do not necessarily translate to good clinical performance [26]. More rigorous external validation of prediction models is needed during algorithm development and testing to ensure robustness and reliability before algorithms can be considered for routine clinical use. An important issue regarding generalizability lies in the fact that patient selection and postoperative outcomes are influenced by structure- and region-related confounders, such as institutional policies, hospital sites, and organizational culture [10]. For example, the threshold for booking TKA may differ between institutions depending on resource availability and hospital policy. Institutions may benefit from using region-specific machine learning algorithms for more accurate predictions.
Second, a practical disadvantage of machine learning models is the requirement of large datasets to train them. These datasets often contain millions of unique data points and require hours or days of training, and additional datasets are needed to assess generalizability [49]. The increased availability of public datasets such as the Multicenter Osteoarthritis Study and the Osteoarthritis Initiative could help overcome this obstacle and facilitate further research on machine learning in arthroplasty.
Third, a common concern surrounding the use of artificial intelligence is the "black-box" nature of machine learning models. Machine learning algorithms' decision-making processes are opaque, using hidden layers and unknown connections between inputs and outputs, resulting in poor understanding and difficult scientific interpretation of how it generates predictions and recommendations [50]. Visualization of attention maps cannot directly provide information on these hidden relationships, and other efforts to increase the transparency of deep learning models are still ongoing [51]. Nevertheless, this poses more of a problem to scientific understanding rather than clinical application. By contrast, the reliance on data for model development is a key limitation of artificial intelligence in clinical use. Models developed are limited by the biases and limitations of current clinical data. Machine learning models are also "plastic", i.e., changing when presented with new data [50], and the input parameters included in a machine learning algorithm, such as models predicting TKA need, may continuously change as new data becomes available to the model. Finally, significant research and regulatory gaps exist, given the novel nature of this technology. There is a paucity of literature on the use of machine learning algorithms to predict the need for arthroplasty, and current machine learning models are unable to predict the long-term outcomes of TKA. ML models are limited by the biases of current clinical data, and future implementation of these algorithms into routine hospital care will also come with regulatory concerns of algorithm quality control, security issues and adversarial attacks.
Conclusions
KOA is an important public health problem worldwide. AI offers a promising solution to detect KOA and improve pre-TKA planning. Further research is needed to overcome the limitations of ML models and ensure reliability for future use in routine healthcare settings.
Promoting Active Urban Aging: A Measurement Approach to Neighborhood Walkability for Older Adults
Understanding the role of the built environment on physical activity behavior among older adults is an important public health goal, but evaluating these relationships remains complicated due to the difficulty of measuring specific attributes of the environment. As a result, there is conflicting evidence regarding the association between perceived and objectively measured walkability and physical activity among urban-dwelling older adults. This suggests that both actual environmental features and perceptions of these attributes influence walking behavior. The purpose of this pilot project is to create an Objective Walkability Index (OWI) by census block using a Geographic Information System (GIS) and supplement the results with resident perceptions thus more accurately characterizing the context of walkability. Computerized Neighborhood Environment Tracking (ComNET) was used to systematically assess environmental risks impacting activity patterns of older adults in two New York City neighborhoods. In addition, the Senior Center Evaluation of the Neighborhood Environment (SCENE) survey was administered to older adults attending two senior centers located within the target neighborhoods. The results indicate that there is substantial variation in OWI score both between and within the neighborhoods suggesting that residence in some communities may increase the risk of inactivity among older adults. Also, low walkability census blocks were clustered within each neighborhood providing an opportunity for targeted investigation into localized threats to walkability. A lack of consensus regarding the association between the built environment and physical activity among older adults is a consequence of the problems inherent in measuring these determinants. Further empirical evidence evaluating the complex relationships between the built environment and physical activity is an essential step towards creating active communities.
INTRODUCTION

The Built Environment and Physical Activity
Past research has shown that remaining active into old age has numerous public health benefits. Physically active older adults are more likely to avoid functional limitations, prevent disease and disability, and improve survival (Wagner et al. 1992; US Department of Health and Human Services 1996; Clark and Nothwehr 1999;Satariano and McAuley 2003). Despite the numerous benefits of physical activity, adults age 60 and over represent the most inactive segment of the adult population. According to the National Health Interview Survey, inactivity increases with age; by age 75, over 80% of adults do not engage in regular leisure-time physical activity (Schoenborn and Adams 2010). Promoting physical activity among seniors is a national health objective (Satariano and McAuley 2003). However, most research efforts have focused on individual-level determinants of, and barriers to, physical activity, which fail to consider the broader environment in which physical activity occurs (Li et al. 2005b).
Remaining active into old age is achieved when physical activity is integrated into daily routines such as walking for transport, leisure, or exercise. Walking is one of the most common forms of exercise among seniors because it is versatile, inexpensive, and generally low-impact (US Department of Health and Human Services 1996; Michael et al. 2006a). Older adults are particularly vulnerable to the effects of their environment and thus, neighborhoods are an important place to study physical activity and walking behavior (Pastalan and Pawlson 1985;Glass and Balfour 2003). First, as adults grow older, their spatial area shrinks to the vicinity of their home or immediate neighborhood and resources within the community become increasingly important (Lawton 1978;Glass and Balfour 2003). Second, age-related diseases, as well as cognitive and physical changes, may decrease the ability of older adults to cope with environmental stress (Glass and Balfour 2003). Factors associated with the aging process such as physical vulnerability, visual impairment, mobility limitations, and cognitive disorders reduce the ability of seniors to handle person-environment interaction as they once did. However, small modifications to the physical environment may help to maintain levels of independent functioning among senior residents (Pastalan and Pawlson 1985). Thus, understanding the role of the built environment on physical activity and walking behavior among older adults is an important goal in promoting active aging.
Measuring the Built Environment
The built environment is a multidimensional concept, defined by the United States Center for Disease Control and Prevention as "human-formed, developed, or structured areas." For the purposes of measurement, the built environment can be partitioned into three distinct dimensions: land development patterns, microscale urban design, and transportation systems (Handy et al. 2002). Land development patterns reflect the juxtaposition of different types of land-use (i.e., residential, office, commercial, industrial, and open/green space) and activities in a neighborhood (Handy et al. 2002). They also describe the distance between trip origin and destinations such as shops, entertainment venues, recreation facilities, and parks (Cunningham and Michael 2004). Microscale urban design refers to the organization of the city and microelements (e.g., sidewalks, crosswalks, streetlights, etc.) within it (Handy et al. 2002;Cunningham and Michael 2004). Urban design also characterizes the arrangement, complexity, and appeal of urban space (Cunningham and Michael 2004). Transportation systems are comprised of the physical infrastructure that provides connections between people, places, and activities. In addition to public transportation, traffic levels and pedestrian safety are also key components of this system (Handy et al. 2002). Neighborhood walkability is a broad concept designed to evaluate a range of built environment features using a composite index or scale which facilitates area-based comparisons.
Today, the study of the built environment and its influence on physical activity is experiencing academic growing pains caused by the emergence of a plethora of different measurement approaches from different fields of study. Many of these approaches lack a clear conceptual framework and supportive theory to guide methodology, which has mainly been driven by the availability of datasets (Dietz 2002;Macintyre et al. 2002;Diez Roux 2003;Brownson et al. 2004;Diez Roux et al. 2007;Messer 2007;Mujahid et al. 2007). As a result, there is conflicting evidence regarding the association between different features of the built environment and physical activity among urban-dwelling older adults. One of the greatest challenges facing researchers in the field is choosing an appropriate method for evaluating the specific features of the built environment hypothesized to be related to physical activity among older adults. The following sections will discuss a few current trends of data measurement, which include two main categories of built environment measures-subjective surveys measures and objective data audit measures.
Subjective survey measures are designed to assess an individual's perception of their neighborhood environment and are usually obtained via interviews or self-reported questionnaires (Brownson et al. 2004; Araya et al. 2006). Indirect measurement of the built environment by subjective survey evaluates how residents perceive the quality of their physical environment including opportunities for physical activity. Participant responses are then aggregated to selected geographical/spatial areas (and sometimes by population subgroup) to represent the subjective context of different neighborhoods. This category of measure is typically resource light (i.e., expense and time), but has potential limitations in other areas. Only a few subjective survey instruments report reliability (test-retest) and validity (content and construct) and those that do vary substantially both between studies and within specific features of the built environment (Moudon and Lee 2003; Brownson et al. 2009). Reporting bias may overstate associations between the built environment and physical activity if the same individuals are reporting both exposure (built environment) and outcome (physical activity) (Dunstan et al. 2005; Araya et al. 2006; Mujahid et al. 2007; Brownson et al. 2009). The subjective nature of these measures also brings into question whether the findings actually represent the context of a neighborhood or are simply the aggregate of resident perceptions and/or individual characteristics (compositional confounding) (Dunstan et al. 2005; Araya et al. 2006; Brownson et al. 2009). It is important to control for individual characteristics to ensure that the variance is explained by place-based, rather than individual, effects (Araya et al. 2006).
The most commonly used survey to assess walkability is the Neighborhood Environment Walkability Scale (NEWS), a 68-item questionnaire developed by Sallis et al. (Brownson et al. 2004; Brownson et al. 2009). NEWS was created from a conceptual model which sought to obtain information on residents' perceptions of certain built environment characteristics found in the urban planning and transportation fields, and how those features are related to walking and bicycling behavior (Cerin et al. 2006). Subscales were composed from sets of questions covering residential density, proximity to stores and facilities, perceived access to these destinations, street connectivity, facilities for walking and cycling, aesthetics, and safety from traffic and crime. Unlike many other subjective survey instruments, NEWS has strong test-retest reliability and construct validity (Saelens et al. 2003; Brownson et al. 2004).
Objective data audit measures use systematic observation to collect primary data regarding features of the built environment. This method measures attributes in a neighborhood as they are directly observed, attempting to remove subjective evaluations. The intent is to gather information on the presence and quality of specific items that are not included in existing Geographical Information Systems (GIS) or urban planning databases (Brownson et al. 2009). Audit tools typically involve direct in-person observation by trained individuals who walk or drive through neighborhoods using a standardized form to code built environment characteristics (Araya et al. 2006;Brownson et al. 2009). The forms are either in pencil and paper format, or contained within hand-held electronic devices and include close-ended questions such as quantifiable check boxes or Likert scales (Brownson et al. 2009). The unit of analysis for most audit tools is a street segment or block, and due to the amount of time needed to observe, many of the studies sample only segments of neighborhoods.
Direct observation is resource-intensive, particularly when the time needed to select sites and sample segments, train observers, collect and enter data, and analyze raw data is considered (Araya et al. 2006; Brownson et al. 2009). However, the cost and time needed for objective data audits depend on the number of items measured and the size of the geographical area (Brownson et al. 2009). The use of portable electronic devices will speed up the process and also minimize data entry and collection errors. The use of objective data audits tends to be contextually valid, especially compared to methods that employ aggregated individual-level data (Araya et al. 2006). However, some items may not be readily observable or may require subjective inference by the observer. Inter-observer reliability is the most frequently tested measure of reliability and tends to be strongest for objective items relating to land-use mix and street characteristics (Brownson et al. 2009). Test-retest reliability is usually only evaluated to see how features of the built environment have changed over time. Brownson et al. (2009) reviewed 20 objective audit tools and found that they varied significantly in content, detail, and how they characterized various features (i.e., some items represented by a single question and others by a series of questions). The most commonly assessed variables include land-use mix, streets and traffic, sidewalks, bicycling facilities, public space and amenities, building characteristics, parking and driveways, maintenance, and indicators of safety (Brownson et al. 2009). Several environmental audit tools have been developed specifically for older adults, including the Senior Walking Environmental Audit Tool (SWEAT) and the Healthy Aging Research Network Environmental Audit Tool (Cunningham et al. 2005; Centers for Disease Control and Prevention's Healthy Aging Research Network 2009).
Inconsistencies Among Associations
As discussed above, some studies of neighborhood walkability are based upon resident perceptions whereas others use environmental audits as an objective measure. However, associations between the built environment and walking behavior differ according to which type of measure was employed. A review of the literature identified two studies that assessed built environment attributes, using both resident perceptions and environmental data audits, and their impact on physical activity (Hoehner et al. 2005;Michael et al. 2006b). However, only one of these articles focused on older adults (Michael et al. 2006b). Michael et al. sought to determine the degree of concordance between resident perceptions and environmental audit data, and the relationship between these elements and neighborhood walking among older adults. Results indicated poor agreement between objective and perceived measurements of trails, graffiti and vandalism, sidewalk existence, and sidewalk obstruction. In addition, after adjusting for covariates, the only significant attributes remaining in the walking models were objective and perceived presence of a mall, and the objective existence of graffiti and vandalism (Michael et al. 2006b). Hoehner et al. 2005 evaluated the impact of the built environment on transportation and recreational physical activity among adults by using a subjective survey and an environmental audit in four urban settings. Results indicated that participants with greater access to nonresidential destinations (measured both objectively and subjectively) were more likely to walk for transportation. Other neighborhood attributes that demonstrated consistent associations with transportation or recreational activity across measurement type were access to public transportation (e.g., bus stops) and neighborhood quality as assessed by the quantity of garbage, litter, or broken glass and physical disorder. However, the effect of perceived safety from traffic and objectively measured quantities of trees, benches, and other comfort amenities were both found to be related to transportation activity, but their corresponding measures were not.
Conclusions varied depending on the method of measurement, which suggests that both the actual environmental factors and perceptions of these attributes influence walking behavior (Hoehner et al. 2005; Michael et al. 2006b; McGinn et al. 2007; Nagel et al. 2008; Adams et al. 2009; Gebel et al. 2009; Maddison et al. 2009; Frank et al. 2010; Gómez et al. 2010). However, there is a dearth of research investigating the differences between features of the built environment measured via resident perceptions and environmental data audits specifically for older adults. The purpose of this pilot project was to calculate an Objective Walkability Index (OWI) for older adults using data from an environmental audit of two New York City (NYC) neighborhoods in a Geographic Information System (GIS). The OWI is based on an objective data inventory utilizing Computerized Neighborhood Environmental Tracking (ComNET), a tool developed by the Fund for the City of New York's Center on Municipal Government Performance. The OWI will then be compared to resident perceptions, obtained from the Senior Center Evaluation of the Neighborhood Environment (SCENE), a subjective survey instrument.
METHODS
This study uses primary data collected in 2008-2009 by the author and secondary data downloaded in the form of spatial data layers or shapefiles. Shapefile sources include the United States Census Bureau (2000 Census), the NYC Department of City Planning (DCP)-Bytes of the Big Apple, and the NYC Department of Information Technology and Telecommunications (DoITT). The following sections will discuss primary data sources and the methodology of the OWI.
Objective Data Audit
ComNET was developed by the Fund for the City of New York's Center on Municipal Government Performance (CMGP) to assist residents in collecting built environment data for community needs assessments (Fund for the City of New York 2009a; Fund for the City of New York 2009b). ComNET is fully customizable; it allows the user to select any size geographical area and to choose items from the CMGP's core feature list or to create their own (Fund for the City of New York 2009b). The selected areas are then turned into routes and uploaded into a hand-held personal digital assistant (PDA). The software guides the observer along a direct, pre-determined route, which ensures that all street segments are covered. Unlike other data audit tools, ComNET assesses built environment characteristics by creating a systematic inventory of all features on each block segment, complete with address coordinates. For example, to evaluate land-use mix, observers note the presence and exact location of specific types of commercial, residential, recreational, and industrial facilities in a community. It also evaluates the quality of the physical environment by creating a record of where there is litter, graffiti, drug paraphernalia, etc. Trained observers record conditions in a uniform, verifiable, and replicable manner and are able to take photos and link them directly to the specific feature in the database (Fund for the City of New York 2009b). Once a route is finished, the raw data are uploaded via the internet to a holding database where edits can be made. The dataset can then be exported in a variety of formats (MS Access, MS Excel, Text file, etc.) and is ready for validation and analysis using GIS or other methods.
The pilot study targeted two socioeconomically, racially, and ethnically different neighborhoods in NYC: Crotona Park East in the Bronx and Lenox Hill in Manhattan (See Figure 1). Features of the built environment were determined through a comprehensive literature review and included those associated with walking behavior among older adults (See Table 1 for a list of ComNET attributes). In the fall of 2008, trained observers worked in the field collecting data in pairs to increase rater reliability and objectivity, and to ensure safety. ComNET was developed for community needs assessments, and this was the first time the tool was used in a research capacity, so validity estimates are not available. However, ComNET is intended to systematically inventory specific attributes of the built environment and represents a count of different features of the neighborhood; it does not purport to measure an unobservable latent construct. Data were recorded at the block-face level and contained in an Excel spreadsheet, where each row represented an inventory item.
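A rough sketch of how such a block-face export might be summarized for analysis is shown below; the file name and column names ("BlockFaceID", "Subscale") are hypothetical placeholders rather than ComNET's actual field names.

```python
# Minimal sketch (not the study's actual workflow): summarize a ComNET-style
# export in which each spreadsheet row is one recorded inventory item.
import pandas as pd

inventory = pd.read_excel("comnet_export.xlsx")  # hypothetical export file

# Count items per block face and per subscale dimension.
counts = (
    inventory.groupby(["BlockFaceID", "Subscale"])
    .size()
    .unstack(fill_value=0)
)
print(counts.head())
```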
Subjective Survey Data
The SCENE survey was developed by the author to assess physical activity levels and perceptions of the built environment among older adults attending senior centers in NYC. The structured instrument was designed to evaluate which features of the physical and social environment residents perceive to impact physical activity and walking behavior. The physical activity section was based on the Neighborhood Physical Activity Questionnaire, which evaluates physical activity and walking behavior within and outside of residents' local area (Giles-Corti et al. 2006). Perceptions of walkability were assessed using several items from the Neighborhood Environment Walkability Survey (NEWS) (Saelens et al. 2003), along with some original questions. A demographic section includes items on respondent's age, sex, socioeconomic status, marital status, length at present residence, and residential zip code.

Figure 1 (caption). The target study areas of Crotona Park East and Lenox Hill are shown with census blocks outlined. Also included are proximate zip codes and NYC Department of City Planning neighborhoods. The middle map shows the location of the two neighborhoods with respect to NYC using extent rectangles.
Interviews were conducted in the summer of 2009 in two senior centers located within the target neighborhoods: Neighborhood Shoppe in Crotona Park East and Carter Burden in Lenox Hill (See Figure 1). Trained interviewers conducted face-to-face interviews with randomly selected seniors in the participant's language of choice (English or Spanish). A total of 103 questionnaires were completed: 50 at Carter Burden and 53 at Neighborhood Shoppe. Response rates were 76% for Carter Burden in Lenox Hill and 98% for Neighborhood Shoppe in Crotona Park East. Data were entered into a spreadsheet, and the walkability score for each respondent was calculated by subscale (see Table 1) using SPSS version 15. The score for each subscale was then averaged by zip code within the targeted study areas. Walkability scores ranged from 1-4, with higher scores representing greater walkability.
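A minimal sketch of this scoring step is given below, assuming each item is coded 1-4 and grouped into subscales; the item names, subscale groupings, and file name are hypothetical stand-ins, not the actual SCENE variables.

```python
# Minimal sketch: per-respondent subscale scores (mean of 1-4 items),
# then averaged by residential zip code, as described in the text.
import pandas as pd

responses = pd.read_csv("scene_responses.csv")  # one row per respondent (hypothetical file)

subscales = {                       # hypothetical item-to-subscale mapping
    "land_use_mix": ["q1", "q2", "q3"],
    "street_connectivity": ["q4", "q5"],
    "pedestrian_safety": ["q6", "q7"],
}

for name, items in subscales.items():
    responses[name] = responses[items].mean(axis=1)   # higher = more walkable

by_zip = responses.groupby("zip_code")[list(subscales)].mean()
print(by_zip.round(2))
```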
Objective Walkability Index (OWI)
The ComNET data were geocoded using the DCPLion address locator and added to the map layout as a point layer file. The census block shapefile was overlaid and spatially joined to the ComNET point layer to create a new combined ComNET-by-census-block layer. The Objective Walkability Index (OWI) was calculated by summing the number of points (i.e., inventory items) within each census block and then added as a new field in the attribute table. More specifically, each item from subscale 1 was assigned a value of "-1" since this subscale represents positive features of neighborhood walkability. Conversely, items from subscales 2-5 were given a value of "1" to represent negative attributes of walkability (see Table 1). An OWI score was calculated for each census block, which was then ranked into quartiles (bottom quartile-low OWI, top quartile-high OWI) by ArcGIS. Table 1 demonstrates the comparability of the objective (ComNET) and subjective measures by subscale.
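The sketch below illustrates one way this calculation could be reproduced with open-source GIS tools rather than ArcGIS; the file paths and the "subscale" field name are assumptions, not the study's actual data layers.

```python
# Minimal sketch of the OWI: join geocoded inventory points to census blocks,
# sum signed item weights per block, and rank blocks into walkability quartiles.
import geopandas as gpd
import pandas as pd

points = gpd.read_file("comnet_points.shp")                  # geocoded inventory items
blocks = gpd.read_file("census_blocks.shp").to_crs(points.crs)

# Subscale 1 holds positive walkability features (-1); subscales 2-5 hold
# negative features (+1), so a higher summed score marks a less walkable block.
points["weight"] = points["subscale"].map(lambda s: -1 if s == 1 else 1)

joined = gpd.sjoin(points, blocks, how="inner", predicate="within")
blocks["OWI"] = (
    joined.groupby("index_right")["weight"].sum().reindex(blocks.index).fillna(0)
)

# Quartile 1 = low walkability (highest signed sums), quartile 4 = very high walkability.
blocks["walkability_quartile"] = pd.qcut(
    (-blocks["OWI"]).rank(method="first"), 4, labels=[1, 2, 3, 4]
)
```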
RESULTS

Objective Walkability
A total of 104 census blocks were inventoried using ComNET: 59 in Lenox Hill and 45 in Crotona Park East. The mean OWI score was 3.36 for Lenox Hill and 11.87 for Crotona Park East. The difference in mean OWI score between neighborhoods was statistically significant at the p<0.05 level, suggesting that objective walkability in Lenox Hill is significantly greater than in Crotona Park East. The same trend was observed when the OWI score was divided into quartiles by census block, with Quartile 1 representing low walkability and Quartile 4 indicating very high walkability (see Table 2). Over 55% (n=33) of census blocks in Lenox Hill scored in the 75th percentile (very high walkability) as compared with approximately 2% (n=1) of blocks in Crotona Park East. Conversely, over 37% (n=17) of blocks in Crotona Park East scored in the 25th percentile (low walkability) versus slightly over 3% (n=2) of Lenox Hill census blocks.
Figure 2 is a graphic representation of the OWI by census block for Crotona Park East and Lenox Hill. Darker colored census blocks depict areas of greater walkability as compared with lighter colors. The low walkability census blocks appear to be clustered in the southwestern corner of Lenox Hill and in the southwestern and eastern areas of Crotona Park East. Differences in objective walkability both between and within the neighborhoods can be further evaluated by subscale to provide a more nuanced view of the built environment. Table 3 displays total inventory counts and average count per census block by subscale dimension for each target area. The most striking difference was land-use mix, where Lenox Hill has an average of 7.32 destinations (i.e., retail, commercial, recreation, open space, etc.) per block as compared with only 1.91 in Crotona Park East. In addition, Crotona Park East residents are more likely to encounter poor street connectivity and trip hazards (7.67 versus 5.15 per block) than their Lenox Hill counterparts. In terms of both pedestrian and overall neighborhood safety, fewer problems per census block were recorded in Lenox Hill than in Crotona Park East (0.31 versus 2.62 for pedestrian safety and 0.19 versus 0.71 for neighborhood safety). Interestingly, Crotona Park East scored higher on neighborhood aesthetics, indicating a greater presence of graffiti/scratchiti, litter, dumping, and other such factors in Lenox Hill.
Subjective Walkability
Data from the SCENE survey represent perceptions of neighborhood walkability and thus can provide additional context for the OWI. A total of 103 surveys were administered in two senior centers located within the target neighborhoods. However, only 42 of the respondents resided within a zip code targeted by the study (14 in Lenox Hill, 28 in Crotona Park East). Surprisingly, this suggests that instead of attending a senior center in their immediate residential vicinity, older adults may travel to outside centers. For this analysis, the responses were limited to participants residing in one of the three study zip codes (i.e., 10021, 10459, and 10460, see Figure 1). Table 4 demonstrates the mean perceived walkability scores for each subscale and for the total walkability score by zip code. Interestingly, the Lenox Hill zip code had the lowest mean walkability score (3.11) as compared to the mean scores for the two Crotona Park East zip codes (3.29 and 3.21). However, the mean total walkability scores stratified by zip code were not statistically different from each other. Mean subscale scores demonstrate that the Lenox Hill zip code scored lower in land-use mix, street connectivity/maintenance, and pedestrian safety than either of the two zip codes representing Crotona Park East. Despite the appearance of neighborhood walkability trends, none of the mean perceived subscale scores were significantly different from each other. The lack of variation between neighborhoods may be due to the small sample size for each zip code.
DISCUSSION
This study aimed to supplement the results from an objective data audit with resident perceptions to more accurately define neighborhood walkability. The results from the OWI indicate significant differences in walkability scores both between and within neighborhoods. Approximately 78% of the census blocks in Lenox Hill are characterized by high or very high walkability as compared with only 27% of blocks in Crotona Park East. In addition, the mean OWI score for Lenox Hill differed significantly from that of Crotona Park East, again indicating greater objective walkability in Lenox Hill. Low walkability census blocks appear to cluster in both of the neighborhoods, suggesting that local effects, although larger than a single block, may be appropriately identified by analysis at the census block level.
In this analysis, results from the SCENE survey were less informative than the ComNET data for several reasons. First, the unit of analysis for SCENE was the zip code, which is ultimately too large a unit to appropriately measure between-group differences in perceived walkability. Second, the majority of the survey respondents were excluded from the analysis due to residence outside of the study area. This unforeseen situation led to a significant reduction in sample size and power, which may help to explain the lack of variation in perceived walkability across zip codes. Despite the absence of statistical differences, it is surprising that the zip codes characterizing Crotona Park East scored higher on mean total walkability and several of the subscale-specific scores than the Lenox Hill zip code. Although small sample size may explain this finding, it is also possible that older adult residents of Lenox Hill have a greater expectation of neighborhood walkability than their Crotona Park East counterparts. This may be due to differences in income level and thus have implications for the validity of self-reported measures, particularly in low-income neighborhoods. More research is needed to elucidate variation in perceptions of neighborhood attributes among older adult subpopulations with differing socio-demographic characteristics.
Lenox Hill scored particularly low on the perceived subscale of pedestrian safety, with a mean score of 2.99 as compared to 3.45 and 3.25 for Crotona Park East, which is most likely due to the heavy volume of traffic experienced in Lenox Hill. However, Lenox Hill had on average fewer pedestrian safety inventory items per block than Crotona Park East (0.31 versus 2.62). This suggests that pedestrian safety is perceived to be a greater problem in Lenox Hill despite the neighborhood having fewer missing pedestrian cross lights and crosswalks at intersections. Hoehner et al. (2005) found the same discrepancy between pedestrian safety measured via resident perceptions and through an environmental audit. Contrasting results were also found for the subscales of land-use mix and street connectivity/maintenance, where resident perceptions indicated poorer scores for Lenox Hill, but objective measures demonstrated the opposite. Despite fewer food or retail venues, open space/recreational facilities, benches, and public transportation stops per census block in Crotona Park East as compared with Lenox Hill, participants from Crotona Park East were more satisfied with land-use mix in their neighborhood. Similarly, Lenox Hill had fewer trip hazards, missing curb cuts, and blocked sidewalks per census block than Crotona Park East, yet Lenox Hill residents scored lower on perceived street connectivity. Michael et al. (2006b) also found a lack of agreement between resident perceptions of sidewalk obstruction and measures obtained using systematic observation, suggesting the importance of understanding how residential perceptions may differ from objective measures.
These results suggest that the comparability of perceived and objective measures will differ depending on what aspect of walkability is being evaluated. Further research is needed to elucidate these distinctions, as well as to explore the associations of both perceived and objective measures with socioeconomic status and health outcomes. In addition, although a comprehensive comparison of measures encompassing the construct of walkability (i.e., subscales) is beyond the scope of this paper, it is important to acknowledge the relative importance of these different components. Not all features of the built environment will be relevant for all types of physical activity and not all aspects of physical activity are appropriate for all populations (Diez Roux 2003;Story et al. 2009). This ambiguity points to the importance of specificity and operationalization in defining research questions which should be based on the a priori hypotheses of potential pathways to be tested (Macintyre et al. 2002;Diez Roux 2003;Brownson et al. 2004).
Calculating the OWI score for small units of analysis (i.e., census blocks) allows for a targeted approach to understanding and improving neighborhood walkability. Clusters of low-walkability census blocks provide a unique opportunity to further investigate threats to specific dimensions of walkability on a smaller, and thus less resource-intensive, scale. For example, as shown in Figure 2, Crotona Park East contains two clusters of low-walkability blocks in the southwestern and eastern areas of the neighborhood. Figure 3 displays subscale inventory items for the low-walkability cluster along the eastern border of Crotona Park East. As demonstrated in the figure, the majority of the inventory items fall under the "Street Connectivity/Maintenance" subscale, indicating the considerable presence of trip hazards. Past research has revealed that poor sidewalk quality (which leads to trip hazards) influences walking behavior among older adults (King 2008). Improving sidewalk quality within these six blocks would greatly increase neighborhood walkability for senior residents. Additional case-study examination of walkability, including both quantitative and qualitative data sources, would help to provide detailed context of the local neighborhood environment within these clusters. In addition, structural changes made on a small scale (such as census block clusters) are more likely to be implemented than changes to larger areas (i.e., zip codes) due to greater feasibility.
Ultimately, the conceptual framework and research questions should guide the definition of appropriate spatial scale; multiple scales may be needed for different built environment measures (Diez Roux 2003). Older adults are particularly vulnerable to their immediate local environment (Lawton 1978;Glass and Balfour 2003), which means that physical activity behavior and walkability should be measured using a small geographic scale or buffer (Macintyre et al. 2002). Unfortunately, most researchers are constrained by the availability of data and must rely on imperfect spatial units. This study, which relies on census block boundaries for the objective measure and zip codes for the survey data, is no exception.

Figure 3 (caption). The low walkability cluster represents an area of six census block groups where each dot on the map represents a recorded inventory item geocoded to its exact location within the block. The five subscales are represented by different colored dots.
Ideally, both objective and subjective walkability indices would have the same spatial scale based on the smallest possible geographical area. In addition, relying on census-defined or administrative boundaries as a proxy for neighborhoods without taking into account how residents perceive or define their local community is problematic (Macintyre et al. 2002;Diez Roux 2003). Arbitrarily defined neighborhood boundaries often use street segments to delineate a spatial border, which assumes no spillover effect from residents on either side of the boundary. However, residents in close proximity to this border do not view it as a boundary and will freely cross the border, thus raising concerns regarding the validity of results. It is important to make these limitations clear and to evaluate how the scale of the neighborhood may impact the results. These measurement issues must be taken into account when evaluating the impact of neighborhood walkability (using the OWI) on walking behavior among older adults.
CONCLUSION
A lack of consensus regarding the association between the built environment and physical activity among older adults is a consequence of the problems inherent in measuring these determinants (Hoehner et al. 2005;Michael et al. 2006b;McGinn et al. 2007;Nagel et al. 2008;Adams et al. 2009;Gebel et al. 2009;Maddison et al. 2009;Gómez et al. 2010;Frank et al. 2010). The challenge remains to link built environment constructs to suitable measures, taking into account the target population, outcome, and location. After determining the specific built environment features to be tested, researchers must then decide which measures operationalize those features most appropriately. Due to conflicting evidence regarding whether perceived or objectively measured data have more explanatory power for certain features, both types of measures should be used (i.e., triangulation) whenever possible to more accurately capture the built environment (Messer 2007;Brownson et al. 2004). The tradeoffs intrinsic to each of the categories of data measures must be considered along with the resources (such as time frame and funding) available for the study (Brownson et al. 2004). Additionally, direct observation and other objective data audit measures may be unnecessary if archival data on the specific feature already exist (Brownson et al. 2009); thus it is important to have a good understanding of the type and quality of available data. Directions for future research include a comprehensive correlation analysis that evaluates the similarities and differences between resident perceptions and environmental audit measures for each component of walkability (land-use mix, street connectivity/maintenance, neighborhood aesthetics, pedestrian safety, and neighborhood safety). Ultimately, measurement tools are in their infancy, and continued investment in improving both theoretical frameworks and measures will ensure future progress in the field.
|
2017-06-12T06:51:06.740Z
|
2010-01-01T00:00:00.000
|
{
"year": 2010,
"sha1": "b503edbcb917feb0e57d375171c7325ef70a6978",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.15365/cate.31122010",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "de361742ff389e9d04050036b69bcba655d6b2a9",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
}
|
257665459
|
pes2o/s2orc
|
v3-fos-license
|
Molecular associations of response to the new-generation BTK inhibitor zanubrutinib in marginal zone lymphoma
Key Points
• Molecular profiling of MZL may assist in the rational use of BTK inhibitor therapy.
• BTK and PLCG2 mutations confer acquired resistance of MZL to BTK inhibition.
Introduction
Marginal zone lymphoma (MZL) is a heterogeneous, indolent, but incurable B-cell non-Hodgkin lymphoma (NHL) characterized by cellular dependency on B-cell receptor (BCR) signaling, leading to the activation of NF-κB and related pathways. 1 According to the WHO 2022 classification of hematological malignancies, 4 recognized subtypes of MZL in adults exist: extranodal MZL of mucosa-associated lymphoid tissue (MALT), splenic (SMZL), nodal (NMZL), and primary cutaneous MZL. 2 The genomic landscape of MZL is less well-defined than that of other B-cell NHLs, but genes involving pathways regulating marginal zone development, such as the NOTCH pathway (NOTCH1, NOTCH2, SPEN, CREBBP, and DTX1), NF-κB signaling (MYD88, TNFAIP3 (A20), BIRC3, TRAF3, and CXCR4), or BCR signaling (CARD11, CXCR4, and KLHL6), are often affected. [3][4][5][6] Notably, primary activating mutations of BTK, such as the E41K mutation, which are rarely reported in diffuse large B-cell lymphoma, have not been observed in indolent lymphomas such as MZL. 7,8 Although responses to frontline chemoimmunotherapy for the treatment of MZL are often favorable, the disease is characterized by frequent relapses, and there is no established standard of care for subsequent lines of treatment. 9 A previous single-arm, open-label, phase 2 clinical trial of 63 patients with relapsed/refractory MZL (rrMZL) treated with the BTK inhibitor (BTKi) ibrutinib demonstrated a 58% objective response rate (ORR) with 10% complete response (CR), a median duration of response (DOR) of 27.6 months (95% confidence interval [CI]: 12.1 months to not estimable [NE]), a median progression-free survival (PFS) of 15.7 months (95% CI: 12.2-30.4 months), and a median overall survival not reached (95% CI: NE-NE). 10 Responses were observed across all MZL subtypes, with biomarker studies identifying patients bearing lymphoma with mutated TNFAIP3 (A20) and MYD88 as more likely to respond. In contrast, lymphomas with mutated KMT2D (MLL) and CARD11 were less likely to respond to ibrutinib. This study did not, however, assess for the emergence of acquired mutations affecting BTK or its enzymatic substrate, PLCG2. [11][12][13][14][15] Mutations in either of these lead to acquired BTKi resistance in chronic lymphocytic leukemia, but data in MZL are limited to a single case study of an ibrutinib-treated patient who developed large cell transformation, with molecular profiling identifying acquired BTK (C481S) and PLCG2 (R665W) mutations. 11,16 Zanubrutinib, a second-generation BTKi, occupies the BTK-binding site in a concentration-dependent manner similar to that of ibrutinib but with more than 3 times the potency. 17 It is also more selective, with fewer off-target effects and, hence, fewer adverse reactions. 17 The present cooperative trial group correlative study, sponsored by the Australasian Leukaemia and Lymphoma Group, sought to determine whether a baseline molecular profile using whole exome sequencing (WES) could predict primary resistance to zanubrutinib and whether the emergence of resistance mutations in circulating tumor DNA (ctDNA), a component of cell-free DNA (cfDNA), heralds clinical progression in patients treated on the MAGNOLIA clinical trial.
Methods
This study was conducted in accordance with the provisions of the Declaration of Helsinki and approved by the local governing institutional review board. Eighteen patients in the Australasian Leukaemia and Lymphoma Group LS21 correlative study were part of the MAGNOLIA study and provided informed consent before procurement of clinical materials. 19 DNA from primary tumor, buccal swabs, and Streck Cell-Free DNA BCT tubes (La Vista, NE) was isolated using commercial isolation kits (Qiagen, Venlo, The Netherlands). Additional information on processing and evaluation of quality metrics is provided in the supplemental Methods. All NGS libraries were constructed using Agilent XTHS reagents and protocols incorporating unique molecular barcoding (Agilent Technologies, CA). For primary tumor, WES was performed using Agilent WES Ver7 reagents; however, bioinformatics analysis was restricted to 48 candidate genes, as listed in supplemental Table 1. This set of genes was selected based on current literature, focusing on those previously reported in MZL studies: those affecting NF-κB, NOTCH, and BCR pathways, tumor suppressors/oncogenes, and genes involved in MZL development and related transcription factors/chromatin remodeling, as well as those that are commonly occurring. 3,4,6 For the sequencing of cfDNA, a bespoke NGS bait capture set for the same 48 genes was used (443 kbp capture area, including a copy number variation [CNV] backbone; SureDesign, Agilent Technologies, CA). Sequencing was performed on a NovaSeq 6000 instrument (Illumina, SP flow cell; 2 × 150 bp chemistry).
Data processing of FASTQ files was performed via an in-house bioinformatic pipeline incorporating unique molecular identifier (UMI) deduplication, VarDict for variant calling, and CNVkit for CNV analysis. Variant calls in patient samples (identified using VarDict) were manually curated by inspecting Binary Alignment Map (BAM) files in the Integrative Genomics Viewer. 21 Only nonsynonymous mutations were included in the final data set, as per predetermined curation criteria (supplemental Methods). Pathogenicity was assessed using OpenCRAVAT, 22 which annotates variants with an impact on protein structure (eg, stop-gain, frame-shift deletions/insertions, and complex substitutions) and integrates online database information, such as ClinVar (version 2022.06.14) and COSMIC (version 94.0.0). 23 For the 17 patients (94%) with tumor samples available for WES and treated with zanubrutinib, mutational analysis was correlated with investigator-assessed ORR and PFS using the Fisher exact test and the Kaplan-Meier (log-rank) method, respectively (GraphPad Prism version 9.3.1). Mutational plots were visualized using GenVisR, and the represented protein structural model of BTK is described in supplemental Methods. Complete NGS data (primary tumor and ctDNA) are available in the NCBI/SRA repository.
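For readers without GraphPad Prism, the sketch below shows how a gene-response association (Fisher exact test) and a PFS comparison (log-rank test) of this kind could be computed in Python; the counts and survival times are invented for illustration and are not the study data.

```python
# Minimal sketch: Fisher exact test for mutation vs response, and a log-rank
# comparison of PFS between mutated and wild-type tumors (illustrative numbers).
from scipy.stats import fisher_exact
from lifelines.statistics import logrank_test

# 2x2 table: rows = mutated / wild-type, columns = responder / non-responder.
table = [[6, 1],
         [9, 1]]
odds_ratio, p_response = fisher_exact(table)

# PFS in months and progression events (1 = progressed, 0 = censored).
pfs_mut, events_mut = [2.5, 9.0, 11.0, 13.4], [1, 1, 0, 1]
pfs_wt, events_wt = [10.8, 11.5, 12.0, 16.8], [0, 0, 0, 0]
lr = logrank_test(pfs_mut, pfs_wt, event_observed_A=events_mut, event_observed_B=events_wt)

print(f"Fisher exact p = {p_response:.3f}; log-rank p = {lr.p_value:.3f}")
```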
Results
NF-κB, NOTCH, and BCR mutations commonly occur in rrMZL

WES was performed on 19 tumor samples from 18 patients before zanubrutinib therapy. Seventeen patients received zanubrutinib during the study, and 1 patient failed screening but was still eligible for molecular characterization of the tumor. Germ line comparison was available for 4 of the patients. The median patient age was 71 years (range: 37-86 years), with a male predominance (67%). All patients received at least 1 prior line of chemoimmunotherapy (median: 1.5; range: 1-4). Patient and tumor sample characteristics are summarized in supplemental Table 2.
Ninety mutations were identified, with multiple mutations of the same gene detected in 7 of 19 tumor samples (37%). Thirty-three (69%) of the candidate genes interrogated were affected ( Figure 1; supplemental Table 3). A median of 5 mutations were detected per tumor sample (range: 0-12), with missense mutations predominating (76%; Figure 1). Paired lymph node and gastric tissue were analyzed in 1 patient (MZ17): NOTCH1, NOTCH2, and KMT2D mutations were identified in both samples, but TNFAIP3 and FAS mutations were identified only in the lymph node. One sample with limited tumor tissue available for analysis failed to yield any variants.
Baseline-screened ctDNA samples were also analyzed for 2 patients (MZ03 and MZ07), for whom tumor WES samples were not available ( Figure 1). Before zanubrutinib therapy, TNFAIP3 and KMT2D mutations were detected in both patients, whereas MZ03 ctDNA also harbored a BTK E41K mutation ( Figure 1).
NF-κB pathway gene mutations may predict response to zanubrutinib in MZL
Seventeen patients (94% of the cohort) with tumor samples available for WES were treated with zanubrutinib. One patient died from COVID-19 infection while on therapy. The median follow-up was 11.1 months (range, 2.76-16.8 months). There was no correlation between MZL subtype or number of previous lines of therapy and survival (median PFS not reached [NR]; P = 0.876; median PFS NR vs NR vs 10.85 months; P = 0.439). There was also no correlation between patient tumor mutational profile and response rates; however, most of the cohort achieved a response (ORR: 88% and CR: 24%; supplemental Table 4). The mutational profile did associate with PFS: 7 patients with a tumor sample containing at least 1 KMT2D mutation (total mutations: 11) had a shortened PFS despite 6 of 7 achieving an objective response (median PFS 13.4 months vs NR; P = 0.05; HR: 6.15; 95% CI: 1.00-37.78; Figure 2A). Two of these patients (MZ11 and MZ17) had tumors with concomitant TNFAIP3 mutations and did not undergo disease progression during the follow-up period. Conversely, patients whose tumors harbored a MYD88 or TNFAIP3 mutation had improved PFS (median PFS, NR vs 11.1 months; P = 0.008; HR: 0.09; 95% CI: 0.01-0.52; Figure 2B).
NOTCH1 and NOTCH2 mutations were each observed in 4 samples and co-occurred in MZ17. NOTCH mutations were not associated with the outcome (median PFS, NR vs NR; P = 0.49; HR: 1.89; 95% CI: 0.31-11.34; Figure 2C). CARD11 mutations were not identified in our cohort.
Baseline mutations of PLCG2 were detected in 2 samples (MZ02 and MZ21); however, these mutations (H244R and L704V) are not known to confer BTKi resistance. MZ02 also harbored a MYD88 L265P mutation; the patient achieved a partial response (PR), and their disease did not progress during the census period (PFS: 11.51 months). MZ21 harbored 2 KMT2D mutations and 1 TP53 mutation (affecting the DNA-binding domain in the latter); this patient never achieved an objective response and had disease progression after 2.5 months of zanubrutinib therapy. A TP53 R290C mutation was also detected in MZ15; this patient achieved a CR and did not have disease progression (the follow-up period was 10.85 months). The mutation was in a non-DNA-binding domain with a variant allele frequency (VAF) of 46%.
FAT genes (FAT1, FAT3, or FAT4) were the most frequently affected genes in our cohort, with 13 mutations detected in 6 tumor samples, including 9 in FAT1. However, they did not correlate with clinical outcomes (median PFS NR vs NR; P = 0.54; HR: 0.56; 95% CI: 0.08-3.66). The frequency of other mutations detected (3 or fewer) was too low for meaningful clinical correlation. CNV analysis was performed but was uninformative in terms of association with response to zanubrutinib (supplemental Figure 1).
Changes in ctDNA burden during therapy
In the responder cohort, MZ01 and MZ02 had the MYD88 L265P mutation identified in tumor sample WES; both patients achieved PR and demonstrated decreasing mutation burden in ctDNA during therapy ( Figure 3A-B). Baseline ctDNA from both patients also demonstrated mutations at screening that were not detected in WES: the KMT2D Q2416H mutation (with decreasing VAF) and the TP53 R248W (stable VAF) in MZ01 and MZ02, respectively. No acquired mutations associated with resistance to BTKi were observed in this cohort.
Emergence of new mutations in ctDNA conferring resistance to BTKi during therapy
Baseline tumor WES was not available for the 3 patients who experienced disease progression on zanubrutinib and had ctDNA available (the progressor cohort). Samples from patients MZ03 (Figure 3C) and MZ07 (Figure 3E) demonstrated the acquisition of mutations associated with BTKi resistance. Patient MZ07 achieved a PR but subsequently had disease progression on day 253, with PLCG2 R665W and L742P mutations observed in the ctDNA at progression but not at baseline (Figure 3E). MZ03 ctDNA demonstrated a BTK E41K mutation before zanubrutinib therapy; this patient's disease progressed early (day 86), with new detectable BTK C481F and C481Y mutations in addition to the persistence of the BTK E41K mutation. This patient continued therapy for another 41 days, with a repeat sample showing the VAF of these respective mutations changing over time (Figure 3C). The BTK E41K mutation was detected at all 3 time points, including screening. MZ05 did not have baseline tumor WES or ctDNA samples available (Figure 3D). Samples for ctDNA were available after the commencement of zanubrutinib and did not demonstrate the acquisition of BTK or PLCG2 mutations, but notably, a mutation affecting BIRC3 and 3 TP53 mutations (R280G, I255F, and R213*) were detected in the earliest time point sample available (day 84) as well as in the progression (day 334) sample.
Differences between ctDNA and baseline tissue WES
Three patients (MZ01, MZ02, and MZ06) had tumor WES and screening ctDNA available for comparison. Of the 13 mutations detected in the WES from these patients, 8 (62%) mutations were detected in the ctDNA. In contrast, 2 mutations detected in the screening ctDNA were not present in the WES. Five mutations, including KMT2D and TNFAIP3, identified in the tumor sample of the screen-failure patient (MZ06), were detectable in the ctDNA. Four additional mutations (CACNA1H, EP300, and 2 TBL1XR1) were detected in the ctDNA but not present in the WES. A PLCG2 H244R mutation detected in the WES of MZ02 was not detected in the ctDNA. The complete list of genes detected via WES and ctDNA can be found in supplemental Tables 3 and 5, respectively.
Discussion
Chronic active BCR-mediated signaling has been identified as a critical step in MZL pathogenesis, providing a rationale for BTKi as a therapeutic tool. 1 Furthermore, the role of genes affecting BCR and related pathways, particularly NF-κB and NOTCH, has been established in several studies of B-cell malignancies, including MZL. 3,4,6,24 In this cohort of patients with rrMZL, all but 2 patients (89%) had at least 1 of these pathways affected by a mutation determined by baseline WES. Our findings are consistent with those of the available literature: Noy et al described a response to ibrutinib among patients with MZL harboring the MYD88 or TNFAIP3 mutation and worse outcomes in those with KMT2D mutations. 10 This indicates an expected overlap between the response and resistance determinants of ibrutinib and zanubrutinib, consistent with a common therapeutic target. NOTCH mutations, which are associated with improved responses to BTKi in other lymphoproliferative disorders such as mantle cell lymphoma and chronic lymphocytic leukemia, did not appear to be associated with response in our MZL cohort nor that of Noy et al, although this observation is limited by the small sample size. 10,25,26 The biological mechanisms underpinning BTKi responsiveness have not been fully elucidated but, at least in the case of the MYD88 mutation, the finding is not surprising. Somatic activating mutations of MYD88 promote toll-like receptor activation via BTK interaction and NF-κB signaling. [27][28][29] BTK inhibition has proven efficacious in patients with Waldenström macroglobulinemia, >90% of whom harbor the MYD88 L265P mutation. 2,30 BTKis can also be effective in patients with Waldenström macroglobulinemia harboring wild-type MYD88, likely because of the presence of other mutations in NF-κB pathway genes. 31 Loss of the ubiquitin-editing enzyme TNFAIP3 results in increased NF-κB signaling, which can be counteracted by a BTKi. 32,33 Our findings support those recently reported in which a single mutation affecting NF-κB pathway signaling (MYD88 or TNFAIP3) is sufficient to affect BTKi response, because these mutations were mutually exclusive in both our WES-assessed cohort and other studies. 3,4 The biological consequences of KMT2D mutations in MZL development are less well established. KMT2D encodes a histone methyltransferase, which can function as a tumor suppressor; it is affected by NF-κB signaling, and a deficiency of KMT2D perturbs germinal center B-cell development and promotes lymphomagenesis. 34,35 Interestingly, 2 of the patients harboring KMT2D-mutated MZL whose disease did not progress during zanubrutinib therapy also harbored TNFAIP3 mutations.
We also interrogated genes known to be commonly mutated in MZL. 3,4,6 Of these, mutation of FAT genes was most frequently observed. FAT genes encode atypical cadherins, which can exhibit known tumor suppressor activity in solid organ malignancies. 36 Although commonly mutated in MZL, their role in the disease's development remains unclear, and their presence was not associated with progression outcomes in our cohort treated with zanubrutinib.
Using a bespoke hybrid-capture bait technology with unique molecular indexes, we were able to detect and track the MYD88 L265P mutation in 2 of our patients with a sensitivity of 0.1%. This is relevant given the predictive value of the MYD88 mutation for response to BTKis. The decreasing, but persistent, VAF is consistent with the PRs achieved by both patients, which were ongoing at the time of final sampling.
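As a simple illustration of the underlying arithmetic, the sketch below derives VAF from alternate-allele and total read counts across serial ctDNA samples; the read counts are invented and do not correspond to the patients described here.

```python
# Minimal sketch: VAF = variant-supporting reads / total depth, tracked over time.
serial_samples = [                     # invented counts for illustration only
    {"day": 0,   "alt_reads": 180, "total_depth": 9000},
    {"day": 84,  "alt_reads": 45,  "total_depth": 9500},
    {"day": 168, "alt_reads": 12,  "total_depth": 8800},
]

for s in serial_samples:
    vaf = s["alt_reads"] / s["total_depth"]
    print(f"day {s['day']:>3}: VAF = {vaf:.3%}")   # a falling VAF is consistent with response
```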
Two patients with disease progression had detectable PLCG2 (R665W, L742P) and BTK C481Y/F mutations at the time of progression, confirming that patients with MZL appear susceptible to the same acquired resistance mutations seen in chronic lymphocytic leukemia. 11 The latter case was informative for 2 reasons: firstly, the baseline ctDNA sample (before zanubrutinib commencement) harbored the BTK E41K mutation previously reported in diffuse large B-cell lymphoma but not in the genomic landscapes of indolent NHL, including MZL. 7,8 This mutation in a noncatalytic site in the pleckstrin-homology domain is remote from the catalytic site where zanubrutinib binding occurs and has previously been validated as an activating mutation using preclinical modeling. 8,37 To our knowledge, this novel finding in our cohort represents the first report of an activating BTK mutation before BTKi in a patient with indolent NHL. 8 Secondly, there was evidence of clonal selection at a nucleotide level within the BTK catalytic site, consistent with the complex resistance patterns that occur at the single-cell level. 38 Such mutations would not have been detected via digital droplet polymerase chain reaction, which is one of the recommended technologies for ctDNA detection. 39 The third patient in the progressor cohort did not have mutations of BTK or PLCG2 detected; however, mutation of BIRC3 may potentially account for the disease progression, because this has been demonstrated to confer resistance to BTKi in other B-cell NHL via noncanonical NF-κB pathway activation. 40,41 Patients who develop the BTK C481S mutation may respond to noncovalent BTKis (pirtobrutinib, ARQ 531, fenebrutinib, and vecabrutinib), though these agents do not overcome other select BTK or downstream PLCG2 mutations. 8,42 Our study is limited by small numbers and the germ line comparator being available for only 22% of the patients. We also cannot exclude that some of the mutations are germ-line variants, especially those with VAFs approximating 50% (eg, MZ15 TP53 R290C), and may not be pathogenic in certain oncogenic contexts. Furthermore, we predominantly limited our analysis to the coding regions of the 48 genes implicated in signaling pathways related to BTKi and MZL development. Although our candidate gene list was comprehensive for these pathways, we cannot exclude that there may be other mutations, including those affecting noncoding regions, that may be associated with response to BTKi.
Furthermore, not all the mutations detected in WES were detected in ctDNA. Concordance between tumor-based and ctDNA-based genotyping is reported at >70%, but it is related to tumor shedding and resultant cfDNA concentrations. 39 In general, low-grade lymphomas have lower cfDNA concentrations than aggressive lymphomas. 39,43 The cfDNA concentration in our cohort (mean: 10.6 ng/mL; median: 6.9 ng/mL) compares favorably to that of ctDNA studies in other low-grade lymphomas of ~1.15 to 6.5 ng/mL but is still significantly lower than in aggressive lymphomas (~650 ng/mL). 43,44 Even with an assay sensitivity of 0.1%, not all mutations detected in the tumor will be present in the ctDNA without sufficient tumor shedding.
In contrast, several mutations present in ctDNA were not identified in WES. This is commonly observed and likely represents spatial tumor heterogeneity, as demonstrated in the patient where we sampled 2 different tumor samples. 39 We cannot exclude that other patients may have demonstrated intratumoral heterogeneity, but obtaining multiple tissue samples from patients was not feasible in this study. There is also the possibility that some of the mutations do not originate in the lymphoma itself but represent either clonal hematopoiesis of indeterminate potential or shedding from other undetected malignancies (ie, TP53 R248W in MZ02).
In summary, the correlative studies described herein have demonstrated that mutations in MYD88 and TNFAIP3 associate with improved PFS, and mutations in KMT2D may associate with reduced PFS for patients with rrMZL treated with zanubrutinib. The novel finding of the noncatalytic BTK E41K mutation in our cohort also describes a potential primary resistance mechanism to BTKi treatment. The hypothesized ability to detect mutations potentially predicting response via noninvasive sampling (as exemplified by the detection of the MYD88 L265P mutation in ctDNA) was demonstrated. Furthermore, our hypothesis of acquired resistance to BTKi mediated through acquired BTK and PLCG2 mutations was supported and may herald clinical progression. These studies have been informative for the use of BTKis for rrMZL in terms of predicting the primary response or resistance and demonstrating acquired resistance mechanisms. Larger cohort studies should be performed to provide the power to validate our observations with a view to optimizing the selection of patients with rrMZL that may derive clinical benefit from therapeutic BTK inhibition.
Acknowledgments
The Australasian Leukaemia & Lymphoma Group (ALLG) acknowledges the support of our participating member hospitals and the patients who consented to participate. The ALLG acknowledges the National Blood Cancer Registry for its role in data and the Blood Cancer Therapeutics Laboratory at Monash Health for its role in the laboratory work. Four ALLG-associated sites contributed 4 tumor and buccal samples and cfDNA from 7 patients. BeiGene Ltd provided 14 tumor samples from international patients who had provided consent for tissue use for biomarker studies. BeiGene Ltd provided financial support for the studies and reviewed the final manuscript.
J.S. is supported by an Australian NHMRC EL2 Fellowship.
|
2023-03-23T06:17:31.454Z
|
2023-03-22T00:00:00.000
|
{
"year": 2023,
"sha1": "cbf0f799d2dda73db517dadeab101fabd2a08b4e",
"oa_license": "CCBYNCND",
"oa_url": "https://ashpublications.org/bloodadvances/article-pdf/doi/10.1182/bloodadvances.2022009412/2040009/bloodadvances.2022009412.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "706943f99887817f6eae3cd00b287704304157bd",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
249251758
|
pes2o/s2orc
|
v3-fos-license
|
Molecular Study to Detect Escherichia coli in Diarrheic Children and its Antibiotic Resistance
Diarrheal diseases can lead to infections and cause morbidity and mortality in children. Diarrheagenic Escherichia coli (DEC) is an etiological agent that is considered the major cause of diarrhea in children in some developing countries. The aims of this work were to estimate Escherichia coli (E. coli) causing diarrhea in children less than 5 years old, to detect some biofilm virulence factors, and to determine the effect of some antibiotics. For the methodology, a total of 112 specimens were collected from children at two health centers, Al-Zahraa Teaching Hospital and the Public Health Laboratory (located in Al-Kut city, Wasit province, Iraq). All specimens were grown on simple and rich media. A total of 43 (38.4%) E. coli isolates were identified using different traditional methods, such as biochemical tests, and 16S rRNA sequencing. Polymerase chain reaction (PCR) testing was used to detect some virulence factor genes that play an important role in the pathogenesis of diarrheic E. coli, e.g., 16S rRNA, bfpA, and eaeA. In this study, several antibiotics were used to estimate the sensitivity and resistance of the E. coli isolates. A total of 43 isolates were fully identified as E. coli. These isolates were used to detect the virulence factor genes, and 31 (72.1%) and 29 (67.4%) isolates carried bfpA and eaeA, respectively. All 43 (100%) E. coli isolates were resistant to penicillin. Additionally, 33 (76.7%) and 27 (62.8%) isolates were resistant to cephalothin and amoxycillin-clavulanic acid, respectively. Furthermore, the E. coli isolates showed different levels of sensitivity to antibiotics, including polymyxin B 40 (93%), norfloxacin 38 (88.4%), gentamycin 26 (60.4%), and meropenem 22 (51.2%). In conclusion, diarrheagenic E. coli isolates were prevalent among diarrheic children. Most isolates showed varying results for the presence of virulence factors. In addition, all isolates were resistant to penicillin and sensitive to polymyxin B.
Diarrheal disease is a major global problem, causing more than 2 million deaths annually and primarily affecting children under five years of age. In addition, diarrhea is considered one of the major disease-contributing factors for infection and death among children and the second major cause of death globally among different groups of children under five years of age, following mortality resulting from respiratory tract infections. 1,2 Hebbelstrup et al. 3 mentioned that one of the major bacteria that causes diarrhea is diarrheagenic Escherichia coli (DEC), which causes gastrointestinal infections. In addition, there are six DEC pathotypes, including enteropathogenic E. coli (EPEC), enterotoxigenic E. coli (ETEC), enterohemorrhagic (Shiga toxin-producing) E. coli (EHEC-STEC), enteroinvasive E. coli (EIEC), enteroaggregative E. coli (EAEC), and diffusely adhering E. coli (DAEC). ETEC, EPEC, and EAEC primarily target the gut, while DAEC, EHEC-STEC, and EIEC affect the colon. Furthermore, ETEC is a common bacterial agent that causes diarrhea and death in most developing countries. The signs of ETEC infection are similar to those of other bacterial infections, such as Vibrio cholerae, but appear milder. The specific virulence factors that distinguish ETEC from other DEC are its enterotoxins. 4,5 Kaper et al. 5 and Alikhani et al. 6 reported that EPEC is an important pathogenic group of DEC that is linked to diarrhea in children in developing countries. Strains of typical EPEC have an extra-chromosomal DNA (plasmid) named EPEC adherence factor (EAF). It encodes a type 4 pilus called the bundle-forming pilus (bfp). Several types of EPEC possess a chromosomal gene called the eae gene, which encodes the outer membrane protein intimin and affects the gastrointestinal tract mucosa. In addition, isolates of EHEC-STEC cause bloody and/or non-bloody diarrhea and hemolytic uremic syndrome. Furthermore, the key virulence factor of EHEC is Shiga toxin (stx gene), which is also recognized as Vero-toxin (Vtx) and consists of two subgroups: stx1 and stx2. The most important serotype among the EHEC-STEC strains was shown to be O157:H7. 7 EAEC causes diarrhea in adults and travelers. This pathotype is defined as a novel gut pathogen that causes various disorders worldwide. EAEC adheres to HEp-2 cells and the mucosa of the gut by fimbriae named aggregative adherence fimbriae (AAFs), which are encoded by the aggR gene located on the essential plasmids of EAEC named pAA. 8,1 Jafari et al. 9 clarified that EIEC strains are a pathotype that causes inflammatory invasive colitis and sometimes bacterial dysentery. In most cases, EIEC causes watery diarrhea.
Bonkoungou et al. 10,11 mentioned that these diseases are common in developing countries, particularly in areas with poor sanitation, hygiene, and a limited amount of safe drinking water. In addition, poor health conditions, such as malnutrition, increase the risk of infection with diarrhea. The causative agents of diarrhea, especially in acute cases, involve a wide range of pathogens, including bacteria, viruses, parasites, and fungi. In previous studies, viruses such as Rotavirus and E. coli were the two major causative agents of diarrhea, in addition to other pathogens, such as Campylobacter spp. 12 Zaidi et al. 13 and Vu Nguyen et al. 14 demonstrated that the major causative agents of diarrhea represented by diarrheagenic pathogens include DEC, Rotavirus, bacterial dysentery (Shigella spp.), Salmonella spp., amebic dysentery (Entamoeba histolytica), enterotoxigenic Bacteroides fragilis, Campylobacter jejuni, and parasitic Cryptosporidium spp. Furthermore, DEC is a major causative agent of severe diarrhea and a major public health concern. 15 E. coli is an important bacterial pathogen that causes diarrhea and death, especially during childhood. 16 In addition, DEC is a remarkable cause of childhood diarrhea and is responsible for 30-40% of acute diarrhea cases in developing countries. 17 DEC is a remarkable etiological cause of both sporadic cases and diarrheal outbreaks worldwide. The most common DEC pathotypes cause increased morbidity and mortality globally. 18 Resistance to antibiotics has recently emerged as the most common problem worldwide. This has been attributed to the random sale of different antibiotics, incentives for healthcare suppliers to prescribe antibiotics, human expectations, and rising costs due to the emergence of antibiotic resistance. 19,20 Moreover, Liu et al. 21 and Jones et al. 22 stated that the noteworthy benefits of antibiotics in decreasing mortality and morbidity rates have been challenged by the emergence of antibiotic-resistant strains in recent years. Apart from the acquisition of virulence genes by E. coli, there are a large number of cases of antibiotic resistance gene possession by the microorganism in clinical samples from animals and the environment. 23,24 Genetic changes associated with phenotypic resistance to several antibiotics, such as tetracycline, gentamycin, quinolones, sulfa-trimethoprim, and β-lactams, have been investigated in DECs. 25 Jafari et al. 26 demonstrated that studies in the capital Tehran (Iran) showed a high frequency of resistance among STEC in populations of EAEC-, STEC-, EPEC-, and ETEC-infected children with diarrhea. In contrast, another study conducted in western Iran reported an increased phenotypic rate of resistance among EHEC in a population of STEC-, EHEC-, and EPEC-infected children. 27 Bai et al. 28 and Montealegre et al. 29 stated that the E. coli resistance phenotype is highly polymorphic, and this is attributed to genome flexibility in E. coli, accelerating the emergence of pathogenic types with individual antibiotic resistance phenotypes.
The goals of this study were to estimate the incidence of E. coli causing diarrhea in children less than five years old, to detect some virulence factors, and to determine the effect of some antibiotics.
Collection and culture of different specimens
A total of 112 clinical samples were collected from different sites; these included stool swabs from children who were hospitalized in two health centers: Al-Zahraa Teaching Hospital and the Public Health Laboratory (located in Al-Kut City, Wasit Province, Iraq). All these specimens were grown on traditional and rich media. Initially, bacteria were cultured on blood agar and nutrient agar, then on selective media, such as MacConkey agar, eosin methylene blue (EMB) agar, and brain heart infusion (BHI) broth; all samples were incubated at 37°C for 18-20 hours. Furthermore, conventional and molecular methods were used to identify the bacterial isolates. Microbiological techniques, such as biochemical tests and PCR, were used to characterize the isolates. Moreover, Mueller-Hinton agar was used to assess antibiotic sensitivity against the different E. coli isolates. Strain E-2348 was used as a control for the PCR assay (Center for Vaccine Development, USA).
Extraction protocol for E. coli DNA and PCR technique
DNA from the E. coli isolates was extracted using the Geneaid Genomic DNA Extraction Kit (USA) according to the manufacturer's instructions. Briefly, E. coli specimens were centrifuged, and the pellets were suspended in 0.2 ml of buffer for ten min. A total of 0.2 ml of GD buffer was then added and incubated for ten min. Subsequently, 0.2 ml of absolute ethanol was added to the lysate. The lysate was transferred to a GD column placed in a 2 ml collection tube and centrifuged. W1 buffer was added to the GD column and centrifuged. In addition, wash buffer was added, the column was centrifuged, and the DNA was eluted from the column. The elution buffer was added and left for 3 min in order to ensure that pure DNA was obtained. Several virulence genes are required for the detection of DEC, all of which were detected by PCR testing.
Preparation of reaction master mix for PCR
The PCR master mix was prepared using the GoTag Green Master Mix Kit (Promega, USA), and the master mix was prepared according to the manufacturer's instructions, as summarized in Table 1.
Polymerase chain reaction (PCR) thermocycler program
Additionally, PCR thermocycler conditions for E. coli were run on a PCR thermocycler system and were the same for each gene except for the annealing temperature, as outlined in Table 2.
The specific primers used to detect the E. coli isolates, targeting 16S rRNA, eaeA, and bfpA, were designed by Eurofins MWG Operon (MWG, Germany) (Table 3). The concentration and quality of the DNA specimens were estimated using a NanoDrop. The amplified DNA product was stained with ethidium bromide.
Antibiotic sensitivity test against E. coli
Sensitivity tests were conducted using the disc method, and antibiotics were selected according to the Clinical and Laboratory Standards Institute (CLSI). Mueller-Hinton agar was used for this purpose. Nutrient agar cultured with E. coli (10⁷ CFU/ml) was incubated at 37°C for 24 hours. Discs of different antibiotics were placed on the surface of the agar. The antibiotics used in the current study were as follows: penicillin (PEN) 10 U, amoxicillin-clavulanic acid (AMC) 10 µg, gentamicin (GEN) 10 µg, meropenem (MPM) 10 µg, norfloxacin (NOR) 10 µg, trimethoprim-sulfamethoxazole (SXT) 250 µg, cephalothin (INN) 30 µg, and polymyxin B (PMB) 200 U. The results of this method (resistant, intermediate, or susceptible) were interpreted according to the CLSI system. All E. coli isolates were tested for multidrug resistance (MDR).
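A minimal sketch of how zone diameters can be classified against breakpoint tables is shown below; the numeric breakpoints are hypothetical placeholders only and must be replaced with values from the current CLSI tables.

```python
# Minimal sketch: classify disc-diffusion zone diameters as R/I/S.
breakpoints_mm = {                 # (resistant_max, susceptible_min) - hypothetical values
    "gentamicin":  (12, 15),
    "norfloxacin": (12, 17),
    "meropenem":   (19, 23),
}

def interpret(antibiotic: str, zone_mm: float) -> str:
    r_max, s_min = breakpoints_mm[antibiotic]
    if zone_mm <= r_max:
        return "R"   # resistant
    if zone_mm >= s_min:
        return "S"   # susceptible
    return "I"       # intermediate

print(interpret("gentamicin", 14))   # -> "I" with these placeholder breakpoints
```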
Statistical analysis method
All data were subjected to a one-way analysis of variance (ANOVA). A P-value of less than 0.05 was considered statistically significant.
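A minimal sketch of such a one-way ANOVA in Python is shown below; the group values are invented purely to demonstrate the call.

```python
# Minimal sketch: one-way ANOVA across three illustrative groups.
from scipy.stats import f_oneway

group_a = [12, 15, 14, 16]
group_b = [22, 19, 24, 21]
group_c = [13, 17, 15, 14]

f_stat, p_value = f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")   # p < 0.05 taken as significant here
```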
Isolation and identification of E. coli
A total of 43 (38.4%) E. coli isolates were obtained from stool samples of children. These isolates were identified using conventional and molecular methods, such as culture and microscopic examination, biochemical tests, and PCR, and all results were confirmed using molecular techniques, such as 16S rRNA sequencing. All bacterial isolates demonstrated similar results across the several conventional tests. The DNA of all E. coli isolates was extracted, and PCR was performed. All isolates were identified as DEC.
Identification of E. coli by PCR technique
The different E. coli isolates were identified as DEC using 16S rRNA and were considered PCR positive (Fig. 1).
As for virulence factors in DEC, bfpA was detected in 31 (72.1%) E. coli isolates, which were considered PCR positive, whereas the eaeA gene was detected in 29 (67.4%) isolates, also considered PCR positive (Fig. 2 and 3). These genes play a critical role in the pathogenesis of diarrhea caused by E. coli.
DISCUSSION
Most E. coli strains are harmless; however, some cause diarrhea, and certain strains (e.g. E. coli O157:H7) can cause serious symptoms such as stomach cramps, vomiting, and bloody diarrhea. Successful management of any infectious disease requires recognition of the causative agents and treatment of the signs manifested by the disease. This study was conducted to determine the isolation rate and virulence factors of diarrheagenic E. coli (DEC) in Wasit Province (Iraq). Diarrhea is a multifactorial disorder related to a wide range of pathogens, including bacteria, viruses, and parasites. 21,30 E. coli is the bacterium most commonly isolated in DEC cases when traditional and molecular methods are applied. The results of the current study are in agreement with those of Begum et al., 31 conducted in Mizoram. In the current study, the prevalence of DEC was higher than that of the other microorganisms (Table 2). The current study agreed with other studies conducted in Iran (Tehran) and Tanzania by Jafari et al. 26 and Moyo et al., 32 respectively, who demonstrated that the most common microorganism was DEC (7.9%), which was lower than that reported in other developing countries. In addition, a study by Dias et al. 33 (Brazil and Mexico) observed that EAEC was the primary DEC pathotype, with a rate of 50%.
Regarding non-DEC causes of diarrhea, contagious diseases that do not primarily affect the gastrointestinal tract (GIT) can also cause acute diarrhea. The pathogenesis of this type of diarrhea involves intestinal inflammation, cytokine action, red blood cell (RBC) sequestration, programmed cell death, increased endothelial cell permeability in the GIT microvasculature, and invasion of GIT epithelial cells by several agents. Symptoms such as fever and diarrhea occur in patients with severe acute respiratory syndrome (SARS), Plasmodium infections (malaria), and dengue fever. Diarrhea also occurs in patients with acquired lung inflammation suggestive of legionellosis, and in those with systemic bacterial infections. Although diarrhea is rare in patients with early borreliosis, its incidence is high in other tick-borne infections, such as ehrlichiosis. Unfortunately, it is often not established whether diarrhea is an initial clinical sign and/or whether it develops during the course of the disorder. 34 Recently, molecular diagnostic techniques have become common in clinical laboratories. PCR is capable of detecting several pathogens via the amplification of specific genes encoding important virulence factors. Where it is difficult to diagnose DEC using traditional laboratory techniques, PCR is beneficial in clinical laboratories because of its specificity and sensitivity. In the current study, all 43 (100%) isolates were confirmed as E. coli by PCR, and 31 (72.1%) and 29 (67.4%) of the E. coli isolates carried the bfpA and eaeA genes, respectively. Furthermore, typical EPEC is the most common cause of watery diarrhea in children, especially in developing countries. The current study is compatible with, and closest to, a study carried out on children in Peru (South America) by Contreras et al. 35, who observed that the predominant genes in the most common diarrheal pathotype were bfpA (74%) and eaeA (54%). In addition, a study conducted in Yogyakarta, Indonesia, by Harti et al. 36 observed that the percentages of predominant genes in DEC were espA (85%), bfpA (80%), and eaeA (51%). Another study among young children with diarrheal disease in South Africa, by Galane and Roux, 37 found by PCR that 59 (32.6%) E. coli isolates carried eaeA genes, 6 (3.3%) possessed bfpA genes, 4 (2.2%) carried CNF1, and 2 (1.1%) carried Stx2 genes. These results differ from those of the current study, and the difference between the two studies may be ascribed to the different conditions and geographical areas and/or to other related genes or genes carried on plasmids, among other factors.
Table 1. Composition of the PCR reaction master mix
Table 3. Primers tested for E. coli in the present work, including the primer selected for the 16S rRNA gene. * Designed by the authors using the National Center for Biotechnology Information (NCBI).
Table 4. Antibiotic resistance and susceptibility of the DEC isolates. Columns: antibiotic, dosage (µg), resistant, intermediate, sensitive. No*: number; **: E. coli isolate is MDR.
|
2022-06-02T15:14:06.041Z
|
2022-05-31T00:00:00.000
|
{
"year": 2022,
"sha1": "94e03e7f9a579a11c6833e4fbed1b91e60345cca",
"oa_license": "CCBY",
"oa_url": "https://microbiologyjournal.org/download/57149/",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "dfa23e63fedd85e695a60a7105865330d2e815a0",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
}
|
597381
|
pes2o/s2orc
|
v3-fos-license
|
A Distributed Differential Space-Time Coding Scheme With Analog Network Coding in Two-Way Relay Networks
In this paper, we consider general two-way relay networks (TWRNs) with two source and N relay nodes. A distributed differential space time coding with analog network coding (DDSTC-ANC) scheme is proposed. A simple blind estimation and a differential signal detector are developed to recover the desired signal at each source. The pairwise error probability (PEP) and block error rate (BLER) of the DDSTC-ANC scheme are analyzed. Exact and simplified PEP expressions are derived. To improve the system performance, the optimum power allocation (OPA) between the source and relay nodes is determined based on the simplified PEP expression. The analytical results are verified through simulations.
Index Terms
Analog network coding, distributed differential space-time coding, two-way relay network.
I. INTRODUCTION
It is well known that cooperative communication improves system robustness and capacity by allowing nodes to cooperate in their transmission to form a virtual antenna array [1]. Compared to one-way relay networks (OWRN), two-way communication is an effective scheme to improve the spectral efficiency by allowing the simultaneous exchange of two-way information flows.
In [2], the authors first studied two-way relay networks (TWRNs) and derived their achievable bidirectional rate. TWRNs have attracted increased interest due to their high spectral efficiency.
In [4], the conventional network coding scheme was applied to the TWRNs. In this scheme, two source nodes transmit signals to the relay, separately. The relay decodes the received signals, performs binary network coding, and broadcasts network coded symbols back to both source nodes. However, this scheme may cause irreducible error floor due to the detection errors which occur at the relay node. In [3], an amplify and forward based network coding scheme was proposed. In this scheme, both source nodes transmit at the same time so that the relay receives a superimposed signal. The relay amplifies the received signal, and broadcasts it to both source nodes. Each source node subtracts its own contribution and estimates the signal transmitted from the other source node. Analog network coding is particularly useful in wireless networks as the wireless channel acts as a natural implementation of network coding by summing the wireless signals over the air.
Recently, distributed space-time coding for OWRNs was proposed in [5] to achieve spatial diversity. Since OWRNs take place only in a single-direction, to further improve the spectral efficiency of the relay networks, the distributed space-time coding was proposed for TWRNs in [6] and [7]. However, most of the existing works on distributed space-time coding in TWRNs consider coherent detection at each receiver with the assumption of available channel-state information (CSI). In some situations, e.g., the fast-fading environment, the acquisition of accurate CSI presents great challenge, and training becomes expensive and inefficient while there are a large number of relays in the wireless networks [8]. In this case, differential modulation would be a practical solution because it requires no knowledge of the CSI.
The distributed differential space-time coding was first proposed for OWRNs in [9]. In TWRNs, the signal received at the relay node is a superposition of two symbols sent from two source nodes. Thus, if there is no CSI available at source and relay nodes, it will be very difficult to design distributed differential modulation schemes in TWRNs. The challenge is due to the blind channel estimation from the superimposed signals at the relay and unknown self-interference at each destination. In [10], the authors first extended the distributed differential space-time coding to TWRNs. In order to enable differential encoding and decoding, this scheme starts with a four-stage initialization phase, which is similar to traditional one-way relaying, to transmit the bi-directional reference signals respectively. After initialization, each user then proceeds to the data transmission. Information exchange between two users is done in two time slots. However, the decoding algorithm in [10] is a noncoherent detection scheme where the decoding of current symbol is based on the estimation of the previous symbol. Consequently, when one symbol was decoded incorrectly, it will affect the decoding of consecutive symbols thus leading to serious error propagation. To solve this problem, periodical initialization of the protocol has to be performed to transmit new reference signals for decoding, making the proposed scheme inefficient. Furthermore, no pairwise error probability (PEP) analysis was performed in [10] due to the complexity of the protocol. Song et al. [8] presented an analog network coding scheme with differential modulation using the amplify-and-forward protocol for bidirectional relay networks.
However, this scheme is limited to single relay node, thus cannot be extended to the distributed space-time codes.
Unlike [8]- [10], in this paper, we propose a distributed differential space time coding with analog network coding (DDSTC-ANC) scheme for the TWRNs with multiple relays. In this scheme, two source nodes perform differential modulation, and transmit the differential modulated symbols to all the relay nodes in the first time slot. The signal received at the relay node is a superposition of two transmitted symbols. In the second time slot, the N relay nodes broadcast the processed signals to both source nodes simultaneously. We propose a blind estimation technique that can be used to subtract the self-interference without knowledge of CSI at both relay nodes and two source nodes. A simple differential signal detector is then developed to recover the desired signal at each source. The performance of the proposed differential DDSTC-ANC scheme is analyzed and the PEP and block error rate (BLER) expressions are derived. They show that the proposed differential scheme can achieve the same diversity order as the coherent detection scheme but is about 3dB away compared to the coherent detection scheme due to the differential transmission. To further improve the system performance, the optimum power allocation (OPA) between the source nodes and the relay nodes is determined based on the provided simplified PEP expression. The analytical results are verified through simulations. Simulation results also show that the proposed differential scheme with OPA yields superior performance improvement over an equal power allocation (EPA) scheme.
The rest of this paper is organized as follows: In Section II, the system model is introduced.
Section III presents the proposed DDSTC-ANC scheme. The performance and diversity order of DDSTC-ANC are analyzed in Section IV. In Section V, the OPA for the DDSTC-ANC is presented. Simulation results are provided in Section VI. In Section VII, we draw the main conclusions.
Notation: Matrices and vectors are denoted using capital letters and boldface lowercase letters, respectively. (·) * , (·) T and (·) H represent conjugate, transpose and conjugate transpose, respectively, for both matrix and vector. For a complex matrix A, det A denotes the determinant of A. I m is the m × m identity matrix. diag{a 1 , · · · , a n } stands for an n × n diagonal matrix whose ith diagonal entry is a i . ln represents the natural logarithm, and || · || is the Frobenius norm. E and P (·) denote the expectation and probability, respectively.
II. SYSTEM MODEL
In this paper, we consider a general TWRN with N + 2 nodes, as shown in Fig. 1, where two source nodes, T 1 and T 2 , want to exchange information with each other through N relay nodes.
It is assumed that each node in the network is equipped with one single antenna working in the half-duplex mode. We consider a quasi-static fading channel, where the channel remains constant for the duration of a frame and varies independently from one frame to another. Let f i and g i denote the complex fading channel coefficients of T 1 −R i and T 2 −R i , respectively. Furthermore, we assume Rayleigh flat fading channels, i.e., f i ∼ CN (0, σ 2 f i ) and g i ∼ CN (0, σ 2 g i ), respectively. For analysis tractability, symmetry of the relay nodes is assumed in this paper, i.e., σ f i = σ f , ∀i and σ g i = σ g , ∀i.
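A minimal sketch of the channel model assumed above is given below: i.i.d. Rayleigh flat fading, i.e. coefficients drawn from a circularly symmetric complex Gaussian distribution and held constant over a frame. The number of relays and the variances are illustrative values only.

```python
# Rayleigh flat-fading coefficients f_i ~ CN(0, sigma_f^2), g_i ~ CN(0, sigma_g^2).
import numpy as np

rng = np.random.default_rng(0)

def rayleigh_coeffs(n_relays: int, sigma2: float) -> np.ndarray:
    """Draw n_relays coefficients from CN(0, sigma2)."""
    scale = np.sqrt(sigma2 / 2.0)
    return scale * (rng.standard_normal(n_relays) + 1j * rng.standard_normal(n_relays))

f = rayleigh_coeffs(4, sigma2=1.0)   # T1 -> relay channels
g = rayleigh_coeffs(4, sigma2=1.0)   # T2 -> relay channels
print(np.abs(f), np.abs(g))          # Rayleigh-distributed magnitudes
```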
A general two-time-slot TWRN protocol is used, as shown in Fig. 1. In the first time slot, both T 1 and T 2 transmit their messages and the relays {R 1 , · · · , R N } receive a superposition of the signals transmitted from T 1 and T 2 . Let s(t) = [s 1 (t), · · · , s T (t)] T and d(t) = [d 1 (t), · · · , d T (t)] T denote the transmitted symbol vectors of T 1 and T 2 at time t, respectively; they are normalized to unit average power. The received signal vector at R i can be written as r i (t) = √P 1 f i s(t) + √P 2 g i d(t) + v i (t), where P 1 and P 2 denote the transmit power of T 1 and T 2 , respectively, and v i (t) represents the noise vector at R i , each noise term following a zero-mean complex additive white Gaussian distribution. During the second time slot, R i processes r i (t) to generate a space-time coded symbol vector x i (t). In this paper, we consider the amplify-and-forward protocol at the relay nodes. The transmit signal at the ith relay is designed to be a linear function of its received signal and its conjugate [11], x i (t) = β i (t)(A i r i (t) + B i r i (t) * ), where A i and B i are two T × T complex matrices specifically designed for the construction of distributed space-time codes, and β i (t) is the scaling factor at R i .
In this work, the scaling factor β i (t) in Eq. (2) can be obtained based on the available statistical CSI [12] as β i (t) = √(P R i /(P 1 σ 2 f i + P 2 σ 2 g i + N 0 )), where P R i is the transmitted power of R i . Since we assume P R i = P R , we have β i (t) = β, ∀i.
For simplicity, in this paper, we only design systems in which either A i is unitary and B i = 0 (case I), or A i = 0 and B i is unitary (case II). Thus, case I means that the ith column of the code matrix (S(t) and D(t) in Eq. (6)) contains only the transmitted symbols, and case II means that the ith column of the code matrix contains linear combinations of the conjugates of the transmitted symbols only. Furthermore, we assume that T = N, i.e., the number of symbols in a space-time block code is equal to the number of relay nodes. We further define the code matrices S(t) and D(t) that appear in Eq. (6). The relay node R i then broadcasts the coded symbol vector x i (t) back to both source nodes.
Since T 1 and T 2 are mathematically symmetrical, for simplicity, in the following we only discuss the decoding and the analysis for the signals received by T 2 [13]. The received signal vector at T 2 is given by y 2 (t) = Σ_{i=1}^{N} g i x i (t) + w 2 (t), where w 2 (t) denotes the independent and identically distributed (i.i.d.) additive white Gaussian noise (AWGN) vector at T 2 , with w 2 (t) ∼ CN (0, N 0 I T ).
The received signal at T 2 can then be rewritten in terms of the equivalent channel vectors h 12 (t) and h 22 (t) and an equivalent noise vector n 2 (t), for which it is easy to prove that E{n 2 (t)n 2 (t) H } = σ 2 n2 (t)I N , where σ 2 n2 (t) denotes the equivalent noise variance.
III. DISTRIBUTED DIFFERENTIAL SPACE-TIME CODING FOR TWRNS
In this section, we propose a distributed differential scheme. First, we blindly estimate the channel h 22 (t) in Eq. (6), which can be used to subtract the self-interference. Then, a simple differential signal detector is developed to recover the desired signal at source T 2 .
In the proposed DDSTC-ANC, T 1 encodes a message at time t into an N × N unitary matrix U(t), and T 2 similarly encodes its message into an N × N unitary matrix V(t). For the first block, a known reference vector satisfying the required normalization is transmitted, so that both source nodes share a common reference. Analogously to differential space-time coding for multiple-antenna systems, having U(t) and V(t) unitary preserves the transmit power.
For simplicity, we introduce the shorthand Û i (t). In the distributed differential scheme, the codes U(t) and V(t) should commute with the relay matrices [9].¹ Hence, S(t) can be rewritten as S(t) = U(t) · S(t − 1); similarly, we have D(t) = V(t) · D(t − 1).
The distributed differential space-time codes (STC) for TWRNs should be designed to satisfy Eq. (8). The design and choice of appropriate codes is beyond the scope of this work; here, we only briefly introduce some existing STCs that can be used in TWRNs. For TWRNs with two relays, we can use the Alamouti code [18], which has full diversity and linear decoding complexity. Square real orthogonal codes (SORCs), which also have full diversity and linear decoding complexity, were proposed in [9] for two-, four- and eight-antenna systems.
Theorem 1: If the relay matrices have the stated commutation property, then h 22 (t) can be approximated by an average over the received blocks, where L denotes the number of STC symbols in a frame.
¹ More properties of the differential space-time coding can be found in [14]–[17].
Proof: It can be proved by direct matrix multiplication and expectation. Due to the limited space, we omit the details.
We note that since receiver T 2 knows the symbols d(t) sent by itself, using the blindly estimated channel h 22 (t), we can subtract the self-interference at T 2 without using pilot symbols at the beginning. Although we can blindly estimate channel h 22 (t), T 2 does not have any CSI of h 12 (t). Then based on the above theorem, a simple differential signal detector is developed to recover the desired signal s(t) at source T 2 . In the later performance analysis section, we assume that h 22 (t) is perfectly cancelled. Most of papers on distributed STCs for TWRNs also assume perfect self-interference cancellation, such as [7] and [19] for coherent systems and [8] and [13] for differential systems. However, in practice, the estimation error will introduce some performance degradation which depends on estimation accuracy of h 12 (t). The estimated h 22 (t) is used in simulations in this paper. In the simulation section, we have simulated the proposed scheme using the estimated h 22 (t) and the results show that the performance loss due to the h 22 (t) estimation error is negligible.
By using Eq. (9) and Eq. (11) and the assumption of h 12 (t) = h 12 (t − 1), we have ỹ 2 (t) = U(t) ỹ 2 (t − 1) + ñ 2 (t), where ñ 2 (t) = n 2 (t) − U(t)n 2 (t − 1). Note that E{U(t)U(t) H } = I N , and that n 2 (t) and n 2 (t − 1) are independent complex Gaussian random vectors with zero mean and covariance σ 2 n2 (t). Hence ñ 2 (t) is a Gaussian random vector with zero mean and covariance σ̃ 2 n2 (t). The least-squares (LS) decoder can therefore be applied to recover the transmitted signal as Û(t) = arg min_{U k} ||ỹ 2 (t) − U k ỹ 2 (t − 1)||².
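The following sketch illustrates this least-squares differential detection rule on a toy example: given the previous and current received blocks, the detector picks the unitary code matrix minimising the squared distance, with no channel knowledge. The diagonal ±1 codebook and the noise level are illustrative assumptions, not the codes used in this paper.

```python
# Toy LS differential detector: pick U_k minimising ||y(t) - U_k y(t-1)||^2.
import itertools
import numpy as np

rng = np.random.default_rng(1)
N = 2  # number of relays / block size

# Hypothetical unitary codebook: diagonal matrices with +/-1 entries.
codebook = [np.diag(d) for d in itertools.product([1.0, -1.0], repeat=N)]

h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # equivalent channel
y_prev = h.copy()                      # previous received block (reference)
U_true = codebook[2]                   # transmitted code matrix
noise = 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
y_curr = U_true @ y_prev + noise       # differential relation y(t) = U y(t-1) + noise

# LS decoding: only the two received blocks are needed, no CSI.
errors = [np.linalg.norm(y_curr - U @ y_prev) ** 2 for U in codebook]
U_hat = codebook[int(np.argmin(errors))]
print(np.allclose(U_hat, U_true))      # almost surely True at this noise level
```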
IV. PAIRWISE ERROR PROBABILITY AND BLOCK ERROR RATE ANALYSIS
In this section, we derive the PEP and the BLER of the proposed DDSTC-ANC scheme.
Asymptotic diversity order is also analyzed in this section.
A. Pairwise Error Probability
For simplicity, we define U ∆,kj (t) = U k (t) − U j (t) and S ∆,kj (t) = S k (t) − S j (t). The PEP of mistaking the kth STC block for the jth STC block can be evaluated by averaging the conditional PEP over the channel statistics, i.e., over f i and g i [20], where γ = P/N 0 is the signal-to-noise ratio (SNR) and P is the total power in the TWRN. Since it is very difficult to analyse ỹ 2 (t − 1) directly, we approximate it using Eq. (12) as ỹ 2 (t) ≈ √P 1 S(t)h 12 (t). This approximation is particularly accurate at high SNR. Then, based on Eq. (9), we have S ∆,ij (t) = U ∆,ij (t)S(t − 1). We further assume h 12 (t − 1) ≈ h 12 (t). Then, Eq. (14) can be further simplified, and the PEP for the coherent scheme can be derived similarly. Since σ̃ 2 n2 (t) = 2σ 2 n2 (t), the distributed differential scheme in the TWRN is expected to suffer a 3 dB loss in coding gain compared to the distributed coherent scheme.
Lemma 2:
The probability density function (PDF) of f̃(t) can be derived as given in Eq. (17). Proof: Noting that f̃ 1 (t), · · · , f̃ N (t) are independent, we can easily derive Eq. (17). Lemma 3: Let B be an n × n Hermitian matrix (i.e., B H = B) and let x be an n × 1 complex vector; then the identity used below holds. Proof: See [21].
Note that the canonical representation of the Gaussian Q-function is a semi-infinite integral, which makes analysis very difficult. Here, we use an alternative representation of the Gaussian Q-function from [22, Eq. (4.2)], Q(x) = (1/π) ∫_0^{π/2} exp(−x 2 /(2 sin 2 θ)) dθ. Then, by doing some manipulations, we obtain an expression in terms of M i , which depends on N 0 sin 2 θ, and λ i , i ∈ {1, · · · , N}, which denotes the ith singular value of S ∆,kj (t) H S ∆,kj (t). The second step of the derivation is based on Lemma 2 and Lemma 3.
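This alternative (Craig) form of the Q-function can be checked numerically against the standard erfc-based definition, as in the sketch below; the test points are arbitrary.

```python
# Numerical check: Q(x) = (1/pi) * Integral_0^{pi/2} exp(-x^2 / (2 sin^2 theta)) d theta.
import numpy as np
from scipy import integrate, special

def q_classical(x: float) -> float:
    return 0.5 * special.erfc(x / np.sqrt(2.0))

def q_craig(x: float) -> float:
    integrand = lambda theta: np.exp(-x**2 / (2.0 * np.sin(theta) ** 2))
    val, _ = integrate.quad(integrand, 0.0, np.pi / 2.0)
    return val / np.pi

for x in (0.5, 1.0, 2.0):
    print(x, q_classical(x), q_craig(x))  # the two forms agree
```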
Note that the mean of |g i (t)| 2 can be used in place of its instantaneous value, especially for large N (by the law of large numbers) [5], [12]. Hence, let |g i (t)| 2 = γ i (t). Since g i ∼ CN (0, σ 2 g ), the PDF of γ i (t) can be obtained as p(γ i (t)) = (1/σ 2 g ) exp(−γ i (t)/σ 2 g ). Hence, after doing some manipulations, the MGF-based PEP expression is derived in terms of the exponential integral function Ei(x) = −∫_{−x}^{∞} (e^{−t}/t) dt, for x < 0 [23, 8.211.1].
Next, let us derive the simplified PEP expression at high SNR. Note the series expansion of the exponential integral function [23, 8.214], where C is Euler's constant, C ≈ 0.577 [23, 9.73]. When x tends to 0, the exponential integral function can be approximated as Ei(x) ≈ ln(−x), for x < 0. At high SNR, the exponential factor involving M i is approximately equal to 1, and using the approximation for the exponential integral function we obtain the simplified PEP expression. Finally, we derive the well-known Chernoff-bound-based PEP expression: from Eq. (19), setting θ = π/2 and doing some manipulations, the Chernoff-bound-based PEP expression follows, and the average BLER can be obtained based on the well-known union bound over all pairwise error events.
B. Diversity Order
In this subsection, we analyze the asymptotic diversity order of the proposed DDSTC-ANC scheme. Firstly, we define the total transmission power as N · P. Note that N · P = N · P 1 + N · P 2 + N² · P Ri , with P 1 = α 1 P and P 2 = α 2 P. Denote the SNR by γ = P/N 0 . Then, we rewrite M i in terms of γ; thus, the simplified PEP at high SNR can be rewritten accordingly. When S ∆,kj (t)S ∆,kj (t) H is full rank, the diversity can be obtained as [24] d = lim_{γ→∞} −log P d (γ)/log γ. Thus, the diversity of the proposed DDSTC-ANC scheme for TWRNs is N(1 − log log(γ)/log(γ)).
V. OPTIMUM POWER ALLOCATION
In this section, we derive the OPA between the source nodes and the relay nodes that minimizes the total PEP in the TWRNs. Because the MGF-based PEP expression is very hard to analyze and gives little insight, we use the simplified PEP expression to derive the OPA. Here, we consider the total PEP in the TWRNs and denote the PEPs at T 1 and T 2 as P d,1 ij (γ) and P d,2 ij (γ), respectively. The constant C in Subsection IV-B is rewritten as C T1 and C T2 for T 1 and T 2 , respectively. It is obvious that to minimize the PEP at high SNR, we should minimize C −N T1 + C −N T2 in Eq. (28), i.e., solve min_{α 1 ,α 2} (C −N T1 + C −N T2 ). As a special case, when σ 2 f = σ 2 g = σ 2 , we have α 1 = α 2 = α, with the minimum attained when α = 1/4, or equivalently, P 1 = P 2 = P/4 and P R i = P/(2N). Thus, the OPA is such that the source nodes use half the total power and the relay nodes share the other half. We should emphasize that this power allocation only holds for TWRNs in which all channels are assumed to be i.i.d. Rayleigh and no path loss is considered; it may not be optimal when the path-loss effect is included.
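For concreteness, the sketch below evaluates the symmetric-case OPA split (P1 = P2 = P/4, P_Ri = P/(2N)) alongside the EPA split P/(N+2) used later for comparison; the total power and number of relays are illustrative values.

```python
# Power splits for the symmetric case sigma_f = sigma_g.
def opa_split(total_power: float, n_relays: int):
    p_source = total_power / 4.0           # each source node
    p_relay = total_power / (2.0 * n_relays)  # each relay node
    return p_source, p_relay

def epa_split(total_power: float, n_relays: int):
    share = total_power / (n_relays + 2.0)  # every node gets the same share
    return share, share

P, N = 1.0, 4
print("OPA (per source, per relay):", opa_split(P, N))
print("EPA (per source, per relay):", epa_split(P, N))
```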
As the expression in Eq. (29) is complicated, it is difficult to derive a closed-form solution for the OPA when σ 2 f ≠ σ 2 g . Here, we use a numerical method, such as nonlinear optimization, to obtain the optimal solution. In Section VI, it is interesting to find that when σ 2 f ≠ σ 2 g , α 1 + α 2 = 0.5 still holds for the simulated scenarios, which means the source nodes still share half the total power.
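A sketch of such a numerical search is given below: the source power fractions are optimised under the total-power constraint with SciPy. The surrogate objective used here is a placeholder assumption standing in for the expression C_T1^{-N} + C_T2^{-N} (which is omitted in the text above); only the constrained-optimisation procedure itself is being illustrated.

```python
# Constrained numerical power allocation with a PLACEHOLDER objective.
import numpy as np
from scipy.optimize import minimize

N = 4
sigma_f2, sigma_g2 = 1.0, 10.0  # asymmetric relay placement (illustrative)

def surrogate_pep(alpha):
    a1, a2 = alpha
    a_r = (1.0 - a1 - a2) / N          # per-relay power fraction
    # Placeholder "effective SNR" terms; NOT the paper's exact C expressions.
    c1 = a1 * a_r * sigma_f2 * sigma_g2 / (a1 * sigma_f2 + a2 * sigma_g2 + 1e-9)
    c2 = a2 * a_r * sigma_f2 * sigma_g2 / (a1 * sigma_f2 + a2 * sigma_g2 + 1e-9)
    return c1 ** (-N) + c2 ** (-N)

res = minimize(
    surrogate_pep,
    x0=np.array([0.25, 0.25]),
    bounds=[(1e-3, 0.5), (1e-3, 0.5)],
    constraints={"type": "ineq", "fun": lambda a: 1.0 - a[0] - a[1] - 1e-3},
)
print(res.x, res.x.sum())  # optimised source fractions and their total
```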
VI. SIMULATIONS
In this section, we provide simulation results for the proposed DDSTC-ANC scheme. Simulations are performed with PSK modulation and a frame size of 100 symbols over a quasi-static Rayleigh fading channels without specific mention. The estimated h 11 (t) and h 22 (t) are used in simulations. For comparison, we also present simulations over a GSM channel model with a symbol sampling period of T s = 3.693µs and a maximum Doppler shift of 75 Hz [10]. This ensures a slowly changing channel and allows the assumption of a constant channel over two consecutive time blocks. Without specific mention, we assume that σ 2 f = σ 2 g = 1 and the source nodes uses half the total power and the relay nodes share the other half, i.e., P 1 = 1 4 P , P 2 = 1 4 P and P R i = 1 2N P . From Fig. 2, we present the simulated BLER performance for the proposed DDSTC-ANC schemes using Alamouti for TWRNs. The performance of the corresponding coherent detection is plotted as well for better comparison. It shows that the differential scheme suffers about 3-dB performance loss compared with the corresponding coherent scheme, which has been validated in Subsection IV-A. Fig. 2 also compares the simulated BLER performance for our proposed DDSTC-ANC and the differential scheme in [10]. It can be observed that our proposed scheme is superior to (about 2-dB) the detector in [10]. The main reason is that the differential detection approach employed in [10] was based on the estimation of the previous symbol. Consequently, when one symbol was decoded incorrectly, it will affect the decoding of the consecutive symbols thus leading to serious error propagation. Comparatively, the information about the estimation of the previous symbol is not required in our proposed differential detection and is, thus, able to prevent the error propagation.
In Fig. 3, we include the Genie-aided results by assuming that each source node can perfectly remove its own information from the received signal. It can be noted from the results that the proposed differential detection scheme introduces negligible performance loss compared to the genie-aided scheme. We also compare the BLER performance of the differential scheme over a GSM channel (a practical channel) and a quasi-static Rayleigh fading channel. From the figure, it can be observed that there is almost no performance loss in a GSM channel compared to the quasi-static Rayleigh fading channel which clearly justifies the robustness of the proposed differential scheme in slow fading channels. It also indicates that the effect of non-constant channel on proposed scheme can be ignored which validate our assumption of quasi-static fading channel model.
In Fig. 4, we show the optimum power allocation of the DDSTC-ANC scheme. It can be seen that more power should be allocated to P 1 when the channels from the relay nodes to T 2 are better than the channels from the relay nodes to T 1 . It is interesting to find that even when σ 2 f ≠ σ 2 g , the sources still share half the total power under the optimal power allocation.
In Fig. 5, we examine the BLER performance of the proposed scheme with power allocation for the system with four relay nodes. The SORC is used at relays and signal is modulated from a BPSK constellation. We also take into account the relay's location as: case 1 (the symmetric case), where relays are placed halfway between the source nodes, i.e., T 1 , T 2 , and σ 2 f = 1 and σ 2 g = 1; and case 2 (the asymmetric case), where relays are close to the source node T 2 , and σ 2 f = 1 and σ 2 g = 10. It can be observed from Fig. 5 that the BLER performance of the proposed scheme with power allocation can provide considerable performance gain in comparison with the equal power allocation (EPA) scheme, i.e., P 1 = P 2 = P R i = P N +2 .
VII. CONCLUSION
In this paper, we have proposed a DDSTC-ANC scheme for TWRNs with multiple relays.
A simple differential signal detector was developed to recover the desired signal at each source by subtracting its contribution from the broadcasted signals. The performance of the proposed DDSTC-ANC scheme was analyzed and the OPA was presented to improve the system performance. Analytical results have been verified through Monte Carlo simulations.
1st phase transmission 2nd phase transmission
|
2013-12-14T20:07:13.000Z
|
2012-09-01T00:00:00.000
|
{
"year": 2012,
"sha1": "3dae2a6ef526552476798fc1f049167253502db9",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1211.2162.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "3dae2a6ef526552476798fc1f049167253502db9",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
}
|
215415654
|
pes2o/s2orc
|
v3-fos-license
|
Electron Collisions with Molecules and Molecular Clusters
State-of-the art computational studies of electron collisions with molecules and small molecular clusters are illustrated with results obtained from the application of the R-matrix method and the UKRMol/UKRMol+ suites. High-level calculations of electronic excitation cross sections and core-excited resonances, mainly of core-excited shape character, show excellent agreement with experiment for mid-size molecules like pyrimidine and thiophene. Simpler calculations are paving the way for an in-depth understanding of the effect of hydration on resonance formation: how the shift in resonance energy depends on the characteristics of the hydrogen bond and the resonance being studied. Finally, applications of the software to a little studied process, interatomic coulombic electron capture are also illustrated.
Introduction
The physics of electron scattering from molecules has been a focus of research in atomic and molecular physics for decades. These scattering processes possess both fundamental and applied interest [1]. Electron scattering experiments allow an insight into, for example, the electronic structure of molecules (e.g. [2]) as well as enabling the investigation of more fundamental phenomena like quantum coherence [3] and others [4]. From an applied perspective, the requirement to quantify and understand electron-molecule collisions stems from a number of fields and media [5,6]: from astrophysics to both natural and man-made plasmas (used, for example, in industry for microchip production) and processes induced by secondary electrons generated by ionizing radiation incident on biological (e.g. in humans subject to radiation-based medical treatment) and inorganic (e.g. detectors used in space missions [7]) matter.
The data requirements in particular have spurred the development of more sophisticated and accurate experimental and computational tools for the study of electron scattering from molecules. The advances in the computational investigation of these collisions have seen, over the last decade or so, the overhaul of many of the software tools employed, as well as the development of new ones. In particular, the interest in biological molecules, which are bigger and more electron rich, as targets has stimulated work to ensure the software is able to make use of current computational capabilities; this has led, for example, to the parallelization of many programs (e.g. ePolyScat [8,9], the Schwinger Multichannel with Pseudopotentials code [10] and the software suite used in this work) and the use of GPUs in some of them. These developments have enabled scientist to study: (i) small targets (e.g. H 2 [11]) with increasing level of detail, providing more accurate data than ever before for a number of scattering processes (vibrational and electronic excitation, elastic scattering, etc.); (ii) bigger targets than ever before (for example, radiosensitizers [12], biomass molecular fragments like lignin [13] and other biomolecules [14]) with a higher level of accuracy; (iii) the effect of the environment by means of the investigation of small molecular clusters. The latter strand of research emerges from the need to understand electron scattering processes beyond the gas phase, for example, in the cell (where low energy electrons are known to play a role in damage produced by ionizing radiation [15]). Clusters allow researchers to bridge the complexity gap between the gas and condense phases [14,16] but are also interesting in themselves, for example in relation to atmospheric processes [17].
It is worth mentioning that the expertise and software used in electron scattering can also be applied, without the need of significant additional methodological or computational developments, to the study of other processes. One example is positrons collisions below the positronium formation threshold; another is interatomic coulombic electron capture (ICEC, see Sect. 3.3 for its definition), a process that takes place in atomic and molecular clusters. Finally, photoionization can be seen as half an electron scattering process: whereas in the latter there is both an incoming and outgoing unbound electron, in the case of photoionization, one needs to model only an outgoing unbound electron (after the deposition of energy by photons in the molecular system). This means that software to study electron scattering can be used for photoionization after a small number of additions. The software can not only be used to determine photoionization cross section and other observables, like the asymmetry parameter, but can generate data (transition dipole moments between a bound, normally the ground, state of the neutral molecule and the electronic continuum) that can be used [18,19] to study strong field processes (see, for example [20]).
In this paper, we describe some of the recent calculations performed using the R-matrix method and its software implementations for electron-molecule/cluster scattering (and positron scattering and photoionization). These examples describe: (i) state-of-the-art, highly accurate calculations of electronic excitation cross sections and core-excited resonances; (ii) simpler calculations aimed at understanding the effect of microhydration on shape resonance formation; (iii) an application to describe ICEC.
The R-matrix method
The R-matrix method and its application to electron scattering from molecules are well established, and a number of publications describe it in detail [21,22]. However, the computational implementation of the approach has changed significantly over the last 5 years with the development of the UKRmol+ suite [18]. Below, we provide a brief summary of the method and refer the reader to earlier publications for more details. We apply the approach within the fixed-nuclei approximation.
The R-matrix method is based on the division of space into inner and outer regions. The boundary between these regions is given by a sphere of radius a centred on the centre of mass of the system. In the inner region the scattering electron is indistinguishable from the target electrons and correlation and exchange effects play a crucial role whereas in the outer region they can be neglected. The radius a must therefore be chosen such that the charge density of the electronic target states of interest is fully contained inside the R-matrix sphere, making the scattering electron distinguishable from the others when it is in the outer region.
In the inner region, a set of basis functions Ψ k determined by diagonalising the N+1 non-relativistic hermitian Hamiltonian describing the system (hermiticity is ensured by addition of the Bloch operator [21]) is used to describe the system. Several levels of approximation are possible for Ψ k and these determine how they are constructed. In Static-Exchange/Static-Exchange plus Polarization (SE/SEP) calculations, only the ground electronic state of the molecule is considered and this is described at Hartree-Fock level. If electronic excitation is of interest and/or core-excited (see below) resonances are being investigated, the Close-Coupling approximation is used: in this case, the multiconfigurational wavefunctions of the ground and some excited states of the target go into building the Ψ k basis functions; these N-electron (target) wavefunction are usually, but not always, built using a Complete Active Space Self Consistent Field approach (CASSCF).
The basis functions Ψ k are used to construct the R-matrix at the boundary between regions. The outer region part of the problem is then solved by propagating this R-matrix [22] to an asymptotic distance, where the K-matrix is determined by matching to known asymptotic expressions. The interaction potential between the scattering electron and the target is approximated in the outer region by a multipolar single-centre expansion that usually includes the dipolar and quadrupolar interactions.
Once the K-matrices are obtained, the cross sections can be calculated from the trivially determined T-matrices. In order to obtain resonance parameters (i.e. their energy and width) several approaches are possible; we use: (i) the diagonalization of the K-matrices to obtain the eigenphase sums that are then fitted to a Breit-Wigner profile; (ii) the calculation of the S-matrices and, from them, the time-delay matrices; fitting of the largest eigenvalues of the time-delay matrix (known as the time-delay) with a Lorentzian function also provides the resonance parameters [23].
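As an illustration of option (ii), the sketch below fits a Lorentzian to a synthetic time-delay peak with SciPy to recover a position and width; the synthetic data, resonance energy and width are arbitrary and serve only to show the fitting step.

```python
# Fit a Lorentzian to a synthetic time-delay peak to extract resonance parameters.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(E, E_r, gamma, amp, bg):
    """Lorentzian of position E_r and full width gamma on a flat background."""
    return amp * (gamma / 2.0) ** 2 / ((E - E_r) ** 2 + (gamma / 2.0) ** 2) + bg

E = np.linspace(5.0, 9.0, 200)                      # scattering energy grid (eV)
rng = np.random.default_rng(2)
data = lorentzian(E, 6.7, 0.3, 40.0, 1.0) + rng.normal(0.0, 0.5, E.size)

popt, _ = curve_fit(lorentzian, E, data, p0=[6.5, 0.5, 30.0, 0.0])
E_res, width = popt[0], popt[1]
print(f"resonance position ~ {E_res:.2f} eV, width ~ {width:.2f} eV")
```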
For the R-matrix calculations presented here we have mainly used the UKRmol+ suite, a re-engineered version of the UKRmol codes [24]. The use of the new suite was particularly necessary for the cluster studies, where, due to the size of the target, an R-matrix radius a = 18 a 0 was required.
To calculate elastic and inelastic differential cross sections we used the program DCS [25].
Results
In this section we present some examples of state-of-the-art and novel electron scattering calculations carried out using the R-matrix method.
Resonances and cross sections for biomolecules
Track structure modelling (of the effects of ionizing radiation in biological media) is used to assess how energy is deposited in the medium at the microscopic level. Cross section data for all processes induced by electron impact on molecules over a broad energy range are required for this purpose [26]. Data for elastic scattering are widely available (both in terms of integral and differential cross sections, although dipolar targets present some difficulties [27]) but electronically inelastic cross sections are harder to calculate and measure for low energy scattering [14].
Temporary anion states or resonances are crucial in low energy electron scattering: all scattering processes (elastic, vibrational and electronic excitation, neutral dissociation, dissociative recombination) can be enhanced by the presence of resonances. In addition, dissociative electron attachment (DEA) normally proceeds via resonance formation. Resonances linked to the electronic degrees of freedom are normally classified as shape (when they involve attachment to the molecule in the ground state) or coreexcited (when the electron transfers some of the energy to the molecule, exciting it electronically, as part of the attachment process); the core-excited resonances can, in turn, be classified as Feshbach (when they are energetically below the target electronic state identified as the parent state) or core-excited shape (when they are above). It is important to note that this classification is not "algorithmic": many molecules display the presence of resonances that are mixed in their character, for example, being partly shape and part core-excited. Resonances can also change their character as the geometry of the molecule changes [28].
In general, shape and core-excited shape resonances have shorter lifetimes (as they can decay to their parent states) and Feshbach resonances are narrower (i.e. have longer lifetimes). Again, this is just a general trend: narrow shape resonances are present in some molecules [28].
Resonances can be investigated experimentally for example, by measuring elastic or inelastic cross sections using electron transmission spectroscopy (ETS) or electron energy loss spectroscopy (EELS). Measuring anion yields due to DEA as a function of electron energy can also provide information about the resonances present in a molecular system. However, for larger molecules the resonance spectrum can be quite complex making it difficult to link specific calculated resonances to peaks in the anion production (unlike smaller molecules, where the assignation is usually more easily done). Velocity slice or map imaging experiments can provide additional information regarding the resonances that lead to DEA, particularly resonance symmetry, that facilitates comparison with theory [29]. Wider resonances are easier to detect if cross sections are being measured, so shape and core-excited shape resonances are more likely to be identified in this way.
From the computational point of view, shape resonances are the easiest to investigate as they only require describing the ground state of the molecule accurately. Conversely, describing core-excited resonance requires the (explicit or implicit) description of excited states of the target and careful modelling of electron correlation effects.
The R-matrix method and the UKRmol and UKRmol+ suites have been used to study core-excited resonances in a number of polyatomic molecules, from triatomics like water [30] to nucleobases like adenine and guanine [31]. However, experimental confirmation of the resonances identified in calculations has been scarce: ETS does not provide much information on core-excited resonances and EELS experiments for scattering energies below the ionization threshold are not abundant. Despite the fact that many DEA experiments have reported ion yields that are almost certainly linked to core-excited resonances, little is known about them.
Two examples of the predictive power of R-matrix calculations are given by the core-excited resonances in pyrimidine [32] and thiophene [33]. For these targets EEL spectra have confirmed, by measuring excitation functions and cross sections for electronic excitation to bands (for the former target) or specific states (for the latter), the presence of many of the resonances determined theoretically. Table 1 summarizes the resonances identified for pyrimidine both in calculations and measurements: we can see that the calculated results appear higher in energy and that the difference between measured and calculated positions increases with energy. This is a well known effect that is linked to an incomplete description of polarization effects in the R-matrix calculations [21,32]. Figure 1 shows the cross sections for excitation into the second ( 3 A 1 ) excited state of thiophene for specific scattering angles (in other words, the excitation functions) together with those measured for 90° and 135°. Details of the calculation and experiment can be found elsewhere [33].
Table 1. Core-excited resonances of pyrimidine identified in R-matrix calculations and EEL spectra (see details in Regeta et al. [32]); note that the first resonance listed is actually of mixed shape and core-excited character. The widths of the calculated resonances are also provided.
We can see that the size of the calculated cross section increases as the scattering angle goes from 45 • to 90 • but then decreases slightly for 135 • only to reach its maximum size for 160 • . Two resonances are clearly visible in all cross sections: a narrow one at around 6.7 eV and a wider one at around 9.2 eV. The variation with angle is smaller at lower energies (the size of the first resonant peak increases by around 50% between 45 • and 160 • ) and bigger at higher energies; the size of the second resonant peak increases by a factor of 3 between 45 • and 160 • .
We also observe, for the two angles for which there is experimental data, that the size of both cross sections is almost identical, although the peaks are bigger in the calculated results. In addition, the energy dependence of the experimental results is well reproduced by the calculations: the two resonances discussed above are also visible in the experimental cross sections. The whole calculated cross sections look shifted to higher energies: this is due to the incomplete polarization description that leads to the shifting of the resonances, as described above.
Effect of microhydration
The study of small molecular clusters comprising one or several biomolecules and one or several water molecules is being pursued in order to bridge the gap in our understanding between gas phase and the processes that occur in the biological environment [14].
Using the R-matrix method and the UKRmol+ suite, extensive studies of pyridine-(H 2 O) n and thymine-(H 2 O) n with n=1, 2, 3, 5 were performed [34,35] at the SE level in order to understand the effect of microhydration on the two lowest π * shape resonances present in both ring molecules (see, for example, [36,37]). Earlier studies [38] using the Schwinger multichannel method (SMC) for the π * shape resonance of formic acid in clusters with 1 and 2 water molecules showed that: (i) the effect of water on the resonance position depended on whether H 2 O was the hydrogen donor/acceptor in the hydrogen bond, with the former leading to a lowering of the resonance energy and the latter to an increase; (ii) the effect was qualitatively additive.
Our aim was to determine whether these conclusions held for bigger molecules and whether the effect was quantitatively similar for different shape resonances of a given target. We interpreted our results by decomposing the of microhydration effect into indirect and direct effects. The indirect effect is due to the changes in the geometry of the hydrated molecule resulting from the formation of one or several hydrogen bonds. In general, these geometry changes are small, so this effect is usually smaller than the direct effect and can be quantified by calculating the resonance positions for the isolated molecule in two different geometries: the equilibrium one for the isolated molecule and the one the molecule has in the cluster equilibrium geometry (but without including the water molecules). Therefore, the calculated effect is dependent on the equilibrium geometry used in the calculations for the isolated molecule and the cluster.
The direct effect is due to the presence of the water molecules. This effect is quantified by performing calculations for the isolated molecule in the cluster geometry and for the cluster. The sum of both effects gives the resonance shift due to microhydration. Table 2 shows the values of the shifts due to the indirect, direct and total effects for the pyridine-H 2 O cluster. In this cluster, the water molecule hydrogen bonds to the nitrogen atom in pyridine; it is therefore the hydrogen donor and we expect both resonances to be shifted to lower energies.
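The bookkeeping behind this decomposition can be written out explicitly, as in the short sketch below; the three resonance positions used are hypothetical stand-ins, not the values of Table 2.

```python
# Decomposition of the microhydration shift into indirect, direct and total parts.
# The numbers below are hypothetical illustrations (eV).
E_isolated_eq = 1.20       # isolated molecule, its own equilibrium geometry
E_isolated_cluster = 1.24  # isolated molecule, geometry taken from the cluster
E_cluster = 1.08           # full cluster calculation

indirect = E_isolated_cluster - E_isolated_eq   # geometry change only
direct = E_cluster - E_isolated_cluster         # presence of the water molecule
total = E_cluster - E_isolated_eq               # = indirect + direct
print(indirect, direct, total)
```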
We can see that the indirect effect destabilizes both resonances (the shift is positive). This can be linked to the fact that many of the bond-lengths of pyridine are shortened in the cluster geometry [39]: the repulsive effects felt by the attached electron will be slightly stronger and this means the resonance energy will be slightly higher. The direct effect is stabilizing for both resonances and 3 to 4 times bigger than the indirect one.
Table 2. Energy shifts, in eV, for the first and second (π*) resonance in pyridine upon hydrogen bonding with a single water molecule calculated at SE level using the cc-pVDZ basis set (further details in [34]). The direct and indirect contributions and the total effect are listed. Negative values correspond to the resonance moving to lower energies in the cluster. See text for details.
Fig. 2.
Lowest energy unoccupied π * orbital in pyridine-H2O. The orbital was determined with MOLPRO [40] in a SCF Hartree-Fock calculation using the cc-pVDZ basis set and a geometry optimized as described in [34].
One can see, looking at the orbitals occupied by the scattering electron in the two resonances being discussed (Figs. 2 and 3 respectively; the orbitals in the isolated molecule are practically identical) that the one involved in the first π * resonance describes an electronic density that is somewhat closer to the water molecule: one of the lobes of the orbital, centred on the nitrogen atom, points in its direction. This is consistent with the fact that the direct shift for this resonance is almost 20% bigger than for the second resonance; the orbital involved in the latter has no density on the nitrogen atom. The shape of the orbitals can also be linked to the relative size of the indirect shift: the 1π * orbital has density along two of the ring bonds, thus being more sensitive to changes in their length.
The shift due to the total effect is very similar for both resonances (although this is not always the case [39]) and leads to the stabilization of both of them, as expected given that H 2 O acts as the hydrogen donor. The shift for each resonance is also very similar to the energy change in the orbital involved in the resonance when going from the isolated molecule (in its equilibrium geometry) to the cluster.
We also confirmed the findings of Freitas et al., that there is a rough (qualitative) additivity of the effect as the number of water molecules in cluster increases [35]. In addition, we determined that there is a weak, but nonzero, dependence of the resonance shift on the binding site and that the stabilization/destabilization effect can be different for different resonances in the system. Although both our [34] and the earlier SMC calculations performed both at SE and SEP level showed that similar conclusions were reached for both types of calculations, inclusion of the polarization effects in a consistent way should provide a more accurate picture and improve our understanding of microhydration effects.
Interatomic coulombic electron capture
Interatomic coulombic electron capture is an electron induced process that has been predicted both theoretically [41,42] and computationally [43] but not yet measured experimentally. In this process, an electron interacts with a heterogeneous (atomic or molecular) cluster in which one of the monomers is positively charged. The electron attaches to the cation, releasing some energy. This energy on its own is not sufficient for a different monomer (an atom or molecule with a higher ionization potential) in the cluster to ionise. However, if the sum of the kinetic energy of the electron and the attachment energy is larger than the ionization potential of the second monomer, ionization can take place. Taking as example a neon cation immersed in a helium cluster: Ne⁺@Heₙ + e⁻ → Ne@Heₙ⁺ + e⁻. The energy of the ejected electron will be given by the difference between the sum of the initial kinetic energy of the electron and the energy released on attachment to the first monomer (Ne), and the ionization potential of the second monomer (He in this case). The process can therefore be seen as an inelastic scattering where the deposited energy causes the electron hole to "move" from one atom to another (although, more accurately, the process corresponds to attachment of one electron and emission of another). Experimentally, one could observe ICEC by measuring the "energy loss" of the free electron. Like microhydration, ICEC is a process in which the effect of the environment plays a role: however, whereas microhydration can enhance or quench [44] a process that occurs in the isolated monomer (e.g. DEA), in ICEC the environment is essential for it to take place.
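Using tabulated ionization potentials for Ne and He, the energy bookkeeping for this example can be sketched as follows; cluster-induced level shifts are neglected, so the numbers are indicative only.

```python
# Energy bookkeeping for the Ne+/He ICEC example (isolated-atom values).
IP_NE = 21.56  # eV, energy released when the electron is captured by Ne+
IP_HE = 24.59  # eV, energy needed to ionize He

def icec_ejected_energy(kinetic_energy_ev: float) -> float:
    """Kinetic energy of the electron emitted from He, if the channel is open."""
    e_out = kinetic_energy_ev + IP_NE - IP_HE
    return e_out if e_out > 0 else float("nan")  # negative => channel closed

for e_in in (5.0, 10.0):
    print(e_in, "eV ->", round(icec_ejected_energy(e_in), 2), "eV")
```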
The R-matrix method was used to determine the cross section for electron scattering from a Ne + He cluster. Close-coupling calculations based on a Hartree-Fock description of the ground state of Ne + He (in fact, three degenerate states corresponding to Ne + (1s 2 2s 2 2p 5 ) + He(1s 2 )) and that of NeHe + (Ne + (1s 2 2s 2 2p 6 ) + He(1s 1 )) were performed for a range of Ne-He distances R between 3 and 10 a 0 (the equilibrium interatomic distance of Ne + He is around 4 a 0 ). The sum of the excitation cross sections between the degenerate states and the excited state give the ICEC cross section for NeHe + . However, experiments are more likely to involve bigger clusters and, for this reason, the ICEC cross section for Ne + @He 20 was calculated in the following way (see [43] for further details): vibrational wavefunctions for the cluster were determined from a variational quantum Monte Carlo calculation and 2000 geometries which sample the square of the wave function used in the computation of the ICEC cross section as the sum over all NeHe + pairs (i.e. with different R values).
Even this simple calculation, where correlation effects are almost certainly underrepresented, yielded cross sections a few orders of magnitude higher than those for radiative recombination. Preliminary tests using more sophisticated wavefunctions for the states of the cluster indicate that the ICEC cross sections are likely to be even bigger. Calculated "energy loss" spectra for 5 and 10 eV scattering energy provide guidance for potential experiments.
Conclusions
Computational work on electron scattering from molecules and molecular clusters is providing both quantitative data and detailed insight into a number of collisional processes. For small molecular targets (few atoms and few electrons) results of a quality similar to that of atoms are being achieved. For medium-size targets, the full description of electronic correlation remains a challenge but quantitative agreement with experiment for the harder to determine excitation cross sections and core-excited resonances can be achieved.
Many challenges, nonetheless, still remain: modelling a number of processes of significant interest, like DEA and neutral dissociation (dissociative excitation) requires the inclusion of the nuclear degrees of freedom. Although progress has been made in this respect, polyatomic molecules remain mostly beyond current capabilities, unless simplifications that reduce the number of nuclear degrees of freedom are applicable.
Much remains to be understood of the effect of the environment on collisional processes. For example, more sophisticated calculations are needed to fully model the effect of microsolvation on resonance formation. In addition, effects like the transfer of kinetic energy into the vibrational modes of the cluster [45] need to be taken into account.
Finally, continued methodological and software developments will further improve our ability to describe and quantify electron scattering from molecules and clusters, contributing both to improving our understanding of fundamental molecular physics and to the description and modelling of other physical phenomena and electron interactions in condensed media.
|
2020-03-26T10:07:29.022Z
|
2020-03-01T00:00:00.000
|
{
"year": 2020,
"sha1": "ecd5c0ec98ed86cc72c9c9dfe80d0cdc575f0108",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1140/epjd/e2020-100550-7.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "7b31f329c170dd660cb048798bf62d281dfc704f",
"s2fieldsofstudy": [
"Chemistry",
"Physics"
],
"extfieldsofstudy": [
"Materials Science"
]
}
|
55485554
|
pes2o/s2orc
|
v3-fos-license
|
Macroeconomic Volatility and Macroeconomic Indicators among Sub-Saharan African Economies
This study explored how disaggregated macroeconomic volatility parameters impact key macroeconomic indicators in Sub-Saharan Africa. The study employed a number of external and regional macroeconomic volatility parameters derived from macroeconomic data sourced from the IMF in its empirical analysis. Dynamic Panel fixed effect model employed show that regional macroeconomic volatility parameters tend to have more statistically significant impact (positive and negative) on performance indicators in the sub-region than external macroeconomic volatility parameters. This study also finds that among regional macroeconomic volatility parameters shaping growth conditions in the sub-region, investment growth volatility is the dominant condition with statistically significant impact on key macroeconomic indicators in the sub-region. Results further point to evidence of significant moderating effects in how external and regional macroeconomic volatility parameters impact regional macroeconomic indicators.
Introduction
Growing interaction among economies with varied domestic macroeconomic structures, a feature of evolving trends in international commerce, continues to be a key macroeconomic trend responsible for significant growth in most developed and developing economies. Although Smithian and Ricardian theories, as well as modern adaptive theories of international trade, differ somewhat on how international trade and economic interactions impact participating economies, the general consensus suggests that most participating economies benefit from such interactions. These benefits, according to the literature, stem from access to broader markets and from the exchanges that arise from differences in resource endowments and technological know-how. Expansion in cross-national trade necessitated by such fundamental disparities has also been found to be crucial in bridging major economic gaps among participating economies with varied domestic macroeconomic structures. These benefits notwithstanding, present trends in international commerce suggest that the drive to major on country-specific comparative advantages, a central tenet of international commerce, has led to, and continues to foster, economic interdependence, especially among developing economies. This drive to gain access to external markets in order to support regional export-oriented policies has also inadvertently exposed most less developed economies to macroeconomic volatilities inherent in the global marketplace (mostly dominated by advanced and emerging economies). Related literature, for instance, suggests that less developed economies (Note 1), which have been mostly insulated from extreme swings in global commerce due to limited interactions and exposure, are now becoming increasingly susceptible to occasional shocks associated with international commerce as a result of growing interactions. Understanding the effects of this growing interaction and exposure to global markets on developing economies is thus crucial, in that the condition defines how key performance indicators in the sub-region ultimately influence economic growth and living standards. For instance, Addison et al. (2007) showed that volatilities inherent in global markets have significant impact on both regional and country-specific macroeconomic indicators of participating economies, although some economies are better placed to absorb such macroeconomic perturbations than others. Empirical studies nevertheless suggest that the effects of occasional volatility and shocks associated with international commerce might be more severe on less developed economies than on their developed counterparts; Kraay and Ventura (2007). Referencing this conclusion, some analysts have argued that SSA economies are more prone to external volatilities resulting from increasing involvement in global commerce because of relatively weak regional economic structures and the constrained economic policies needed to manage the condition. Analysis of historical trade dynamics in Sub-Saharan Africa shows that until recent decades the sub-region, compared to other economic blocks around the world, had minimal interactions and limited access to global markets due to trade barriers and socio-political constraints. Recent trends, however, suggest most economies in the sub-region are becoming more and more integrated into the global market economy through expansion in exports, foreign direct investments, networked financial systems, etc.
Apart from these traditional means of forging and expanding economic interaction, available evidence further suggests that recent growth in economic interaction for most economies in the region has resulted from mutually beneficial socio-economic and political factors. For instance, trade agreements aimed at promoting the export base of economies in the sub-region to support poverty alleviation programs, and access to internet-based financial network systems that have made it possible to integrate financial and banking operations into the global financial system, are but a few of these emerging factors. These evolving conditions, coupled with other macroeconomic drivers in the sub-region, continue to expose the sub-region to the potentially risky macroeconomic conditions associated with global commerce.
The view that most participating economies benefit from international commerce is not in dispute in this study; the study only seeks to elicit discussion on whether such benefits outweigh the constant risk of exposure to shocks with the potential to stall growth or bring about total economic collapse for mostly ill-prepared developing economies. Studies show that external macroeconomic volatility has the potential to exert significant positive or negative influence on the performance indicators of participating economies. Further evidence suggests that, depending on the nature and trigger of an economic volatility episode or shock, the condition could enhance critical macroeconomic indicators of participating economies, bringing about badly needed growth, or suppress them, leading to constrained economic performance. Aghiona et al. (2004), for instance, showed that financial openness (a feature of growing market exposure and interaction) has the potential to destabilize the domestic economies of less developed countries by inducing massive swings in capital inflows and outflows during economic booms and downturns. Easterly et al. (2001) additionally showed that increased trade openness (another key feature of growing economic interaction) has a significant impact on output volatility among developing economies. Buch et al. (2005) further determined that the link between financial openness (an element of exposure) and business cycle volatility among economies depends on the nature of the underlying shocks economies are exposed to.
Additionally, Bekaert, Harvey and Lundblad (2001), who examined the effects of financial market liberalization on economic growth, submitted that the condition has a significant positive impact on per capita GDP growth. Focusing on the link between foreign direct investment (FDI) inflows and per capita income among selected SSA economies, Fotso (2003) further concluded that FDI-related technology transfers (due to economic interaction) have positive effects on growth conditions among SSA economies. Delechat et al. (2009) also found that net capital flows resulting from economic exchanges correlate positively with growth rates in 44 SSA economies. Edison, Levine, Ricci and Sløk (2002), however, found no significant relationship between financial/economic integration and economic performance. This succinct empirical review highlights, to some extent, prevailing views on how financial and trade openness or exposure influences performance indicators among participating economies.
This study evaluates the effects of macroeconomic volatility on macroeconomic indicators in SSA using a disaggregated volatility approach; this approach derives a number of macroeconomic volatility parameters, both external and regional, and tests how each parameter affects key macroeconomic indicators in SSA. Following Delechat et al. (2009), the study subscribes to the view that expansion in international trade and the integration of economies in the sub-region into the global economy have been crucial for growth in the sub-region. Available data further support the view that the sub-region owes much of its recent economic success to sustained growth in exports and FDI inflows, a direct product of growing interaction and access to global capital flows. However, this study takes the view that, unlike most advanced economies, which are well equipped with robust macroeconomic structures and strong policy expertise to cope with shocks associated with global commerce, the SSA economy might not be well equipped to deal with economic threats from growing access and exposure to global commerce. The study consequently estimates relationships between disaggregated fundamental volatility parameters and key performance indicators for SSA, with emphasis on verifying how selected macroeconomic indicators in the sub-region respond to, or are shaped by, regional and external macroeconomic shocks or volatility.
External macroeconomic volatility in this study refers to the variability associated with macroeconomic indicators among advanced economies around the world, as defined by the IMF. This definition presumes that macroeconomic activity among advanced economies, such as the USA and most economies in the European Union, to a greater extent drives much of global commerce and, as such, the volatilities involved. The study hypothesizes that the macroeconomic volatility and shocks inherent in international commerce dominated by advanced economies could be responsible for depressed growth among exposed less developed economies such as those in SSA. Consequently, the study is modeled to estimate how specific volatility parameters, both external and domestic, influence performance indicators in the sub-region. Currently, two views dominate the ongoing debate about the relationship between external macroeconomic volatility and performance indicators among economies in the sub-region. The first view surmises that, despite growing integration and purported exposure, financial market operations and economic structures in the sub-region are still too underdeveloped and relatively detached from the global financial system to be significantly impacted by external macroeconomic shocks. Proponents of this view argue, for instance, that most economies in the sub-region still operate at the subsistence level, with most resources obtained domestically; consequently, external macroeconomic shocks may have little or no significant impact on key regional performance indicators. Advocates further argue that, compared to most advanced economies, consumption patterns in the sub-region are less credit dependent, a condition which makes economies in the sub-region less susceptible to credit and financial market shocks. Thus, to proponents of this view, external macroeconomic volatility or shocks may have relatively minor impact, if any, on key macroeconomic indicators among economies in the sub-region. Opponents, however, point out that growing exposure to external markets, coupled with weak regional macroeconomic structures and policies capable of absorbing such shocks, constitutes a significant threat to long-term growth conditions in the sub-region.
This study projects a significant relationship between both external and regional macroeconomic volatility parameters and key macroeconomic indicators from the sub-region; it is further anticipated that the various macroeconomic indicators from the sub-region will respond differently to the different volatility parameters in this study. To verify these projections, the study provides an empirical examination of the nexuses between the external and regional macroeconomic volatility parameters generated and selected macroeconomic indicators for SSA. The empirical approach adopted has been motivated in part by the lack of empirical studies focusing on how disaggregated macroeconomic volatility parameters influence macroeconomic indicators in SSA. The study is also part of an evolving literature exploring associations between distortions in macroeconomic conditions and growth performance among less developed economies. Macroeconomic performance in the sub-region is modeled using five economic indicators: Gross Domestic Product (GDP) growth, inflation rate, investment growth, gross national savings and the current account balance condition. External and regional/domestic macroeconomic volatility parameters, on the other hand, are estimated from selected macroeconomic indicators associated with advanced economies, as defined by the IMF, and with the sub-region, respectively.
The rest of the study is structured as follows. Section two discusses macroeconomic dynamics in Sub-Saharan Africa, with specific emphasis on the selected performance indicators employed in this study. Section three presents a succinct account of the empirical literature on the general relationship between fundamental macroeconomic volatility and macroeconomic indicators. Section four estimates external macroeconomic volatility as defined in this study and describes the sources and type of data used. Section five derives and states the empirical model, as well as the auxiliary test procedures employed in the study. Analysis of the test results, possible policy implications of the study's findings and conclusions are presented in the final section.
Macroeconomic Conditions among SSA Economies and External Volatility
The extent to which external macroeconomic volatility parameters impact performance indicators among economies in the sub-region is projected to depend on two key conditions. Following Watts and Bohle (1993) and Moser (1998) on the concept of vulnerability, this study projects that the extent of vulnerability to macroeconomic shocks among economies in the sub-region depends on two main conditions: the degree of exposure and the relative inbuilt capacity to cope with or minimize the condition. According to Watts and Bohle (1993) and Moser (1998), the degree of exposure to an external threat or shock, and the relative capacity to cope, or the resilience of an entity to external threats or shocks, are the dominant factors determining vulnerability, that is, how variables respond to or are impacted by an external condition. In other words, this approach suggests that the performance of macroeconomic indicators in SSA in the face of external shocks depends on how exposed the region is to the specific shock and on the inbuilt regional capacity to cope with the shock or threat. Following this reasoning, the study again projects that the modeled macroeconomic volatility parameters will have a significant impact on key macroeconomic indicators from the sub-region because of growing exposure to volatilities in the global marketplace. This threat from external macroeconomic volatility due to increasing exposure is further expected to be aggravated by weak regional economic policies and structures crucial for coping with the potential effects of volatility or shocks. The following section discusses historical trends in the key macroeconomic indicators employed in this study.
GDP Growth
Regional GDP growth data indicate that growth conditions among economies in the sub-region differ significantly. Specific domestic factors, such as differences in natural resource endowment, continue to influence regional conditions, leading to disparities in economic growth. Oil-producing economies in the region, for instance, tend to experience relatively higher GDP growth than economies lacking the resource. Aggregate IMF GDP growth data for the sub-region as a whole further document significantly uneven growth conditions over the past decades. The data show, for instance, that the sub-region has witnessed significant fluctuations in GDP growth over the past two decades. Much of the fluctuation, according to analysts, has been driven by persistent variability in exports from the region and in foreign direct investment inflows, as well as by regional socio-political conditions. IMF data on the regional economic outlook show that, with the exception of the mid-1980s and the early 1990s, when GDP growth in the sub-region trended weakly, the GDP growth trajectory for the region after the year 2000 was positive and relatively high until the onset of the 2008 global economic slowdown. The data show that GDP growth in the sub-region declined from an average of over 6% per year prior to the recession to a little over 3% in the periods just afterwards. From the growth rate of 2.82% recorded in 2009 after the economic shock, the data now report an average GDP growth rate of over 5% for the sub-region as a whole, a condition which suggests the sub-region recovered relatively faster after the global recession than the USA (see Figure 1).
Inflation Rate
Compared to other developing and advanced economies, inflationary conditions in SSA tend to be relatively high, with a significant negative impact on the regional financial system and macroeconomic conditions. IMF regional economic outlook data indicate that, on average, the inflation rate for the sub-region hovered around 10% between 1980 and 1988. This relatively high inflation rate rose significantly between 1991 and 1996, with the highest rate over the period reaching over 40% on average for the sub-region. Inflationary conditions in the sub-region after 2000, however, have been relatively low by regional standards, with the exception of the period leading up to the 2008 global financial crisis. Recessionary pressures due to the 2008 financial crisis led to a minor increase in the inflation rate over the period, as evidenced by the rise in the trend around the recessionary period in Figure 2. This condition, coupled with the fact that the highest inflationary episode over the period under study also coincided with the 1990-1991 global recession, one of the worst on record, suggests that to some degree, inflationary trends in the sub-region are influenced by external macroeconomic distortions. Figure 2 illustrates inflationary conditions over a 30-year period for the region and shows the effects of recessionary pressures on the inflation rate.
Investment Growth (% of GDP)
Investment growth conditions in the sub-region over the past two decades have been relatively strong; available data indicate that investment over the period under study accounts for over 15% of regional GDP. According to the IMF regional economic outlook, investment as a percentage of GDP averaged over 20% between 1980 and 1990. The trend, however, declined slightly in the early 1990s and has since ranged between 16% and 21% of GDP. Compared to other macroeconomic indicators used in this study, investment as a percentage of GDP for the sub-region has been fairly stable over the years by regional standards. Although the 2008 economic recession had a significant negative impact on individual economies in the sub-region, aggregate data indicate that the region as a whole showed little sign of the condition. Figure 3 illustrates sub-regional investment growth as a percentage of GDP between 1980 and 2011.
Gross Regional Savings (% of GDP)
According to historical regional economic outlook data, gross regional savings as a percentage of GDP experienced a significant decline in the early 1980s. This decline left regional savings sharply below the 20% threshold; the trend since then averaged between 14% and 18% until 2005. The early part of 2005, however, witnessed significant growth in gross national savings, with the average rate rising well above 20% for the first time since 1980. A key feature of the gross regional savings rate over the period under study is its relatively even trend. Trend analysis based on sub-regional macroeconomic data further shows that post-2008-recession regional savings performed better, on average, than in the periods prior to the recession. Figure 4 illustrates sub-regional savings as a percentage of GDP.
Current Account Balance
Among the sub-regional macroeconomic indicators explored in this study, the regional current account balance as a percentage of GDP, like the GDP growth trend, also exhibits significant volatility. Regional data show that the current account balance as a percentage of GDP over the past two decades has fluctuated significantly between extremes of -6% and 4.2%. Further trend analysis indicates that the early 1980s witnessed the worst episode in the sub-region's current account balance condition. Between 1980 and 2005, the current account as a percentage of GDP hardly recorded positive values. The best period in the current account condition over the period under study occurred just before the 2008 recession, when it reached a peak of 4.2%. As expected, this condition was short-lived because of recessionary pressures at the time. Figure 5 illustrates regional current account balance trends as presented by IMF regional economic outlook data.
Export Growth
Regional time series data on export growth show extensive growth between 1993 and 1996. This positive growth trend, however, fluctuated significantly until a major decline just after the 2008 economic downturn. Since that decline, exports from the sub-region have experienced significant growth to date; it is projected that the current trend could be sustained as foreign direct investment into the region grows, with a substantial portion of these investments augmenting the domestic export base. The export growth trend over the period under study further supports, to some extent, the observation that global macroeconomic conditions such as the 2008 recession tend to have a significant impact on key macroeconomic indicators in the sub-region. Figure 6 illustrates export growth dynamics for the region between 1990 and 2011 as documented by IMF regional economic outlook data (Note 2). Figure 6. Sub-Saharan export growth conditions (1990-2011), with projections to 2016. Data source: IMF data.
Overview of Empirical Literature: Macroeconomic Volatility and Macroeconomic Indicators
The fundamental view that volatility exerts significant influence on macroeconomic indicators in both developed and developing economies is strongly supported by the existing literature on the relationship. The empirical studies reviewed so far largely support the view that, all things being equal, macroeconomic volatility has a significant negative impact on macroeconomic indicators. Studies focusing on the relationship between fundamental volatility and economic performance, such as Bernanke (1983), Pindyck (1991) and Ramey and Ramey (1991), have all arrived at similar conclusions, providing evidence in support of a negative relationship between volatility and economic growth. Additionally, Henry and Olekalns (2002) found a negative relationship between economic volatility and real GDP growth for the U.S. economy. Again, using panel data for 59 industrialized and developing economies, Asteriou and Price (2005) showed that output volatility due to uncertainty reduces both investment and economic growth, further supporting a negative relationship between macroeconomic volatility/uncertainty and economic growth. Furthermore, employing a sample of 128 countries, Badinger (2010) also found evidence of a negative effect of volatility on economic growth. Giovanni and Levchenko (2006) additionally documented that countries whose economies are more open to trade tend to experience more volatility and are more susceptible to inimical external macroeconomic conditions with the potential to negatively impact domestic economic indicators.
Aizenman and Marion (1999) also found evidence in support of a negative relationship between volatility and economic performance indicators among developing countries; for instance, the study showed that volatility negatively affects private investment growth in developing economies. Kharroubi (2007) additionally provided empirical evidence in support of an inverse relationship between economic growth and volatility; Kharroubi further surmised that the negative relationship between growth and volatility observed in developing countries could be traced to shortcomings or weaknesses in the domestic financial system. Using a sample of 79 developed and developing economies, Hnatkovska and Loayza (2005) studied the growth-volatility relationship over the period 1960-2000 and found volatility to be inimical to economic growth. However, contrary to the conclusions of most studies reviewed, Kose, Prasad and Terrones (2005) found a positive relationship between growth and volatility among industrialized economies; the relationship among developing economies in the same study was, however, negative. If these findings on the relationship between volatility and economic growth, especially among developing economies, are an indication of a general trend, then, all things being equal, the findings of the current study might mimic this trend despite the use of disaggregated volatility parameters.
Estimating Macroeconomic Volatility
External macroeconomic volatility in this study refers to the volatilities inherent in specific macroeconomic indicators associated with advanced economies around the world, as classified by the IMF. The study employs aggregate data on real GDP growth, investment growth and output gap conditions for advanced economies in estimating the external macroeconomic volatility parameters. In all, GDP growth, investment growth and output gap data for 34 advanced economies are used in estimating the study's external volatility parameters. External volatility is measured as the standard deviation of the stated macroeconomic indicators. Regional macroeconomic volatility parameters for SSA are derived using a similar procedure.
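As a concrete illustration of this standard-deviation measure, the sketch below computes rolling volatility parameters from annual series of the kind described above; the file name, column names and the five-year window are illustrative assumptions rather than details taken from the study.

```python
# Sketch: volatility parameters as standard deviations of annual indicators.
# The file name, column names and 5-year rolling window are illustrative.
import pandas as pd

# Hypothetical wide file: one row per year, one column per aggregate
# indicator for the advanced economies (IMF-style annual data).
adv = pd.read_csv("advanced_economies.csv", index_col="year")

def rolling_volatility(series: pd.Series, window: int = 5) -> pd.Series:
    """Standard deviation of an indicator over a trailing window of years."""
    return series.rolling(window, min_periods=window).std()

external_vol = pd.DataFrame({
    "E_GDPgv":   rolling_volatility(adv["gdp_growth"]),
    "E_Investv": rolling_volatility(adv["investment_growth"]),
    "E_Outputv": rolling_volatility(adv["output_gap"]),
})

print(external_vol.dropna().head())
```

The regional (domestic) parameters could be built the same way, applying the function country by country to the SSA series, which is consistent with the study's statement that the regional parameters are derived by a similar procedure.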
Data and Variables
The empirical analysis verifying the effects of external and regional macroeconomic volatility on selected sub-regional macroeconomic indicators, such as investment and GDP growth, is estimated using a panel of 39 sub-Saharan African economies; the data sets span the period 1980 to 2011. The key macroeconomic variables from the sub-region employed in this study are GDP growth, investment growth, inflation rate, the current account balance condition and gross regional savings. External macroeconomic volatility parameters are estimated from the variables already stated. All data sets are sourced from the IMF regional economic outlook database.
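To show how such a country-year panel might be organized before estimation, the following sketch merges hypothetical regional indicator data with the volatility series; the file names and column names are assumptions introduced only for exposition.

```python
# Sketch: assembling a country-year panel of SSA indicators (1980-2011)
# and attaching the volatility parameters; all names are illustrative.
import pandas as pd

# Hypothetical long file: one row per country-year with the five indicators.
ssa = pd.read_csv("ssa_indicators.csv")   # country, year, gdp_growth, invest_growth, inflation, savings, cab
ssa = ssa[(ssa["year"] >= 1980) & (ssa["year"] <= 2011)]

external_vol = pd.read_csv("external_volatility.csv")   # year, E_GDPgv, E_Investv, E_Outputv
regional_vol = pd.read_csv("regional_volatility.csv")   # country, year, D_gdpv, D_investv, D_inflv, D_cablv

panel = (ssa
         .merge(external_vol, on="year", how="left")
         .merge(regional_vol, on=["country", "year"], how="left")
         .dropna())

print(panel["country"].nunique(), "countries,", panel["year"].nunique(), "years")
```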
Econometric Specification
This study adopts an empirical estimation approach which relies heavily on the empirical methodology used extensively in the macroeconomic volatility-growth nexus literature. The study, however, examines specific dynamic relationships at the micro level using disaggregated regional and external volatility parameters via panel fixed effects regression, instead of the single volatility variable often found in the literature. The study projects that, holding all else constant (i.e., all growth-augmenting conditions, technology, socio-political conditions, etc.), the growth conditions associated with a macroeconomic indicator y_t, in a sub-region made up of varied economies, can be modeled as a function of the degree to which the variable copes with regional and external macroeconomic volatilities. In other words, growth conditions characterizing key macroeconomic variables in SSA are projected to depend on how the variables fare in the face of volatile regional and external macroeconomic conditions. In this line of argument, growth conditions associated with key macroeconomic indicators in SSA are said to be defined by occasional macroeconomic volatilities, both regional and external. To this end, the study models the performance of key macroeconomic indicators among economies in the sub-region as a function of regional and external macroeconomic volatility as follows:
y_t = f(domσ², Extσ²)    (1)
where y_t estimates the overall growth performance associated with a specific sub-regional macroeconomic indicator, domσ² captures the portion of overall macroeconomic volatility experienced from the regional (domestic) economy, and Extσ² estimates external volatility in the global marketplace. Equation 1 suggests that, all things being equal, the performance of key macroeconomic indicators in the sub-region depends on how the sub-region manages regional and external macroeconomic volatilities or shocks. In other words, this estimation process holds constant other known factors influencing key economic indicators in the sub-region, in order to assess how macroeconomic volatilities or shocks influence regional economic indicators. The following section determines the appropriate empirical approach to adopt in verifying the relationship between the disaggregated volatility parameters and the selected macroeconomic indicators from SSA modeled in equation 1.
A Hausman test determining the appropriate model for this study, given the type of data employed, supports a fixed effects approach; consequently, a fixed effects model estimating the effects of domestic/regional and external macroeconomic volatility on key macroeconomic indicators in SSA is formulated. The fixed effects method used in this study is specified to correct for parameter endogeneity, which could skew test results. The fixed effects model estimating the effects of the macroeconomic volatility parameters on key economic indicators is specified as follows (Note 3):
The Fixed Effect Model
y_it = δ1(E-Outputv)_t + δ2(E-Investv)_t + δ3(E-GDPgv)_t + δ4(D-gdpv)_it + δ5(D-investv)_it + δ6(D-inflv)_it + ... + q_i + q_t + e_it    (2)
where
- y_it captures the dependent variable (macroeconomic indicator) of sub-regional economy i at time t;
- E-Outputv, E-Investv, E-GDPgv, D-gdpv, D-investv, D-inflv, D-cablv, etc., capture the independent variables (E = external and D = domestic/regional volatility, measured as the standard deviations of the selected variables);
- δ1, ..., δ6 are the coefficients of the independent variables tested;
- q_i controls for unobserved country heterogeneity;
- q_t captures time (year) fixed effects;
- e_it is the error term.
Equation 2 models the effects of domestic/regional and external macroeconomic volatility on selected regional macroeconomic indicators from SSA using data from 1980 to 2011. As defined earlier, q_i and q_t in equation 2 define vectors of country and time fixed effects, and e_it the error term. Country fixed effects control for unobserved country-specific heterogeneity, while time fixed effects control for variation across time periods. Tables 1, 2 and 3 report fixed effects estimates (coefficients and standard errors) of the relationships between the macroeconomic volatility parameters and the selected regional macroeconomic indicators. Table 1 estimates the highly plausible scenario where both regional and external macroeconomic volatility parameters concurrently influence key performance indicators among economies in the sub-region. Tables 2 and 3, on the other hand, verify how macroeconomic volatility (domestic/regional or external) independently influences regional macroeconomic indicators. The separate test results presented in Tables 2 and 3 are meant to highlight how key regional macroeconomic indicators relate to specific volatility parameters in the absence of the others. These highly unlikely scenarios (the presence of only domestic/regional or only external macroeconomic volatility) are meant to afford the study a means of weighing the case for, and against, orienting regional policies towards reducing a specific form of macroeconomic volatility. For instance, if specific sources of volatility are determined to have significant constraining effects on regional economic performance, such information could help policy makers design policies specifically geared towards minimizing those sources of volatility. The outcome of such an analysis could further help utilize resources efficiently by focusing on the specific sources of volatility projected to have a significant negative impact on key regional macroeconomic variables. Table 1 presents the results of the combined scenario, i.e., how both regional and external macroeconomic volatility parameters jointly impact key macroeconomic variables in the sub-region.
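A minimal sketch of how a specification like equation (2) could be estimated, with country and year fixed effects entered as dummy variables and standard errors clustered by country, is given below. The variable names continue the hypothetical panel sketched earlier; they are not the authors' actual code or data.

```python
# Sketch: two-way (country and year) fixed effects estimation of a
# specification like equation (2), using dummy-variable least squares.
# `panel` is the hypothetical country-year DataFrame assembled earlier.
import statsmodels.formula.api as smf

formula = (
    "gdp_growth ~ E_GDPgv + E_Investv + E_Outputv "
    "+ D_gdpv + D_investv + D_inflv + D_cablv "
    "+ C(country) + C(year)"        # q_i and q_t from equation (2)
)

fe_model = smf.ols(formula, data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["country"]}
)

# Report only the volatility coefficients, not the fixed-effect dummies.
vol_terms = ["E_GDPgv", "E_Investv", "E_Outputv",
             "D_gdpv", "D_investv", "D_inflv", "D_cablv"]
print(fe_model.params[vol_terms])
print(fe_model.pvalues[vol_terms])
```

Re-running the same regression with only the external, or only the regional, volatility terms would reproduce the separate scenarios reported in Tables 2 and 3.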
External Volatility and Performance of Sub-regional Macroeconomic Indicators
The fixed effects coefficients reported in Table 1 demonstrate that external macroeconomic volatility has statistically significant effects on key sub-regional macroeconomic indicators. The coefficient estimates indicate, for instance, that external GDP growth volatility has a significant negative effect on GDP growth in the sub-region. A review of related time series data suggests that this negative association may reflect the export-oriented nature of most economies in the sub-region. Analysts, for instance, are of the view that being predominantly export dependent increases the likelihood of anemic regional growth during periods of global economic shocks or volatility (external GDP growth volatility). This study also finds that external GDP growth volatility (E-GDPv) negatively impacts investment growth among economies in the sub-region. The same condition (E-GDPv) is further found to have a significant negative impact on inflationary conditions in the sub-region, but a positive impact on regional current account balance conditions. These findings largely suggest that volatility associated with external economic growth (GDP growth) has a significant negative impact on key performance indicators among SSA economies. Apart from these relationships between external GDP growth volatility and key indicators in the sub-region, the study also finds a positive association between external investment growth volatility and investment growth in SSA. To identify the underlying factors responsible for this positive association, the literature on the foreign direct investment and regional economic growth nexus is reviewed for clues. The evidence suggests that this positive link between external investment growth volatility and regional investment growth could be explained by two key factors. The first revolves around the relatively lower cost of production in most economies in the sub-region, which makes it possible to attract specific investments even during periods of declining or constrained investment conditions in most external economies. Some analysts also suggest that the highly inelastic demand for resources from most parts of the sub-region helps to attract and sustain investment growth even during periods of general global investment decline. These conditions, according to the literature, explain to some degree why external investment growth volatility rather induces investment growth in the sub-region.
This study further finds that external investment growth volatility has significant negative effects on the regional current account balance, a condition which suggests that persistent external investment growth volatility constrains current account balance conditions in SSA. External output gap volatility (E-Outputv) in this study measures fluctuations in productivity levels in the global marketplace. The study surmised that extreme volatility in this indicator would be beneficial to economies in the sub-region, in that the condition has the potential to increase demand for exports from the sub-region to compensate for shortfalls in global market productivity. The coefficient estimates accordingly show that external output gap volatility correlates positively with GDP growth in the sub-region. The reported results, however, suggest that external macroeconomic volatility parameters, moderated by domestic/regional conditions, have no statistically significant impact on gross regional savings.
Regional/Domestic Volatility and Performance of Macroeconomic Indicators
The results featured in Table 1 further show that, in an environment characterized by some form of external macroeconomic volatility, regional investment growth volatility is the dominant macroeconomic condition with a significant impact on key regional macroeconomic indicators. Although regional GDP growth and inflation rate volatility also have significant impacts on some regional macroeconomic variables, the effects of regional investment growth volatility tend to be pervasive, impacting almost all of the regional macroeconomic indicators tested in this study. Regional investment growth volatility in this case is found to have negative effects on gross regional savings, current account balance conditions and GDP growth; the study finds, however, that regional investment growth volatility has positive effects on regional investment growth, all things being equal. In other words, regional investment growth volatility ultimately promotes investment growth in the sub-region. This positive relationship between regional investment growth volatility and investment growth is thought to reflect a long-run investment drive phenomenon, where short-run investment growth volatility due to unique regional factors ultimately necessitates and generates the needed impetus for sustained regional investment growth in the long run.
Table 2 reports the effects of external macroeconomic volatility parameters on selected regional macroeconomic indicators, holding regional/domestic volatility parameters constant. It presents coefficient and standard error estimates for a scenario where regional macroeconomic indicators are influenced only by external macroeconomic volatility parameters. Although a regional macroeconomic environment devoid of any domestic economic influence is far-fetched, this approach allows the study to verify the extent to which the effects of external macroeconomic volatility on key growth indicators are moderated or otherwise, by comparing the results with those reported in Table 1. The results in Table 2 show that holding the effects of the domestic macroeconomic volatility parameters constant significantly changes the extent to which external volatility influences key macroeconomic indicators in the sub-region. For instance, the coefficient estimates show that, in the absence of domestic/regional macroeconomic volatility, external GDP growth volatility fails to have a significant impact on GDP growth in SSA. A similar condition is also found in the relationship between external investment growth volatility and the regional/domestic current account balance. The study further finds that external investment growth volatility has a statistically significant negative impact on the domestic/regional inflation rate, although the results in Table 1 suggested otherwise. These results (Table 2) suggest that, in an environment of minimal or no regional macroeconomic volatility threats, external macroeconomic volatility (as measured by the various parameters already stated) tends to influence regional economic indicators differently, a condition which suggests some moderating effect from regional macroeconomic volatility in Table 1. For instance, external GDP growth volatility in this scenario is found to have a relatively weaker negative impact on inflationary conditions compared to the condition reported in Table 1.
Effects of External Macroeconomic Volatility on Regional Macroeconomic Indicators
The results presented in Table 3 focus on the reverse condition, where regional macroeconomic indicators are modeled as a function of only the domestic/regional macroeconomic volatility parameters. The coefficient estimates in this case verify how domestic/regional macroeconomic volatility parameters independently influence critical regional macroeconomic indicators. These estimates presume a relatively closed regional economic enclave devoid of any major external macroeconomic influence. The coefficient estimates show that key macroeconomic indicators in the sub-region are influenced predominantly by the volatility associated with regional investment growth. The results further show that the relationships between regional investment growth volatility and the various regional economic indicators are statistically identical to those reported in Table 1. This outcome suggests that regional investment growth volatility constitutes a dominant feature influencing key macroeconomic variables in the region, with or without the moderating effects of other volatility parameters. It also suggests that, apart from the socio-cultural and geo-political conditions which often perturb growth dynamics in the region, regional investment growth volatility should be a variable of interest for policy makers in the sub-region. These results further imply that, all things being equal, the effects of domestic/regional macroeconomic volatility parameters on key macroeconomic indicators are hardly moderated or influenced by external macroeconomic conditions, since the coefficients of the various indicators barely change in the two cases tested (comparing the Table 1 and Table 3 estimates for domestic investment volatility).
Concluding Remarks
This study verified the dynamic interactions between disaggregated macroeconomic volatility parameters and key macroeconomic indicators for the SSA region. I find that, on average, the macroeconomic volatility parameters have a statistically significant negative impact on the selected macroeconomic indicators for the sub-region. The estimated coefficients also show that, in the hypothetical scenario where regional macroeconomic indicators are exposed to only external or only domestic/regional macroeconomic volatility parameters, the external macroeconomic volatility parameters tend to have more influence on macroeconomic indicators in the sub-region than the domestic/regional parameters. Finally, the study also finds that the effect of domestic/regional macroeconomic volatility on performance indicators in the region is mostly dominated by the volatility associated with domestic/regional investment growth. These findings suggest that, in order to ensure sustained regional growth, policies geared towards fostering macroeconomic stability should target minimizing the effects associated with specific external macroeconomic volatility parameters and with instability in regional investment growth. Successful implementation of such policies could help the sub-region manage the effects of these macroeconomic conditions and augment efforts aimed at creating the environment critical for sustained economic growth.
Figure 1 charts regional growth dynamics between 1980 and 2011, as well as the projected growth trend to 2016. It also shows the effects of the 2008 global financial crisis on growth conditions in the sub-region.
|
2018-12-11T12:02:53.492Z
|
2012-09-05T00:00:00.000
|
{
"year": 2012,
"sha1": "94aa4e319003538ed318eaaf88934134c36fdb8a",
"oa_license": "CCBY",
"oa_url": "https://ccsenet.org/journal/index.php/ijef/article/download/20310/13634",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "94aa4e319003538ed318eaaf88934134c36fdb8a",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
}
|
268799449
|
pes2o/s2orc
|
v3-fos-license
|
Knowledge of nursing staff before and after training on incontinence-associated dermatitis
ABSTRACT Objective: To verify the knowledge of nursing staff before and after training on incontinence-associated dermatitis. Method: A before-and-after educational intervention study carried out with nursing staff from the medical and surgical clinics and the intensive care unit of a university hospital in June 2023. The training took place over three meetings. Data were collected using a questionnaire administered immediately before and after the training. McNemar's test for dependent samples was used to compare before and after training. Results: 25 nurses and 14 nursing technicians took part. The items that showed statistical significance were related to the identification and correct differentiation of incontinence-associated dermatitis and pressure injury, and to the correct way to cleanse the skin. Conclusion: The training of the nursing team made it possible to assess their knowledge of how to identify, prevent and treat incontinence-associated dermatitis.
INTRODUCTION
Incontinence-associated dermatitis (IAD) is a common ailment in patients with fecal and/or urinary incontinence. It may cause discomfort, pain, burning, itching or tingling in the affected areas. Clinical signs include erythema (ranging from pink to red), a whitish appearance and swelling of the surrounding skin (indicating maceration) and poorly demarcated borders (1).
Pressure injury (PI), on the other hand, is damage to the skin due to intense and/or prolonged pressure on a bony prominence, or related to the use of medical devices, and is often confused with IAD in its early stages (2). However, IAD is a risk factor for PI, which is considered a preventable adverse event (3,4).
IAD has a significant impact on the health of hospitalized patients. In a survey of 5,342 adult patients in intensive care units in 36 states in the United States (USA), more than a third of patients (46.6%) had incontinence of urine, feces or both. The overall prevalence of IAD was 21.3%, and the prevalence of IAD among patients with incontinence was 45.7%. Just over half of the IAD cases were classified as mild (52.3%), with the remainder moderate (27.9%) or severe (9.2%). The prevalence of PI in the sacral region among individuals with incontinence was 17.1%. Multivariate analysis revealed that both the presence of IAD and immobility were associated with a significantly increased likelihood of developing PI in the sacrum (5). The total hospital cost index is 1.2 times higher for incontinent patients and 1.3 times higher for patients being treated for IAD (3).
The prevention and treatment strategies for IAD and PI are different. The nursing team plays a key role in the prevention and proper management of IAD through the implementation of specific measures, such as the use of appropriate barrier products, hygiene, humidity and incontinence control, as well as education and guidance for patients and caregivers (6).
By understanding the differences between IAD and PI, nurses can identify and intervene early, adopting specific approaches for each condition. In addition, nursing staff should be aware of the risk factors associated with both conditions, such as immobility, poor nutrition, advanced age and diaper use, in order to implement appropriate preventive strategies and ensure quality care for patients (2). Therefore, early detection, prevention and treatment of IAD require nursing professionals to understand the physiological aspects of the skin in order to correctly identify and differentiate skin lesions, as well as to intensify care through the nursing process, linked to evidence-based protocols (7).
In this sense, it is necessary to develop permanent education actions to improve professionals' care practices in relation to IAD (8). This includes the implementation of courses and in-service training. The implementation of educational programs fosters a positive attitude, as it builds an adequate level of knowledge on the pillars of care, as well as on the basic patient safety precepts related to the subject (9).
It is essential that nurses improve their knowledge and bring it to the team, in pursuit of care based on scientific evidence. Health professionals must seek qualifications, but health institutions must also provide training to generate new ideas and exchange experiences, with quality time set aside for this (10).
The nursing team's lack of knowledge regarding the assessment, prevention and classification of PI is linked to the quality of care provided; this shows that professionals need further qualification to improve their knowledge. It should also be noted that nursing is an indispensable part of the multi-professional team that provides ongoing care to health service users and is committed to providing qualified care grounded in scientific and technical knowledge (10).
Although there are studies on the identification, prevention and treatment of IAD, there is a lack of national scientific publications on the subject. Investigating the knowledge of the nursing team, considering the national reality, will therefore provide an overview of what this population knows about IAD.
This highlights the importance and relevance of this work in strengthening scientific research and improving nursing professionals' knowledge about IAD and the aspects involved in its detection, proper management and prevention. "Such statements are of great importance to subsidize and assist in the design of comprehensive and effective health care (11)." In order to contribute to the consolidation of knowledge regarding care for IAD, the aim of this study was to verify the knowledge of the nursing team before and after training on incontinence-associated dermatitis.
The research hypothesis tested was: the proportion of right and wrong answers is different before and after training.
Type of Study
Quasi-experimental, before-and-after study.
Site
The research was conducted at a University Hospital in the city of Fortaleza -Ceará, administered by the Brazilian Hospital Services Company (EBSERH), a public company governed by private law, linked to the Ministry of Education, with the aim of providing medical and hospital care services.
Study Sample
The sample was a convenience sample, with the following inclusion criterion: being a nursing professional providing direct care to patients in the medical and surgical units or the Intensive Care Unit (ICU) of the study hospital. The exclusion criterion was failure to answer either the pre-test or the post-test questionnaire.
A total of 42 professionals took part in the course. Of these, 3 professionals (2 nurses and 1 nursing technician) were excluded for not completing the pre- and/or post-tests. Thus, the final study sample consisted of 39 professionals: 25 nurses (64.1%) and 14 nursing technicians (35.9%). The course was planned in April 2023 and processed through the virtual environment of the Electronic Information System (SEI) platform, with a project describing the training plan and a letter of consent from the institution's nursing inpatient unit coordinator authorizing its release.
Data Collection and Study Period
Once the course had been assessed, approved and cleared for implementation by the institution's People Development Unit, a registration link was created for participants via the EBSERH Corporate Education School Platform (3EC), which aims to promote training for network employees.
It was publicized by the nursing units' immediate supervisors in the sectors' WhatsApp groups. Thus, all the nursing staff, nurses and nursing technicians in the sectors were invited to take part in the training, but were informed that participation in the survey would be optional.
Three classes were offered, two online and one face-to-face. The face-to-face class took place in a room at the institution, in the afternoon from 2 pm to 3:45 pm. The two synchronous online classes (using Microsoft Teams) took place in the evening, from 7:15 pm to 9 pm, on different dates. Each class had 25 places available for nursing professionals (nurses and nursing technicians) and nursing residents, with a duration of 1 hour and 45 minutes. The course was taught by three nurses, the coordinator and two facilitators, and took place in June 2023.
The Informed Consent Form (ICF) and the pre-test and post-test instruments were registered on the 3EC Platform and on Google Forms to assess the nursing team's knowledge of IAD.
The data collection instrument (pre- and post-test) on the team's knowledge of IAD had its content previously evaluated by two specialist nurses, one in stomatherapy and the other in dermatology, both with extensive practical experience in the area.
The questionnaire consisted of two parts: the first refers to the participants' characterization data (age, gender, work unit, professional category, qualifications and length of training); the second part contains 14 items. The first two items deal with identifying IAD and recognizing the difference between IAD and PI; the answers to these first two items were yes or no. The remaining items ask questions about the identification, prevention and treatment of IAD, with right or wrong answers (11a).
Before the training began, the professionals who wanted to take part in the research were asked to sign the ICF. The researcher then sent the link to the pre-test and, after the training, the link to the post-test. The link to access the instruments was made available on the 3EC Platform and on Google Forms via the Microsoft Teams chat. Only after the informed consent form had been signed did the researcher obtain the data relating to the study.
At the end of the training, the Reaction Assessment was administered via a link sent by the People Development Unit in the online classes and via QR code in the face-to-face class. It is worth noting that all participants received a digital certificate of participation from the institution via the 3EC Platform, regardless of whether or not they had taken part in the research by completing the pre- and post-test instruments.
Data Analysis and Processing
The research results were exported to the Statistical Package for the Social Sciences (SPSS) software, version 20.0. There was a statistically significant difference (p < 0.05) between the pre- and post-test answers to the first two questions: 100% of the professionals stated in the post-test that they knew how to identify and differentiate IAD from PI (X2 = 6.125; p = 0.008). Two other items showed a statistically significant difference when compared before and after the training: items 8 and 9. Item 8: "When cleaning the skin, liquid soap with an acid pH should be preferred" (X2 = 13.067; p = 0.001); and item 9: "In the absence of suitable products for cleaning skin exposed to humidity, it is preferable to clean the skin with soap and water" (X2 = 17.053; p = 0.001) (Table 2). Table 2 shows the number of correct and incorrect answers for each item in the pre-test and post-test.
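For readers wishing to reproduce this type of paired pre/post comparison, the sketch below applies McNemar's test to a 2x2 table of correct and incorrect answers for a single item; the counts are invented for illustration and do not correspond to the study's data.

```python
# Sketch: McNemar's test for paired pre/post answers on one questionnaire item.
# The 2x2 table counts below are invented for illustration only.
from statsmodels.stats.contingency_tables import mcnemar

# Rows: pre-test (correct, incorrect); columns: post-test (correct, incorrect).
table = [[14, 2],
         [18, 5]]

# exact=False uses the chi-square approximation with continuity correction,
# which produces the kind of X2 statistics reported above.
result = mcnemar(table, exact=False, correction=True)
print(f"statistic = {result.statistic:.3f}, p-value = {result.pvalue:.3f}")
```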
DISCUSSION
The results of this study concur with those of a study (12) on IAD carried out with 30 nursing professionals from a university hospital, the majority of whom were female nurses aged between 30 and 39.
The aim of this study was to train nurses and nursing technicians through continuing education, so that they are able to identify and prevent IAD. In the legal sphere, the Federal Nursing Council (COFEN), the federal authority that regulates nursing practice, regulates the care of patients with wounds through Resolution No. 567 of January 29, 2018, which states that nurses, nursing technicians and nursing assistants must keep up to date by participating in continuing education programs (13).
A Brazilian study (9) evaluated the knowledge of nursing professionals in a medical clinic unit about the skin conditions IAD and PI, with the participation of nurses (59%) and nursing technicians (57%), and noted that training in health services improves nursing teams' knowledge regarding the early identification of changes related to IAD and PI.
On the topics of identification and the difference between IAD and PI, the vast majority of participants answered correctly in both the pre- and post-test. This contrasts with the findings of a study (14) which aimed to assess nurses' knowledge of IAD in order to understand the extent of the problem in a teaching hospital; the results regarding knowledge of IAD identification showed that nurses know the definition, but wrongly attributed the clinical identification of PI to IAD, demonstrating difficulty in differentiating between the two types of lesions.
Fragile knowledge on the part of the multi-professional team, especially nursing professionals, is a risk factor for the development and inappropriate management of cases of IAD, since they have difficulty differentiating it from other types of lesions, such as PI (9). Item 8 was statistically significant (p = 0.001), with 8 (20.5%) participants answering "true" in the pre-test, compared with 23 (59.0%) giving the correct answer in the post-test. Item 9 also showed statistical significance (p = 0.001): 4 (10.3%) participants marked the item as false (the correct answer) in the pre-test, while in the post-test the number of correct answers rose to 23 (59.0%).
It can be seen that one question complemented the other. The literature states that the pH of healthy skin is between 5.0 and 5.5, so it is beneficial to choose a cleansing agent with a low pH, to use mild and non-irritating surfactants and soft cloths, and to avoid alkaline products, such as soaps, which can shift the pH of the skin surface to a more basic environment, promoting bacterial growth (15-17).
"In addition, cleansing using mechanical movements and alkaline pH soaps can lead to skin breakdown by removing its natural lipids, which serve as a protective barrier.Cleaning should preferably be carried out with liquid soaps with a neutral or acidic pH, but as most conventional soaps have an alkaline pH, the use of rinse-free cleaning agents with an acidified pH has been recommended (8) ." It can be seen that after the explanations of the ideal product for cleaning the skin, there was an improvement in the number of correct answers.However, these products are not always available in health services, so it was emphasized that on this occasion it is preferable to use only water.
Regarding the treatment approach for category 1A, item 11, there was a significant number of correct answers, which indicates the team's knowledge of IAD treatment in this category. A study (18) underlines the advice that, for patients with category 1 IAD (red and intact skin), in addition to gentle cleansing, it is recommended to use an acrylate terpolymer film or a product based on petrolatum or containing dimethicone.
With regard to the composition of the products used for both the prevention and the treatment of IAD, item 14, it is of the utmost importance to be familiar with the various types of skin protector and their forms of presentation available in the institution, as this facilitates nursing care for patients affected by IAD.
Research (19) points out that, after carefully assessing the skin and identifying the patients most at risk of IAD, skin cleansing should be carried out and barrier products containing petrolatum and dimethicone, zinc oxide-based creams or liquid acrylate film should be used, as they have a moisturizing and barrier function, although there is still no consensus on the best product to use.
Concerning the treatment of IAD in category 2, item 12, the products used in treatment were covered in the training. However, as the institution has a team of nurses who specialize in skin care, it was advised that, if there are any doubts about treatment, the specialists should be asked for their opinion in order to assess and guide the topical conduct, since the moisture associated with IAD can be a risk factor for developing PI; proper monitoring and management would therefore prevent this adverse event.
This guidance is in line with a study (7) which showed that 86% of nurses have the knowledge to manage mild, moderate or severe IAD, to manage this dermatological complication and to differentiate it from stage 1 PI. However, for category 2 IAD, the nurse should ask the institution's skin lesions/wound healing commission to evaluate the case and guide appropriate conduct in handling this category.
Still on the treatment of category 2 IAD, if signs of infection are observed, it is indicated to take a sample for microbiological analysis, and the result should be used to decide on the therapy (e.g., antifungal cream, antibiotic, anti-inflammatory product) (18).
However, the training reinforced that the decision on medication is a medical matter; the nurse can point out what they have identified in the skin assessment and discuss the best course of action, to be put in writing on the medical prescription (by the doctor) so that the nursing team can follow it appropriately.
At the end of the training, a reaction survey was carried out. Participants considered that the content was applicable to their professional reality and that they had gained new knowledge. The findings were "corroborated by a recent study (12), which also ran a course on IAD through the social network Instagram; after the final evaluation and the feedback received from the course participants, it was noticeable that the action managed to achieve good adherence from the participants, with several positive comments, demonstrating that educational actions can be conducted through unconventional means, such as social media, so commonly used by people for communication and leisure." In another study on the same subject (20), it was observed that the educational intervention provoked reflections based on practice but grounded in scientific knowledge, and led to changes in the group involved, with a better understanding of the subject and more assertive decision-making.
In item 13, which highlighted factors differentiating PI from IAD, even though there was no statistical significance, 10 (25.6%) of the participants marked as right an item that was wrong. The differential diagnosis between PI and IAD is based on visual examination and patient history (21). Incorrect classification has significant implications for prevention, treatment and the comparative assessment of quality of care, as suggested by other studies (22). In addition, IAD has been reported as PI, negatively impacting institutional epidemiological indicators.
The nursing team's knowledge was investigated because of the need to identify whether health professionals are able to recognize and prevent IAD in care practice, and because workplace training, as continuing education, updates the team on appropriate and current care, thus enriching clinical nursing care in this area.
This research had limitations regarding the number of participants and the fact that it was carried out at a single institution. The 3EC platform used for the courses offered at the institution is a recently introduced technology, which limited the number of participants, as some professionals found it difficult to register for the course and to download the Microsoft Teams application to attend the online classes.
CONCLUSION
Regarding the tested hypothesis, the proportion of right and wrong answers differed before and after the training; although only items 8 and 9 reached statistical significance, a clear improvement in the correct answers was seen in the post-test. It is known that not everyone is aware of the pathophysiology of IAD and, consequently, of the clinical reasoning for using suitable products for skin hygiene. However, this training provided an environment for exchanging knowledge and updating content not only on IAD but also on PI, which has an impact on care practice.
Although the other items were not statistically significant, the number of correct answers increased in the post-test, showing that the doubts and difficulties present before the training were reflected in this improvement. Therefore, the proportion of right and wrong answers was different before and after the training.
Given the need for the nursing team to be able to correctly identify, prevent and treat IAD, continuing education in service is important.
Where was written:
The questionnaire consisted of two parts: the first refers to the participants' characterization data (age, gender, work unit, professional category, qualifications and length of training); and the second part contains 14 items. The first two items deal with identifying and recognizing the difference between DAI and LP. The answers to these first two items were yes or no. The rest of the items ask questions about the identification, prevention and treatment of IAD, with right or wrong answers.
Now read:
The questionnaire consisted of two parts: the first refers to the participants' characterization data (age, gender, work unit, professional category, qualifications and length of training); and the second part contains 14 items. The first two items deal with identifying and recognizing the difference between DAI and LP. The answers to these first two items were yes or no. The rest of the items ask questions about the identification, prevention and treatment of IAD, with right or wrong answers (11a).
The topic of the training was Safe Care in the Prevention and Treatment of Incontinence-Associated Dermatitis. The content covered was: Identification of IAD, Risk Factors, Prevention and Treatment of IAD, Difference between PI and IAD, Notification and Indicators of IAD.
McNemar's test for dependent samples was used to compare the groups before and after the training. A p-value of <0.05 was considered significant. The research was approved in 2022 by the Research Ethics Committee of the State University of Ceará (UECE) under opinion no. 5.268.049 and by the Walter Cantídio University Hospital/Federal University under opinion no. 5.288.935, in accordance with Resolution no. 466 of December 12, 2012. Of the total number of participants, 35 (89.74%) were female, with an average age of 40. As for length of training, 31 (79.49%) had been working for more than 10 years. In terms of qualifications, 22 (56.41%) had a specialization degree. As for their work unit, 19 (48.72%) were from surgical wards.
Table 1 -
Sociodemographic characteristics of the nursing team participating in the training -Fortaleza, CE, Brazil, 2023.
Table 2 -
Number of right and wrong answers in the pre-test and post-test and comparison using McNemar's test -Fortaleza, CE, Brazil, 2023.
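As a rough illustration of the paired pre-test/post-test comparison referenced in Table 2, the sketch below computes McNemar's statistic for a single questionnaire item; the discordant counts and the use of a continuity correction are assumptions for illustration and do not come from this study.

```cpp
#include <cmath>
#include <iostream>

// McNemar's test for paired right/wrong answers on a single item.
// b = participants wrong before but right after the training,
// c = participants right before but wrong after the training.
// Returns the chi-squared statistic (1 degree of freedom) with a
// continuity correction; values above ~3.84 correspond to p < 0.05.
double mcnemar_statistic(int b, int c) {
    if (b + c == 0) return 0.0;                              // no discordant pairs
    double diff = std::abs(static_cast<double>(b - c)) - 1.0; // continuity correction
    return (diff * diff) / static_cast<double>(b + c);
}

int main() {
    // Hypothetical discordant counts for one item (not study data).
    int improved = 12;   // wrong -> right
    int worsened = 2;    // right -> wrong
    std::cout << "chi-squared = " << mcnemar_statistic(improved, worsened) << "\n";
}
```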
Evaluation of a new and simple classification for endoscopic sinus surgery
Objective: In 2013, the Japanese Rhinologic Society proposed a simple classification for endoscopic sinus surgery (ESS). This classification consists of five procedures (type I, fenestration of the ostiomeatal complex, with uncinectomy and widening of the natural ostium; type II, single-sinus procedure, with manipulating the inside of the sinus; type III, polysinus procedure; type IV, pansinus procedure; type V, extended procedure beyond the sinus wall). The clinical relevance of this classification in chronic rhinosinusitis (CRS) and paranasal sinus cyst was evaluated. Study Design: A retrospective validation study. Methods: A total of 122 patients (195 sinuses) who underwent ESS in Okayama University Hospital in 2012 were enrolled. The relationships between the ESS classification and the clinical course, including the operation time, bleeding amounts during surgery and postoperative changes of olfaction, the computed tomography (CT) score, and nasal airway resistance were analyzed. Results: A total of 195 ESS procedures were classified into type I (n = 3), type II (n = 17), type III (n = 91), type IV (n = 82), and type V (n = 2). The major phenotypes of type II, III, and IV ESS were paranasal sinus cyst (68%), CRS without nasal polyps (77%), and CRS with nasal polyps (55%), respectively, and the difference was significant. The degree of ESS based on this classification was positively and significantly correlated with the operation time and bleeding amounts. As a whole, olfaction, CT score, and nasal airway resistance were significantly improved after surgery. The degree of improvement was similar between type III and type IV ESS. Conclusion: This simple classification for ESS reflected the perioperative burden of the disease.
Endoscopic sinus surgery (ESS) is currently a standard surgical option for medically refractory chronic rhinosinusitis (CRS) and paranasal cyst. A recent study in the United States demonstrated that the incremental cost-effectiveness ratio for ESS versus medical therapy alone was approximately $14,000 per quality-adjusted life year in patients with CRS, which indicated that ESS is a cost-effective intervention. 1 ESS for such diseases includes various procedures, from simple uncinectomy to extended procedures beyond the sinus wall, such as the modified Lothrop procedure, and there is no universal classification of ESS. [2][3][4] Thus, it is difficult to compare and evaluate each surgical procedure both clinically and economically in the present circumstances.
In 2013, the Japanese Rhinologic Society proposed a new and simple surgical classification for ESS to aim for standardization of surgical procedures and the functional evaluation of the surgery. 5 This new classification consists of five types according to the extent of surgery as follows: type I, removal of the ostiomeatal complex; type II, the single-sinus procedure; type III, the polysinus procedure; type IV, the pansinus procedure (full-house ESS); and type V, the extended procedure beyond the sinus wall (e.g., the modified Lothrop procedure). In 2014, the Japanese Health Insurance System set the flat surgical fee for each classification (type I, ¥ 36,000 (US $320); type II, ¥ 100,000 (US $880); type III, ¥ 245,000 (US $2,160); type IV, ¥ 319,900 (US $2,820); and type V, ¥ 400,000 (US $3,530)). A retrospective study was performed to evaluate whether this new classification was clinically relevant in patients who underwent ESS in Okayama University Hospital, the tertiary referral academic hospital in Okayama Prefecture, Japan.
Subjects
A cohort of 122 patients (195 sinuses; age range, 8 -80 years; mean age, 50.9 years; 70 males and 52 females) who underwent ESS in Okayama University Hospital in 2012 was enrolled. This cohort included 27 patients with CRS without nasal polyps (35 sinuses), 72 patients with CRS with nasal polyps (CRSwNP) (135 sinuses), and 23 patients with a paranasal sinus cyst (25 sinuses) defined by using the criteria reported in a European position paper on rhinosinusitis and nasal polyps. 6 All the patients were refractory to standard medical treatments, including long-term, low-dose macrolide therapy. All the patients underwent computed tomography (CT) examination within 1 month before surgery.
The ESS procedure was selected based on the presence of mucosal thickening in the ostium and/or inside the wall of each sinus on CT or the presence of inflammation, such as mucosal swelling and discharge, on endoscopic inspection during surgery. Concomitant septoplasty and turbinate surgery (submucosal turbinate bone dissection) was performed based on endoscopic inspection and coronal CT, which showed nasal cavity obstruction that might interfere with the ESS procedure. Primary and revision ESS procedures were performed in 111 and 11 patients, in 175 and 20 sinuses, respectively. Unilateral and bilateral ESS procedures were performed in 49 and 73 cases, respectively. All the surgeries were performed or supervised by a single surgeon (M.O.) with >20 years of experience with ESS. This study was approved by the human research committee of the Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, and all the patients provided their informed consent before surgery.
Outcomes
The perioperative outcomes of operation time, bleeding amounts during surgery, and surgical complications were observed for each patient. The bleeding amount was calculated based on suction traps. A very small amount of bleeding was calculated as 10 mL. To evaluate radiologic improvement by ESS, the examination was performed again 6 months after the surgery in 69 patients with CRS who agreed to undergo a repeated examination. The radiologic severity of CRS was graded by using the Lund-Mackay system. 6 Preoperative and 3-month postoperative anterior rhinomanometry findings were evaluated in 45 patients with bilateral CRS (90 sinuses) who had had moderate-to-severe nasal congestion. Inspiratory nasal airway resistance at 100 Pa was determined. 7 When the nasal airway resistance was unmeasurable due to complete nasal obstruction, it was defined as 5 Pa/cm³/s. Similarly, preoperative and 3-month postoperative olfactory tests (T&T olfactory test) were performed in 47 patients with subjective moderate-to-severe hyposmia or anosmia. 8 To evaluate the relationship between the degree of ESS classified by the new operative criteria and perioperative outcomes, an ESS score and the total endonasal surgery score were used. The ESS score is the sum of the proposed classification number of both sides (e.g., type III ESS on the right side and type IV ESS on the left side results in a score of 7). The score for concomitant septoplasty and unilateral turbinate surgery was set to one point (e.g., septoplasty and bilateral inferior turbinate surgery results in a score of 3). The total endonasal surgery score was set as the sum of the ESS score and the concomitant surgery score.
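As a minimal illustration of how these composite scores are built, the sketch below encodes the scoring rules described above; the patient data in main() are invented for the example and the field names are assumptions, not part of the published scoring definition.

```cpp
#include <iostream>

// Per-side surgical data for one patient (names are illustrative).
struct SideSurgery {
    int essType = 0;          // 0 = no ESS on this side, otherwise 1..5 (type I..V)
    bool turbinateSurgery = false;
};

struct PatientSurgery {
    SideSurgery right, left;
    bool septoplasty = false;
};

// ESS score: sum of the classification numbers of both sides
// (e.g., type III right + type IV left = 7).
int essScore(const PatientSurgery& p) {
    return p.right.essType + p.left.essType;
}

// Concomitant surgery score: one point for septoplasty and one point
// per side of turbinate surgery.
int concomitantScore(const PatientSurgery& p) {
    return (p.septoplasty ? 1 : 0)
         + (p.right.turbinateSurgery ? 1 : 0)
         + (p.left.turbinateSurgery ? 1 : 0);
}

// Total endonasal surgery score = ESS score + concomitant surgery score.
int totalEndonasalScore(const PatientSurgery& p) {
    return essScore(p) + concomitantScore(p);
}

int main() {
    // Example: type III ESS on the right, type IV on the left,
    // septoplasty plus bilateral turbinate surgery.
    PatientSurgery p{{3, true}, {4, true}, true};
    std::cout << "ESS score: " << essScore(p) << "\n";                        // 7
    std::cout << "Total endonasal score: " << totalEndonasalScore(p) << "\n"; // 10
}
```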
Statistical Analysis
Values are given as medians. The χ2 test was used for comparisons of two factors. The nonparametric Mann-Whitney U test and the one-way analysis of variance (ANOVA), followed by post hoc testing, were used to compare data between two and multiple groups, respectively. The Wilcoxon signed rank test was used to analyze the data within each group. Correlation analyses were performed by using the Spearman rank correlation. p values of <0.05 were considered significant. Statistical analyses were performed with GraphPad Prism software (version 6.04; GraphPad Software, Inc., La Jolla, CA).
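For reference, a minimal sketch of the Spearman rank correlation used for the score/outcome analyses is shown below; it assigns average ranks to tied values and then computes Pearson's correlation on the ranks. The sample vectors are invented for illustration and do not reproduce the study data.

```cpp
#include <algorithm>
#include <cmath>
#include <iostream>
#include <numeric>
#include <vector>

// Assign ranks (1-based), giving tied values the average of their ranks.
static std::vector<double> rankWithTies(const std::vector<double>& x) {
    const size_t n = x.size();
    std::vector<size_t> idx(n);
    std::iota(idx.begin(), idx.end(), 0);
    std::sort(idx.begin(), idx.end(), [&](size_t a, size_t b) { return x[a] < x[b]; });

    std::vector<double> ranks(n);
    size_t i = 0;
    while (i < n) {
        size_t j = i;
        while (j + 1 < n && x[idx[j + 1]] == x[idx[i]]) ++j;   // extent of the tie group
        double avg = (static_cast<double>(i) + static_cast<double>(j)) / 2.0 + 1.0;
        for (size_t k = i; k <= j; ++k) ranks[idx[k]] = avg;
        i = j + 1;
    }
    return ranks;
}

// Spearman's rho = Pearson correlation of the rank vectors.
double spearman(const std::vector<double>& x, const std::vector<double>& y) {
    const auto rx = rankWithTies(x), ry = rankWithTies(y);
    const double n = static_cast<double>(x.size());
    const double mx = std::accumulate(rx.begin(), rx.end(), 0.0) / n;
    const double my = std::accumulate(ry.begin(), ry.end(), 0.0) / n;
    double num = 0.0, sx = 0.0, sy = 0.0;
    for (size_t k = 0; k < x.size(); ++k) {
        num += (rx[k] - mx) * (ry[k] - my);
        sx  += (rx[k] - mx) * (rx[k] - mx);
        sy  += (ry[k] - my) * (ry[k] - my);
    }
    return num / std::sqrt(sx * sy);
}

int main() {
    // Hypothetical ESS scores and operation times (minutes) for five patients.
    std::vector<double> essScore = {3, 4, 6, 7, 8};
    std::vector<double> opTime   = {75, 90, 110, 150, 140};
    std::cout << "Spearman rho = " << spearman(essScore, opTime) << "\n";   // 0.9
}
```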
Classification of Patients Who Received ESS
In 2012, 195 ESS procedures were performed in 122 patients. According to the new proposed criteria, 3, 17, 91, 82, and 2 ESS procedures were classified as type I (1.5%), type II (8.7%), type III (46.7%), type IV (42.1%), and type V (1.0%), respectively. As shown in Table 1, no significant differences were seen in age (p = 0.681, ANOVA). However, sex differences were seen according to the criteria (p = 0.016, χ2 test), in which type I, III, and IV ESS were male predominant, whereas type II and type V ESS were female predominant. Disease phenotype was significantly different (p < 0.001, χ2 test), in which the majority of type IV ESS procedures were performed for patients with CRSwNP, and the majority of type II ESS procedures were performed for those with paranasal sinus cyst.
Of the 195 sinuses, revision ESS was performed for 20 sinuses. Significant differences in the proportion of revision ESS procedures were seen with the new criteria, in which type IV ESS contained more revision ESS procedures (n = 14 [17.1%]) than the other types (p < 0.001, χ2 test). Fifty-seven septoplasties were concomitantly performed in 51 bilateral (38 in type III and 64 in type IV) and 6 unilateral ESS (all type III), which indicated that the septoplasty was independently correlated with the type of procedure (p < 0.001, χ2 test). Similarly, 35 patients received 59 turbinate reduction surgeries. Among these, 5 surgeries were performed without ipsilateral ESS; the other 54 surgeries were concomitantly performed in type I (n = 2), type III (n = 17), and type IV (n = 35) ipsilateral ESS, which indicated that the turbinate reduction surgery was also independently correlated with the type of procedure (p < 0.001, χ2 test).
Relationship Between the New Classification of ESS and Perioperative Outcomes in Patients with CRS
Of the 99 patients with CRS (170 sinuses), the operation time was recorded in 96. A significant positive correlation between the ESS score and operation time was seen (r = 0.652; p < 0.001, Spearman rank correlation test) (Fig. 1 A). Bleeding amounts during surgery were carefully recorded in 90 patients. A weak though significant positive correlation between the ESS score and bleeding amount during surgery was also seen (r = 0.280; p = 0.007) (Fig. 1 B). A total of 57 septoplasty and 59 turbinate surgeries were performed in 57 and 35 patients, respectively. A significant correlation was seen between the total endonasal surgery score and the operation time (r = 0.617, p < 0.001) (Fig. 1 C) but not between the total endonasal surgery score and the bleeding amount (r = 0.159, p = 0.143) (Fig. 1 D). Postoperative bleeding was a major complication, as seen in six cases. No significant differences in the rate of this complication among the new classification types were detected (p = 0.255, χ2 test). No other major complications, such as CSF leaks and visual disturbance, were seen.
Relationship Between the New Operative Classification of ESS and Physiologic Characterizations of Patients with CRS
The number of patients with CRS enrolled; the number of patients who underwent preoperative and 3-month postoperative testing, including rhinometry and olfactory testing; and the number of patients who received a 6-month postoperative CT examination are shown in Fig. 2. Results of ANOVA indicated significant differences in the preoperative CT score among the classification types (p < 0.001), in which type V showed the highest score (median, 12.0), followed by type IV (median, 8.4), type III (median, 5.2), and type I (median, 4.3) in patients with CRS. Post hoc testing further showed a significant difference between type III ESS and type IV ESS (p < 0.001) and between type III ESS and type V ESS (p = 0.016) (Fig. 3 A). A significant positive correlation between the degree of ESS and the preoperative CT score was also seen (r = 0.547; p < 0.001, Spearman test). A postoperative CT examination was performed in 69 patients (121 sinuses: type I, n = 1; type III, n = 56; type IV, n = 62; and type V, n = 2). As a whole, significant improvement was seen (p < 0.001, Wilcoxon signed rank test). Results of the ANOVA indicated a significant difference in the degree of improvement among type III, type IV, and type V ESS (p = 0.005). Post hoc testing showed a significantly greater improvement in type V ESS than in type III ESS (p = 0.035) (Fig. 3 B).
A total of 47 patients with CRS completed both preoperative and 3-month postoperative olfactory tests. Because all the patients underwent bilateral ESS, the more-severe type of ESS in each patient was recorded (type III, n = 14; type IV, n = 32; type V, n = 1) (Fig. 4 A). No significant difference in the preoperative average perception threshold on the T&T olfactory test was seen between type III and type IV ESS (p = 0.666, Mann-Whitney U test). As a whole, the average perception threshold 3 months after surgery was significantly lower (p < 0.001, Wilcoxon signed rank test). Both type III (p = 0.004) and type IV (p = 0.006) ESS showed significant improvements in the average perception threshold. Improvement of olfaction was seen in 28 of 47 patients (59.6%). No significant difference in the degree of improvement was seen among type III, IV, and V ESS (p = 0.376, χ2 test) (Fig. 4 B). Nasal airway resistance was examined both before and 3 months after surgery in 45 patients with CRS (90 sinuses: type III, n = 28; type IV, n = 60; type V, n = 2). Results of an ANOVA indicated no significant difference in the preoperative resistance among the groups (p = 0.121) (Fig. 5 A). As a whole, a significant reduction in the resistance was seen 3 months after surgery compared with before surgery (p < 0.001, Wilcoxon signed rank test); both type III and type IV ESS showed significantly decreased resistance (both p < 0.001) (Fig. 5 B). Similar to before surgery, no significant difference in postoperative resistance was seen among the groups (p = 0.653). A significant decrease in nasal resistance was seen even in patients who did not undergo septoplasty.
DISCUSSION
The present study was a pilot analysis to validate the proposed classification system in patients with CRS and paranasal sinus cyst. ESS is regarded as standard surgery for these diseases and is widely performed in each facility. For example, the Japanese Diagnosis Procedure Combination database showed that 50,734 patients with CRS and/or nasal polyposis from 706 hospitals underwent ESS over 51 months in Japan. 9 However, unlike the Wullstein classification of otologic surgery, the classification, in other words grading, of ESS has not been fully standardized. 10 In the present study, it was found that this new classification reflected the surgeon's burden in terms of operation time and bleeding amounts. In addition, significant improvements in radiologic severity, olfaction, and nasal airflow resistance were achieved after surgery regardless of the grade of ESS, which indicated that proper selection of the ESS procedure led to an acceptable postoperative course.
Most patients with CRS underwent type III or type IV ESS among the five types. This reflected that CRS usually involves multiple sinuses. 11 Furthermore, the majority of type IV ESS cases were performed for CRSwNP. This may be due to the increase of intractable eosinophilic rhinosinusitis (ECRS), an endotype of CRSwNP, in Japan, which may progress from ethmoid-dominant inflammation to pansinus inflammation. 12 A recent report by Snidvongs et al. 13 recommended type IV ESS to provide a single sinus cavity in which the frontal, ethmoid, maxillary, and sphenoid sinuses are in communication, followed by nasal irrigation with or without corticosteroids for ECRS. Thus, type IV ESS may become the major procedure in which the proportion of ECRS is high.
Significant differences in sex but not age were seen among the five types, in which types I, III, and IV ESS cases were male predominant. Because type I, III, and IV ESS were mainly performed in patients with CRS but not paranasal sinus cysts, male predominance may be due to the sex difference in patients with CRS who underwent ESS. Although heterogeneous results were seen for the sex difference in the prevalence of CRS, several reports showed a male predominance in patients with CRSwNP who underwent ESS. [11][12][13][14][15] For example, the Japanese Epidemiologic Survey of Refractory Eosinophilic Rhinosinusitis study demonstrated that the male-to-female ratio was 2.2:1 in patients with CRS who underwent ESS. 12 A similar difference was also seen in the Japanese Diagnosis Procedure Combination database, in which 65.4% of the subjects who underwent ESS were male. 9 In fact, when patients with paranasal cysts were excluded, significant sex differences among the ESS classification types were lost (p = 0.119, χ2 test).
Type IV and type V ESS procedures contain more revision surgeries, 17.1 and 100%, respectively, than other ESS types (0 -3.3%). To the best of our knowledge, little is known about the comparison of procedures between primary and revision ESS. Although a recent report from the United States showed that the proportion of type IV ESS was 16.4% in primary ESS, the proportion was not clear in revision ESS. 16 Along with the finding described above that ECRS is increasing in Japan, it is likely to require extended surgery to create a single sinus cavity for patients refractory to primary ESS.
The ESS score was significantly positively correlated with the operation time and bleeding amounts, which indicated that, if the procedure of the new operative classification was more advanced, then operation times and bleeding amounts were greater. The surgical fee for ESS is not well-known worldwide. A recent systematic review reported that the overall procedural cost of ESS, including not only the surgical fee but also other fees, such as anesthetist fees and surgical supplies, ranged from U.S. $1,000 to U.S. $10,500 per adult patient and differed by country. 17 In Japan, there is a universal public health care system, and the surgical fee for ESS is based on the severity according to the operative classification. Thus, the setting of the flat surgical fee based on the new classification seems to be reasonable. However, the total endonasal surgery score was significantly and positively correlated with operation time but not with bleeding amount. This may be due to the low risk of intraoperative bleeding in septoplasty and/or submucosal turbinate bone resection compared with ESS. 18 A significant and positive correlation between the ESS type and the preoperative CT score was also seen in patients with CRS, which indicated that selection of the ESS procedure was principally based on radiologic findings in the present cases. Although a significant radiologic improvement was seen as a whole, more improvement was achieved in type V ESS than with type III ESS. This may be due to the high preoperative CT score in type V ESS (12 points), in which more radiologic improvement can be achieved compared with cases with a low baseline CT score. Nevertheless, the present results indicated that appropriate selection of the new operative procedure led to satisfactory results on radiologic examination.
Preoperative olfaction was not different between type III and type IV ESS. This may be due to the basis of procedure selection in that a particular type of ESS was not chosen based on the olfaction level. A significant decrease in the average perception threshold on the T&T olfactory test was achieved 3 months after surgery, and improvement of olfaction was seen in 59.6% of patients. Previous reports demonstrated that the improvement of olfaction after ESS ranged from 23 to 100%, depending on evaluation conditions (e.g., observation period, method of monitoring olfaction) and patient characteristics, such as nasal polyp formation and concomitant asthma. 19 -21 However, no significant difference in the degree of improvement was seen among types III, IV, and V ESS, which may be because the new classification of ESS does not mention the procedure for the olfactory cleft. Olfactory cleft opacification on CT is known to be a predictive factor of smell recovery after ESS. 22 Consistent with previous reports that used rhinomanometry, a significant improvement in nasal airway resistance was seen 3 months after surgery. 23,24 Because the nasal valve region primarily determines nasal resistance, it may be argued that concomitant surgery for the nasal cavity is a confounding factor that affects the changes in nasal resistance. 25 However, a significant decrease in nasal resistance was seen even in patients who underwent ESS alone. One of the reasons why ESS improves nasal airflow is that alleviation of sinus inflammation by ESS leads to a reduction of nasal mucosal edema and mucus in the nose. 23,24 Together with the finding that significant improvement in nasal airway resistance was seen regardless of ESS type, the present results indicated that appropriate selection of the ESS procedure led to an improvement of nasal airflow by controlling sinus inflammation. One major weakness in this study was that there were no subjective outcomes used. Objective nasal resistance measures do not always reflect patient's subjective nasal obstruction.
CONCLUSION
The current results indicated that the new proposed criteria for ESS were simple and useful both clinically and economically. Because the numbers of type I, II, and V ESS procedures were relatively small in the present study, a future prospective, multicenter analysis with a large number of patients will provide a basis for determining the usefulness of these criteria in the clinical setting for treatment with ESS. In addition, the current payment scheme in Japan may encourage surgeons to perform a type IV sinus surgery when not indicated by the extent of disease. Future research will look at how the classification of surgeries changed before and after 2014 when the payment scheme went into effect.
A Low-Cost Hardware/Software Platform for Lossless Real-Time Data Acquisition from Imaging Spectrometers
In real-time data-intensive applications, achieving real-time data acquisition from sensors and simultaneous storage with the necessary performance is challenging, especially if “no-data-lost” requirements are present. Ad hoc solutions are generally expensive and suffer from a lack of modularity and scalability. In this work, we present a hardware/software platform built using commercial off-the-shelf elements, designed to acquire and store digitized signals captured from imaging spectrometers capable of supporting real-time data acquisition with stringent throughput requirements (sustained rates in the boundaries of 100 MBytes/s) and simultaneous information storage in a lossless fashion. The correct combination of commercial hardware components with a properly configured and optimized multithreaded software application has satisfied the requirements in determinism and capacity for processing and storing large amounts of information in real time, keeping the economic cost of the system low. This real-time data acquisition and storage system has been tested in different conditions and scenarios, being able to successfully capture 100,000 1 Mpx-sized images generated at a nominal speed of 23.5 MHz (input throughput of 94 Mbytes/s, 4 bytes acquired per pixel) and store the corresponding data (300 GBytes of data, 3 bytes stored per pixel) concurrently without any single byte of information lost or altered. The results indicate that, in terms of throughput and storage capacity, the proposed system delivers similar performance to data acquisition systems based on specialized hardware, but at a lower cost, and provides more flexibility and adaptation to changing requirements.
Introduction
Many emerging data-intensive applications in fields such as aerospace, military, or process control are intended to acquire, transfer, process and store large information volumes at sustained rates of hundreds of Mbps. The main features requested from a hardware/software platform to support real-time data-intensive applications are high global I/O throughput, deterministic temporal behavior, and ample storage capacity.
The achievements of the required time response and I/O throughput without any loss of data are essential issues to address in real-time data acquisition systems. The information must be predictably acquired within precise time intervals, performing in parallel the storage of all the information acquired.
During the last few years, these data acquisition systems have continued demanding higher levels of accuracy and strictness. With increasing frequency, associated requirements force the acquisition and storage of all the digitalized information captured by the input sensors. This implies a significant increase in the required I/O throughput and storage capacity.
Specific systems are generally used to meet this demand. However, they are typically expensive, require ad hoc hardware design and implementation, and require a long time and significant effort to implement and validate. Additionally, these systems are often too specific, lacking the modularity needed to accommodate future changes in system requirements.
The platform presented in this work is based on the following elements:
• Only COTS hardware components (no need for specialized hardware);
• A general-purpose operating system (no need for a specialized real-time operating system);
• High sustained I/O throughput (approximately 100 Mbytes/s);
• Lossless data storage (300 GBytes);
• An optimized and properly configured multithreaded software scheme, using the real-time services provided by the general-purpose operating system.
The demanding acquisition and storage rates needed have imposed challenging constraints on the system architecture. Its COTS hardware components have been pushed to the limits of their performance and capacity, in association with a multithreaded software architecture capable of exploiting the hardware's full potential.
The rest of the article is organized as follows. Section 2 discusses the background and the main related work. Section 3 addresses the system's functional description, including the hardware and software architectural schemes. Section 4 is related to evaluating the real-time data acquisition system, including some lessons learned and their application to high-volume real-time data acquisition systems. We will end by summarizing the main conclusions in Section 5.
Sentinel-5P [8], for instance, is a single-instrument spacecraft that carries a push-broom instrument with four hyperspectral channels to identify clouds, aerosols, and atmospheric components. Four spectrometers are included in this instrument, and the instrument uses a push-broom imaging mode to collect spectral data from the ultraviolet to the short-wave infrared with extremely high spatial resolution. This data can be used to determine the local composition of the atmosphere in terms of its gas constituents, clouds, and aerosols. In this fashion, troposphere variability can be studied by combining high spectral resolution, high spatial resolution, and daily global coverage.
The instrument's acquisition method involves photographing a strip of the Earth with a two-dimensional detector for one second as the satellite moves by about seven kilometers. Using a wide-angle telescope, it can measure a strip the size of 2600 km across the track and 7 km along the track. A new measurement is initiated after a one-second integration, creating a progressive scan of the Earth as the satellite travels. Ground pixels 7 km wide are resolved in the across-track direction and for various wavelengths using the two detector dimensions. The high resolution and small pixel size make it possible to measure natural and artificial sources and sinks of atmospheric gases, including greenhouse gas products.
The five main modules of the instrument are the telescope, relay optics, short-wave infrared spectrometer, instrument control unit, and radiant cooler. The telescope/UVN unit houses the telescope, as well as ultraviolet, visible, and near-infrared spectrometers.
The UVN module houses three spectrometers: UV, UVIS, and NIR. All three spectrometers work on the same concept, passing light from the entrance slit via a collimator lens and grating before each wavelength is detected at a specific location on the detector. The spectrally split picture of the slit is focused onto the 2D detector arrays using a series of camera fore-optics. Similar CCD (charge-coupled device) detector arrays with back-illuminated frame transfer technology and split readout registers are used in all three UVN spectrometers.
Capacitive trans-impedance amplifiers convert pixel charge to voltage, and then the signals are routed onto four parallel video output lines for transmission to the FEE. The detector's four video outputs create corresponding analog output signals read out by the FEE, where analog-to-digital conversion occurs. The FEE also controls and powers the sensor.
More recently, the Sentinel-5 UVNS mission intends to offer daily global coverage with an unprecedented spatial resolution of 7 × 7 km² at nadir, as compared to heritage instruments such as GOME-2 [13], SCIAMACHY [14] and OMI [15]. The high spatial resolution will enable more accurate detection of emission sources and provide an increased number of cloud-free ground pixels. The spatial resolution is much smaller than in previous missions, imposing demanding requirements on the FEE and the associated testing systems.
These testing systems [16][17][18] need to work under challenging conditions (high I/O throughput, simultaneous data acquisition, and storage, without loss or alteration of data). At the same time, their economic cost must be reduced.
Related Work
Classic real-time data acquisition systems such as [19], even when including expensive hardware components and integrating different real-time operating systems, were limited to a global I/O throughput performance of a few Mbytes/s. In [20], a real-time data acquisition and processing system for MHz repetition rate image sensors is presented. The system uses modern FPGA circuits to help in the efficient collection and processing of data. The solution is based on Xilinx 7-Series FPGA circuits and implements a custom latency-optimized architecture utilizing the AXI4 family of interfaces. Theoretical rates of 3.125 Gb/s were expected during the tests; however, the card-to-host DMA engine imposed the actual performance limit, requiring around 29 ms to complete each transfer (5.12 MB ÷ 29 ms = 176.6 MB/s). The low DMA throughput was caused by the lack of circular buffer support in ChimeraTK (a PCIe device driver library providing device read/write functionality).
The research work in [21] introduces the design of data acquisition software for a Linear Energy Transfer spectrometer. The software consists of three modules to read out and preliminarily analyze the data: the readout and control module, the data real-time imaging module, and the offline data analysis module. Raw data received by the upper computer are read into the random access memory (RAM) and then stored on the hard disk. Data receiving and processing are carried out simultaneously, being executed in different computer processes. The maximum rate of the data generated by the electronics is below 6 Mbit/s, and there are three layers of electronics. As a result, the data-receiving rate of the upper computer will not exceed 18 Mbit/s. Low data rates make it possible to process data in real time.
In [22], a configuration and control framework for real-time data acquisition systems is presented. It is based on multi-processor system-on-chip (MPSoC), allowing for a customized partitioning of the given workload into software and hardware. Their development primarily targets the application domains concerning the readout of superconducting sensors and quantum bits. Both fields need highly-customized FPGA processing due to performance and latency requirements. The measured data throughput with plain byte arrays results in 117 Mbytes/s speeds.
The system described in [23] allows the measurement of signals with a high degree of resolution. Even with real-time data transmission, the low power consumption permits the systems to run for months only using batteries, allowing installation in difficult access areas. Low power consumption increases autonomous operation and minimizes battery shifting. The low-cost design allows a high-density sampling network, fast maintenance, and substitution. Data rates are in the range of hundreds of Kbytes/s.
In summary, none of the systems considered above integrate all the characteristics needed to acquire and store digitized information at sustained rates in the boundaries of 100 Mbytes/s, with the modularity, flexibility, and low economic cost provided by a hardware/software architecture that integrates COTS components and a multithreaded software application using the services of a general-purpose operating system.
System Requirements
The set of initial requirements for a data acquisition system devoted to testing spectrometers and FEEs of aerospace missions similar to [8,11] is summarized below:
1. Real-time acquisition and storage of large amounts of information, with timing constraints on the communication with the spectrometer that carries the imaging sensors;
2. Sustained input throughput of 94 Mbytes/s;
3. Storage capacity for 100,000 frames (1 Mpx size per frame, 300 GBytes approximately);
4. No data shall be lost or altered;
5. Low economic cost.
These demanding requirements impose a need to base the system design on heterogeneous hardware/software architectural schemes. Components selected considering their limits regarding I/O throughput, storage capacity, computing performance, and economic cost have been integrated into the final architecture.
Functional Description
The real-time data acquisition and storage system (RTDAS) presented in this work is depicted in Figure 1.
The front-end electronics (FEE) is responsible for commanding and powering the imaging spectrometer, as well as receiving four different analog output signals provided by the imaging spectrometer (corresponding to four separate detector amplifiers). Inside the FEE, four 14-bit analog-to-digital converters complete digitization, from which science and housekeeping data are relayed to the RTDAS.
Due to a lack of I/O communication pins in the FEE, the data are issued by the FEE in serial format through the channel link. The FEE to RTDAS channel link protocol is based on the emission of 21-bit self-contained words, which have the following formatting: Bit 20: Data Valid. This bit is set to '1' when the content of the rest of the word is valid. Otherwise, this bit shall be set to '0', and all bits 19-0 shall be set to zero.
Bit 19: Science/Sync. If this bit is set to '1', the payload data represents the science data and originating detector output port, plus a parity bit. If this bit is zero, the payload data represents synchronization data.
The FEE shall send only the following valid words:
• During frame transfer: one sync word with the bit "Frame Synchronization" set to '1'.
• During each line transfer during readout: one sync word with the bit "Line Synchronization" set to '1'.
• At each pixel digitization, the FEE shall send the four pixels in the following order: 1st amplifier, 2nd amplifier, 3rd amplifier, 4th amplifier. For a given amplifier, the data sent to the RTDAS shall follow the order they were read out without buffering or re-ordering.
The FEE can send these 21-bit words at different rates, being a nominal clock equal to 23.5 MHz and a maximum clock rate of 35 MHz. The acquisition system must acquire all the words with bit 20 other than '0' and discard the rest.
The 23.5 MHz nominal clock frequency involves the acquisition of 1 pixel (21 bits of information, packed as a 32-bit word) nominally every 42.55 ns (28.55 ns at the maximum clock rate). Consequently, the global I/O throughput demanded from the RTDAS system is nominally 23.5 × 4 = 94 MBytes/s (140 MBytes/s at the maximum clock rate).
The requirement for the number of images to be stored determines the storage capacity needed by the RTDAS system. The storage of 100,000 images with a size of 1024 × 1024 pixels requires over 300 GBytes (only 24 bits per pixel are stored), and no data compression is allowed because the information must be stored without any distortion.
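A quick sanity check of these throughput and storage figures, as a minimal sketch (the constants simply restate the numbers given above):

```cpp
#include <cstdint>
#include <iostream>

int main() {
    // Acquisition side: one 32-bit word per pixel at the nominal pixel clock.
    const double pixelClockHz  = 23.5e6;            // nominal FEE word rate
    const double bytesAcquired = 4.0;               // 21-bit word packed as 32 bits
    std::cout << "Input throughput: "
              << pixelClockHz * bytesAcquired / 1e6 << " MBytes/s\n";          // ~94

    // Storage side: 100,000 frames of 1024 x 1024 pixels, 3 bytes stored per pixel.
    const std::uint64_t frames      = 100000;
    const std::uint64_t pixels      = 1024ULL * 1024ULL;
    const std::uint64_t bytesStored = 3;
    std::cout << "Storage needed:   "
              << frames * pixels * bytesStored / 1e9 << " GBytes (approx.)\n"; // ~315
}
```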
Each 21-bit data word is stored using 24 bits, and consequently, the three most significant bits of each stored word do not carry meaningful information. For example, Figure 2 presents a Frame Synchronization word (three bytes with values 0x72 0x00 0x00 in hexadecimal format). In Figure 3, we can observe the first bytes of a stored frame. The first word is a Frame Synchronization (0x72 0x00 0x00), followed by a Line Synchronization word (0x71 0x00 0x00), and then several science data words. The first science data word (0x78 0x1B 0xD8) can be decoded as follows: Data Valid = 1, Science/Sync = 1, Parity = 0, Detector output amplifier = 00 (first detector), Science data = 0x1BD8 (16-bit pixel value). Figure 3. Frame stored using a 3-byte storage scheme per 21-bit data word.
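As an illustration, the sketch below decodes a stored 3-byte word under the bit layout described above; the exact positions of the parity and detector-amplifier bits are not spelled out in the text, so the layout used here is an assumption chosen to be consistent with the 0x78 0x1B 0xD8 worked example.

```cpp
#include <cstdint>
#include <iostream>

// Decoded fields of one 21-bit FEE word stored as 3 bytes (low 21 bits used).
struct FeeWord {
    bool dataValid;        // bit 20
    bool science;          // bit 19: 1 = science data, 0 = synchronization word
    bool parity;           // assumed to be bit 18
    unsigned amplifier;    // assumed to be bits 17-16 (detector output port, 0..3)
    std::uint16_t pixel;   // bits 15-0: 16-bit science value
};

FeeWord decode(std::uint8_t b2, std::uint8_t b1, std::uint8_t b0) {
    // Reassemble the 24-bit stored value; the top 3 bits carry no information.
    std::uint32_t w = (std::uint32_t(b2) << 16) | (std::uint32_t(b1) << 8) | b0;
    FeeWord f;
    f.dataValid = (w >> 20) & 0x1;
    f.science   = (w >> 19) & 0x1;
    f.parity    = (w >> 18) & 0x1;
    f.amplifier = (w >> 16) & 0x3;
    f.pixel     = static_cast<std::uint16_t>(w & 0xFFFF);
    return f;
}

int main() {
    // Worked example from the text: first science word of a stored frame.
    FeeWord f = decode(0x78, 0x1B, 0xD8);
    std::cout << "valid=" << f.dataValid << " science=" << f.science
              << " parity=" << f.parity << " amplifier=" << f.amplifier
              << " pixel=0x" << std::hex << f.pixel << "\n";
    // Expected: valid=1 science=1 parity=0 amplifier=0 pixel=0x1bd8
}
```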
Using this storage scheme (3 bytes stored per 21-bit word), the size of a 1 Mpx frame would be equal to 3,148,803 bytes (1 Frame Synchronization word, 1024 Line Synchronization words and 1024 × 1024 pixel words). A sample 1 Mpx image is shown in Figure 4, where we can observe the arrangement of the four different detector amplifiers in four equal-size columns.
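The frame size quoted above follows directly from these word counts; a minimal check:

```cpp
#include <cstdint>
#include <iostream>

int main() {
    const std::uint64_t bytesPerWord = 3;                // 21-bit word stored as 3 bytes
    const std::uint64_t wordsPerFrame =
        1 +                    // Frame Synchronization word
        1024 +                 // one Line Synchronization word per line
        1024ULL * 1024ULL;     // one science word per pixel
    std::cout << wordsPerFrame * bytesPerWord << " bytes per frame\n";   // 3,148,803
}
```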
Deserializer Board
Due to a lack of I/O communication pins in the FEE, the FEE sends the information in serial format through the channel link. When received by the RTDAS, these data are converted to a 21-bit parallel format on a deserializer board. This activity is carried out transparently, being only necessary to control an extra bit to enable data arrival.
The deserializer board used in the RTDAS is the Texas Instruments DS90CR218A [24]. Its receiver deserializes three input LVDS data streams into 21 CMOS/TTL output bits. When operating at the maximum input clock rate of 85 MHz, the LVDS data are received at 595 Mbps per data channel for a total data throughput of 1.785 Gbit/s (233 Mbytes/s). A logical scheme of the deserializer board is shown in Figure 5.
Acquisition Board
The acquisition board is in charge of acquiring the digitalized information. This digital information is buffered in the acquisition board memory.
The acquisition board used in the RTDAS is ADLINK's PCIe-7360 [25]. The PCIe-7360 is a high-speed digital I/O board with 32-channel bi-directional parallel I/O lines. Data rates of up to 400 MB/s are available through the ×4 PCI Express interfaces, with clock rates of up to 100 MHz internal clock, ideally suited for high-speed and large-scale digital data acquisition or exchange applications, such as digital image capture. It features 32 channels at up to 100 MHz for digital input, with 400 MB/s maximum throughput.
Thirty-two-channel high-speed digital I/O lines are bi-directional and divided into four groups. Each group contains eight channels and can be configured individually as input or output ports. Data mapping for 32-bit data width is shown in Figure 6. Digital I/O data transfer between the PCIe-7360 and the PC's system memory is through bus-mastering DMA, controlled by the PCIe IP Core (see Figure 7). Bus-mastering DMA provides the fastest data transfer rate on the PCI/PCIe bus. Once the analog/digital input operation starts, the control is returned to the program. The hardware temporarily stores the acquired data in the onboard Data FIFO and then transfers the data to a user-defined DMA buffer memory in the computer. Data can be transmitted continuously to computer memory (continuous operation).
Thirty-two-channel high-speed digital I/O lines are bi-directional and divided into four groups. Each group contains eight channels and can be configured individually as input or output ports. Data mapping for 32-bit data width is shown in Figure 6. Digital I/O data transfer between PCIe-7360 and PC s system memory is through busmastering DMA, controlled by PCIe IP Core (see Figure 7). Bus-mastering DMA provides the fastest data transfer rate on the PCI/PCIe bus. Once the analog/digital input operation starts, the control is returned to the program. The hardware temporarily stores the acquired data in the onboard Data FIFO and then transfers the data to a user-defined DMA buffer memory in the computer. Data can be transmitted continuously to computer memory (continuous operation). For the operation of digital pattern acquisition in continuous mode or burst handshake mode, the PCIe-7360 card can acquire digital data from external devices at a specific sampling rate (digital input sample clock). The PCIe-7360 can internally generate the sample clock signal for digital data acquisition, with an internal base clock source of 100 MHz.
Personal Computer
The primary duties assigned to the PC are transferring the data frames from the acquisition board memory buffers to the PC's local memory and collecting data frames in storage devices.
The PC receives the information from the acquisition board memory and is temporally constrained in the sense that memory frames have to be read from the buffers located in the acquisition board memory before they fill up completely, i.e., at a higher rate than the filling rate imposed by the sensor and FEE.
The acquisition board manufacturer provides advanced 32/64-bit kernel drivers (DASK) for customized data acquisition application development, enabling complex operations and improved performance and reliability from the data acquisition system. The DASK kernel drivers support Windows OS. The board's memory has a capacity of 2 MBytes, and it can be configured as a different number of buffers, ranging from 2 (double-buffer architecture) to 16. Using the services of the acquisition board SW driver, the information acquired can be read into the address space of the PC, where it is buffered in local memory and stored in a high-capacity RAID device (and can be visualized offline using a graphical viewer application).
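The sketch below illustrates the ring-buffer reading discipline this implies: the consumer must always claim the next board buffer before the hardware wraps around to it. The driver calls are simulated stand-ins, not the actual DASK API, and the buffer size and fill pattern are invented for the example.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

// Hypothetical stand-ins for the vendor driver calls (not the real DASK API):
// in the real system these would block on DMA completion for the next buffer
// in the board's ring and hand it back afterwards. Here they are simulated.
namespace sim {
    constexpr std::size_t kBufferSize = 64 * 1024;         // one board buffer
    std::array<std::uint8_t, kBufferSize> buffer{};
    const std::uint8_t* wait_buffer_ready(std::size_t* bytes) {
        buffer.fill(0xA5);                                  // pretend DMA filled it
        *bytes = buffer.size();
        return buffer.data();
    }
    void release_buffer(const std::uint8_t*) { /* buffer may now be refilled */ }
}

// Drain the board's ring of buffers into host memory. Reading must keep ahead
// of the hardware filling rate (94 MBytes/s nominal); otherwise a buffer would
// be overwritten before it is consumed and data would be lost.
void drainRing(std::vector<std::uint8_t>& hostStore, std::size_t bytesExpected) {
    while (hostStore.size() < bytesExpected) {
        std::size_t n = 0;
        const std::uint8_t* buf = sim::wait_buffer_ready(&n);
        hostStore.insert(hostStore.end(), buf, buf + n);
        sim::release_buffer(buf);
    }
}

int main() {
    std::vector<std::uint8_t> host;
    drainRing(host, 1 << 20);                               // read 1 MByte for the demo
    std::cout << "Read " << host.size() << " bytes\n";
}
```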
The most important feature required of the storage subsystem is a high sustained I/O throughput, so that it does not constitute a global bottleneck. In addition, a large capacity is needed. After considering the utilization of solid-state disks, a RAID disk array was selected as the most suitable storage device, mainly due to the high sustained data transfer rate and capacity featured. The specific RAID of the final architecture performs at over 100 MBytes/s in a sustainable manner, and several hundred GBytes of data can be stored in the array. The characteristics of the storage devices that compose the RAID can be seen in [26].
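As an illustration of how such a sustained-throughput figure can be checked, the short program below streams fixed-size chunks to a test file and times the transfer. It is a minimal sketch only: the target path, chunk size, and total volume are placeholder choices and not values taken from the RTDAS, and operating-system write caching makes a short run optimistic compared with a truly sustained workload.

```cpp
// Sustained sequential write benchmark (illustrative sketch only).
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    const char* path = "raid_test.bin";              // placeholder path on the RAID volume
    const std::size_t chunkBytes = 4 * 1024 * 1024;  // 4 MiB per write
    const std::size_t numChunks  = 512;              // 2 GiB total

    std::vector<char> buffer(chunkBytes, 0x5A);      // dummy payload
    std::FILE* f = std::fopen(path, "wb");
    if (!f) { std::perror("fopen"); return 1; }

    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < numChunks; ++i) {
        if (std::fwrite(buffer.data(), 1, chunkBytes, f) != chunkBytes) {
            std::perror("fwrite");
            return 1;
        }
    }
    std::fflush(f);
    std::fclose(f);
    auto t1 = std::chrono::steady_clock::now();

    double seconds = std::chrono::duration<double>(t1 - t0).count();
    double mbytes  = static_cast<double>(chunkBytes) * numChunks / 1.0e6;
    std::printf("wrote %.0f MB in %.2f s -> %.1f MB/s sustained\n",
                mbytes, seconds, mbytes / seconds);
    return 0;
}
```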
Software Architecture
The software architecture of the RTDAS has been defined on top of a general-purpose operating system (Windows platform). It comprises several threads which use the real-time resources provided by Windows extensions: automatic memory sharing, parallel execution, real-time scheduling, asynchronous event notification, high-resolution timers, synchronization elements, and multiprocessing. The data acquisition application addressed is well suited to a multithreaded scheme, since the system's performance regarding time response and I/O throughput is thereby highly improved.
The real-time multithreaded process developed for the application, responsible for receiving and storing all the information acquired, is memory resident (virtual address space memory locked) during execution. It is scheduled using a real-time fixed-priority policy.
The main process creates three threads before starting real-time execution: dataReader, diskWriter, and fileManager. During regular operation, these threads run concurrently, sharing data and cooperating to respond adequately to the system's real-time and I/O throughput demands in a producer-consumer scenario.
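The fragment below sketches one possible shape of this producer-consumer arrangement using standard C++ threads. It is an assumption-laden outline, not the actual RTDAS sources: the queue payload, the stubbed thread bodies, and the suggested Windows priority call are illustrative choices.

```cpp
// Minimal producer-consumer skeleton (illustrative; not the actual RTDAS code).
#include <condition_variable>
#include <cstddef>
#include <deque>
#include <mutex>
#include <thread>

struct FrameChunk { std::size_t offset; std::size_t length; };  // region of the local circular buffer

std::deque<FrameChunk>  queue_;       // messages from dataReader to diskWriter
std::mutex              queueMutex_;
std::condition_variable queueCv_;
bool                    done_ = false;

void dataReader() {                   // producer: drains the acquisition-board buffers
    // ... poll the board ring, parse words, then:
    // { std::lock_guard<std::mutex> lk(queueMutex_); queue_.push_back(chunk); }
    // queueCv_.notify_one();
}

void diskWriter() {                   // consumer: streams chunks to the pre-opened files
    for (;;) {
        std::unique_lock<std::mutex> lk(queueMutex_);
        queueCv_.wait(lk, [] { return done_ || !queue_.empty(); });
        if (queue_.empty()) return;   // done_ set and nothing left to write
        FrameChunk chunk = queue_.front();
        queue_.pop_front();
        lk.unlock();
        // write chunk.length bytes starting at chunk.offset to the current output file ...
        (void)chunk;
    }
}

void fileManager() { /* rotate/close filled files and prepare replacements */ }

int main() {
    std::thread tReader(dataReader), tWriter(diskWriter), tFiles(fileManager);
    // On Windows, SetThreadPriority(tReader.native_handle(), THREAD_PRIORITY_TIME_CRITICAL)
    // would raise the producer to the TIME_CRITICAL level described in the text.
    tReader.join();
    { std::lock_guard<std::mutex> lk(queueMutex_); done_ = true; }
    queueCv_.notify_all();
    tWriter.join();
    tFiles.join();
    return 0;
}
```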
The specific functions of the real-time software structure are detailed below, divided into three different stages.

Stage 1: Initialization and preparation for the next stage. In this phase, there is only one thread of execution, and real-time requirements are not present. The main functions are to carry out the memory locking of the process virtual address space, initialization of synchronization elements, allocation of memory for buffers, initialization of communication with the acquisition board through the SW driver, and file opening.

The different real-time threads which constitute the core of the RTDAS software architecture are listed and described below in real-time priority order (from highest to lowest). A diagram showing the interaction among the different threads can be found in Figure 8.
• Thread dataReader (top priority: TIME_CRITICAL). This thread checks whether the next buffer in the ring is filled with data and, consequently, must be read to avoid data loss. Whenever a buffer is ready to be read, it performs the following actions to process the data (see the sketch below):
1. Reads every 32-bit word in the buffer. For each word in which the DATA_VALID bit is set:
   a. Copies the three least significant bytes of the 32-bit word to a local array (treated as a large circular buffer with a size of 1.8 GBytes).
   b. If the 32-bit word is a FRAME_SYNC, it sends a message to diskWriter, because the information buffered in the local array up to that moment is enough to be written to the storage device efficiently.
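A possible shape of that per-word processing is sketched below. The 1.8 GByte buffer size comes from the text; the specific bit positions chosen for DATA_VALID and FRAME_SYNC, and the byte ordering of the copied payload, are assumptions made purely for illustration.

```cpp
// Sketch of the dataReader word-processing loop (bit positions are assumptions).
#include <cstdint>
#include <cstddef>
#include <vector>

constexpr std::uint32_t DATA_VALID_MASK = 0x80000000u;  // assumed: bit 31 flags a valid sample
constexpr std::uint32_t FRAME_SYNC_MASK = 0x40000000u;  // assumed: bit 30 marks a frame boundary
constexpr std::size_t   LOCAL_BUF_BYTES = 1800u * 1024u * 1024u;  // ~1.8 GBytes, as in the text

std::vector<std::uint8_t> localBuf(LOCAL_BUF_BYTES);
std::size_t writePos = 0;                                // circular write index

// Called by dataReader for every filled acquisition-board buffer.
void processBoardBuffer(const std::uint32_t* words, std::size_t count) {
    for (std::size_t i = 0; i < count; ++i) {
        const std::uint32_t w = words[i];
        if (!(w & DATA_VALID_MASK))
            continue;                                    // skip words without valid data

        // Copy the three least significant bytes into the circular buffer.
        for (int b = 0; b < 3; ++b) {
            localBuf[writePos] = static_cast<std::uint8_t>(w >> (8 * b));
            writePos = (writePos + 1) % LOCAL_BUF_BYTES;
        }

        if (w & FRAME_SYNC_MASK) {
            // Frame complete: hand the buffered region to diskWriter,
            // e.g., push a FrameChunk message and notify, as in the previous sketch.
        }
    }
}
```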
Evaluation and Results
The real-time data acquisition hardware/software architecture described in the previous section aims to integrate diverse COTS elements, exploiting their full performance and capacity, to obtain a high-performance, cost-effective data acquisition system.
The modular design of the system provides enhanced flexibility and scalability. The heterogeneous components of the architecture can be replaced individually, so the system's global bottleneck can be pushed back in step with future technological innovation in real-time software architectures, high-performance buses, storage devices, or high-speed links.
A significant concern of this type of real-time application is the correct identification of the potential technological bottlenecks that may limit the system's global I/O throughput. According to the components selected for the RTDAS system, the following theoretical hardware limits have been detected:
• Acquisition board: theoretical 400 MBytes/s data transfer rates can be achieved at full transmission speed, using a 32-bit data width and a 100 MHz clock rate. The interface with the personal computer is PCI Express ×4 (4 GB/s bandwidth, far above the board data throughput limit). The probability of data transmission failure is negligible.

Therefore, the theoretical hardware bottleneck of the RTDAS architecture is 193 MBytes/s. In order for the system to meet the initial specifications (94 MBytes/s input throughput, simultaneous storage of 100,000 frames of 1 Mpx, no data lost or altered) and to come as close as possible to its theoretical performance limits, some important software design decisions were taken. These options can be outlined as follows:
1. Configuration of the number of buffers in the acquisition board: several configurations of buffers were tested; the best performance was obtained when two buffers were selected (the possible range being from two to sixteen buffers). It is worth noting that the total amount of buffer memory provided by the board drivers is approximately 2 MBytes, regardless of the number of buffers selected.
2. Multithreaded software scheme: parallel thread execution improves performance, being particularly suited for data-intensive applications that are I/O-limited.
3. Real-time features provided by the operating system: the maximum size of each acquisition board buffer is 778,240 bytes, which makes it fill up every eight milliseconds at a nominal clock speed of 23.5 MHz (the arithmetic behind this timing is sketched after this list). This is a tight time for a multitasking time-sharing OS like Windows. Consequently, threads have to be scheduled using a fixed priority scheme.
4. Opening of files: as the maximum number of open files is 512 in Windows OS, it was decided to open 500 files prior to the beginning of data acquisition. In this fashion, during data acquisition, a new file is opened only when a previous file is filled with data and closed.
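The timing figure quoted in item 3 can be checked with a few lines of arithmetic. The sketch below simply assumes 32-bit samples at the nominal 23.5 MHz clock and the stated 778,240-byte buffer size, both taken from the text.

```cpp
// Back-of-the-envelope check of the buffer-fill time quoted in the text.
#include <cstdio>

int main() {
    constexpr double clockHz      = 23.5e6;    // nominal digital input clock
    constexpr double bytesPerWord = 4.0;       // 32-bit data width
    constexpr double bufferBytes  = 778240.0;  // maximum acquisition-board buffer size

    constexpr double inputRate  = clockHz * bytesPerWord;        // ~94 MBytes/s
    constexpr double fillTimeMs = 1e3 * bufferBytes / inputRate;  // ~8.3 ms per buffer

    std::printf("input rate : %.1f MB/s\n", inputRate / 1e6);
    std::printf("fill time  : %.2f ms\n", fillTimeMs);
    return 0;
}
```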
Finally, after successfully implementing all of the above alternatives, the actual system performance obtained in extensive laboratory testing is summarized in Table 1. Visualization of a sequence of different images containing test data can be seen in the following video: https://www.loom.com/share/af27824dd09a46e4aecf4001b5790135 (accessed on 17 April 2023).
Conclusions
In this article, we have described and evaluated a real-time hardware/software architecture implemented to build a data acquisition system that must perform under severe time constraints and high throughput requirements.
The architecture is heterogeneous and scalable. It includes a data acquisition board and a multithreaded software architecture capable of acquiring and storing data simultaneously at a high throughput (94 MBytes/s), preserving 100% integrity of 300 GBytes of stored data.
The correct combination of COTS components with an optimized and properly configured multithreaded software application running on top of a general-purpose operating system has made it possible to satisfy the requirements for determinism and for processing and storing large amounts of information in real time, while keeping the economic cost of the system low.
Finally, we have also reported the evaluation results of the data acquisition system. We have pointed out the lessons learned when experimentally identifying bottlenecks and dealing with the real-time behavior of the hardware/software architecture. The lessons learned during the analysis and resolution of the problems can surely help future aerospace and embedded software engineering projects.
The system's modularity and low economic cost will provide future enhanced overall performance and behavior in parallel with technological advances in the different components of the architecture.
|
2023-04-30T15:20:08.051Z
|
2023-04-28T00:00:00.000
|
{
"year": 2023,
"sha1": "fa1dc747b22edb3900e097e206088deb07a03ca4",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/s23094349",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f638c6eacec62886d8f2a2670b0c35504b902032",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
}
|
51884950
|
pes2o/s2orc
|
v3-fos-license
|
The chemically homogeneous evolutionary channel for binary black hole mergers: rates and properties of gravitational-wave events detectable by advanced LIGO
We explore the predictions for detectable gravitational-wave signals from merging binary black holes formed through chemically homogeneous evolution in massive short-period stellar binaries. We find that $\sim 500$ events per year could be detected with advanced ground-based detectors operating at full sensitivity. We analyze the distribution of detectable events, and conclude that there is a very strong preference for detecting events with nearly equal components (mass ratio $>0.66$ at 90\% confidence in our default model) and high masses (total source-frame mass between $57$ and $103\, M_\odot$ at 90\% confidence). We consider multiple alternative variations to analyze the sensitivity to uncertainties in the evolutionary physics and cosmological parameters, and conclude that while the rates are sensitive to assumed variations, the mass distributions are robust predictions. Finally, we consider the recently reported results of the analysis of the first 16 double-coincident days of the O1 LIGO (Laser Interferometer Gravitational-wave Observatory) observing run, and find that this formation channel is fully consistent with the inferred parameters of the GW150914 binary black hole detection and the inferred merger rate.
INTRODUCTION
The detection of the gravitational-wave signal GW150914 on September 14, 2015 from the inspiral and merger of two black holes with masses around 30 $M_\odot$ by the Laser Interferometer Gravitational-wave Observatory (LIGO) has provided the first robust evidence that black holes with such masses exist, that they can form in binary pairs, and that they coalesce at an inferred local rate of 2-400 Gpc$^{-3}$ yr$^{-1}$ (Abbott et al. 2016a,b,c).
Predictions for the rate of binary black hole mergers varied widely due to the lack of direct observational evidence (Abadie et al. 2010). Empirical estimates are available for the merger rates of binary neutron stars, based on the observed populations of double neutron stars (e.g. Phinney 1991;Narayan et al. 1991;Kim et al. 2003;O'Shaughnessy & Kim 2010). In contrast, for double black hole mergers the population of direct progenitors is not accessible and the rate prediction fully relied on the predictions of stellar and binary evolutionary models integrated into population synthesis simulations. Several groups predicted that the gravitational-wave signals of binary black hole mergers would potentially dominate LIGO observations (e.g. Lipunov et al. 1997;Voss & Tauris 2003;Belczynski et al. 2010;Dominik et al. 2015), but these analyses also demonstrated the significant uncertainties in these predictions (e.g. Dominik et al. 2012;de Mink & Belczynski 2015).
The detection of GW150914 within the first 16 days of the advanced LIGO O1 observing run with two detectors operating in coincidence has provided the first stringent empirical constraints of the binary black hole merger rate. Assuming the results are representative, this implies the possibility of hundreds of detections per year as the detectors reach full design sensitivity and the duration of the runs with both detectors online increases (Abbott et al. 2016g,c,b, 2010). This will make it possible to constrain the formation channels for what is by far the most intriguing outcome of massive binary evolution, the coalescence of two gravitational singularities (e.g., Bulik & Belczyński 2003; Mandel & O'Shaughnessy 2010; Stevenson et al. 2015; Mandel et al. 2015).
Different channels have been proposed for the formation of double black hole binaries that can coalesce within a Hubble time. These include: (i) dynamical formation, which requires a dense star cluster (e.g., Sigurdsson & Hernquist 1993;Portegies Zwart & McMillan 2000;Miller & Lauburg 2009;Rodriguez et al. 2015;Antonini et al. 2016).
(ii) classical isolated binary evolution through highly non-conservative mass transfer or common envelope ejection (e.g., Tutukov & Yungelson 1973; Kalogera et al. 2007; Belczynski et al. 2016); and (iii) chemically homogeneous evolution in tidally distorted binary stars, i.e. massive stars in (near) contact binaries that experience strong internal mixing, as proposed by de Mink et al. (2008, 2009) and further explored in the context of the formation of binary black holes by Mandel & de Mink (2016) and Marchant et al. (2016).
The third channel, the topic of this paper, originates from very close binary systems that are in (near) contact at the onset of hydrogen burning. In such systems, the deformation by tides of the component stars triggers instabilities in the stellar interior that can, in principle, drive large-scale Eddington-Sweet circulations (Endal & Sofia 1978;Zahn 1992). This allows mixing of nuclear burning products produced in the center throughout the stellar envelope. Originally these processes have been considered in the case of rotating single stars and have been proposed as an explanation for surface abundance anomalies such as nitrogen enhancements (e.g., Maeder & Meynet 2000, see however Brott et al. 2011).
If the large-scale circulations are efficient enough, they will lead to a gradual enrichment of the stellar envelope with helium. This prevents the buildup of a chemical gradient between core and envelope that characterizes non rotating stars in the classical evolutionary models. This mode of evolution is referred to as "chemically homogeneous evolution", originally proposed for rotating single stars by Maeder (1987). The stars are well approximated by the classical homology relations, i.e., the approximate analytic scaling solutions for the stellar structure equations which assume a uniform chemical composition. They stay compact during their main sequence evolution as they slowly evolve towards the helium main sequence. This mode of evolution gained renewed attention in the context of the formation of the progenitors of long gamma-ray bursts from rapidly rotating single stars (Yoon & Langer 2005;Woosley & Heger 2006). Solid evidence is missing, but observations have provided several clues in favor of the existence of chemically homogeneously evolving stars, based on individual objects (Martins et al. 2013;Almeida et al. 2015) as well as the properties of unresolved populations (Eldridge & Stanway 2012;Stanway et al. 2014;Szécsi et al. 2015), as discussed by Mandel & de Mink (2016, see Section 2.4).
Here we discuss the evolutionary channel proposed by de Mink et al. (2009), who argued that the conditions for chemically homogeneous evolution can, in principle, be achieved in very close massive binary systems. The classical evolutionary models predict that near contact binaries with orbital periods less than about 2 days will merge even before or soon after the completion of hydrogen burning due to the expansion of the stars (Nelson & Eggleton 2001; de Mink et al. 2007). On the other hand, models that account for enhanced mixing allow for the possibility that the two stars shrink and remain within their Roche lobes. This evolutionary channel has been explored with three different 1D evolutionary codes (de Mink et al. 2009; Song et al. 2016; Marchant et al. 2016). All three studies report the existence of a window in the initial binary parameter space for this type of evolution when accounting for mixing induced by rotation and angular momentum transport by magnetic fields. The latter group even explores the evolution of over contact systems. Examples of observed binary systems that have been proposed to undergo (partial) chemically homogeneous evolution are VFTS 352 (Almeida et al. 2015) and HD 5980 (Koenigsberger et al. 2014).
This channel naturally produces rather massive binary black holes as the stars process a larger fraction of their initial mass by nuclear fusion. The allowed initial binary parameter space further favors producing binary black holes with comparable masses. The black holes thus formed already reside in a close orbit, so that most of them coalesce within a Hubble time as the orbit decays due to gravitational wave radiation. The limiting factor comes from the stellar wind mass loss, which affects the final masses as well as the final orbital separation, and can potentially inhibit chemically homogeneous evolution if the binary expands to the point that the stars significantly spin down. The reduction of stellar wind mass loss at low metallicity (Vink et al. 2001; Mokiem et al. 2007) leads to a preference for the progenitors to form at higher redshift or in dwarf galaxies.
The Monte Carlo simulations by Mandel & de Mink (2016) of the cosmological merger rate through this channel imply delay times of 3-11 Gyr, a preference for comparable mass ratios q > 0.75, and typical total masses near 50-110 $M_\odot$ in the default model considered there. These simulations predict a local z = 0 merger rate of 10 Gpc$^{-3}$ yr$^{-1}$, peaking at a redshift of 0.5 at twice the local rate, implying that this can potentially be the dominant channel for binary black hole mergers in this mass regime. The detailed 1D evolutionary models by Marchant et al. (2016) account for over contact systems, which produce mass ratios closer to unity, higher total masses, a larger range of delay times and somewhat lower rates due to the stronger preference for low metallicity.
The aim of this paper, which is a companion paper to Mandel & de Mink (2016), is to provide the expected detection rates as well as the distributions of masses, mass ratios and chirp masses of detectable events that form through this channel. We provide estimates for the anticipated final design sensitivity, as well as for the lower sensitivity achieved during the 16 day portion of the O1 run which led to the detection of "The Event" GW150914. We compare the parameters inferred for GW150914 with the estimates and conclude they are fully consistent. We discuss the impact of variations of the model assumptions and show that, even though the rates are substantially uncertain, the preference for high masses and mass ratios similar to GW150914 is a robust prediction of this channel.
We make the full output of our simulations available online for the community at http://www.sr.bham.ac.uk/~imandel/CaseM, in order to allow further comparisons with current and future data and with simulations of other channels.
MODEL ASSUMPTIONS
Our simulations of massive close binary populations over cosmic time are described in Mandel & de Mink (2016, hereafter MdM16), to which we refer for a full description. Here, we summarize the key assumptions.
Progenitor evolution
We perform a Monte Carlo simulation drawing the initial parameters of massive binary systems from a Kroupa initial mass function (IMF) for the primary star (Kroupa & Weidner 2003), a flat mass ratio distribution (e.g. Sana et al. 2012;Kobulnicky et al. 2014) and a distribution of orbital periods appropriate for O-type stars (Sana et al. 2012) as detailed in MdM16. We follow the evolution of the systems, parametrizing our assumptions as described below. We check if the stars fit within their Roche lobes at zero age using the radii of zero age main sequence stars based on models computed with Eggleton's evolutionary code (Pols et al. 1995;Glebbeek et al. 2008). We assume that the stellar spin is synchronized with the orbit and the orbits are circular, which is appropriate for the short period systems of interest (Zahn 1989). Using the spin frequency and stellar radius we compute the fraction of the Keplerian rotation rate, which we compare with the threshold for chemically homogeneous evolution in the detailed models by Yoon et al. (2006). These are 1D hydrodynamical evolutionary models that solve the stellar structure equations accounting for the effect of the centrifugal acceleration and rotationally driven instabilities (Endal & Sofia 1976;Heger et al. 2000), which lead to the transport of chemical elements and angular momentum. These models further account for internal magnetic fields (Spruit 2002). For the threshold for chemically homogeneous evolution we use the expression given in Section 4.3 of MdM16. Following Yoon et al. (2006), we adopt a maximum metallicity threshold of Z = 0.004.
If the system fulfills the criteria for chemically homogeneous evolution, we follow the evolution by accounting for the effects of mass and angular momentum loss driven by stellar winds and the final supernova explosions via the simple parametrized approach described in MdM16. We account for the effect of mass loss on the orbital separation and the masses of the final remnants. We only consider systems in which both stars fulfill the threshold for chemically homogeneous evolution throughout their main-sequence evolution. We consider the possibility that the most massive helium stars lead to pair-instability supernovae leaving no remnant, by adopting an upper limit of 63 $M_\odot$ (Heger & Woosley 2002) for the final, pre-explosion mass of the star. We do not consider systems that are more massive than the pair instability regime, in contrast to Marchant et al. (2016), given the lack of constraints on the progenitors of systems in this mass range. We further conservatively exclude any additional contribution from systems that evolve through an over-contact phase.
Given the high orbital velocities in the massive close systems under consideration, we ignore the effect of possible natal kicks accompanying collapse to black holes. We account for the decay of the orbit by energy and angular momentum loss due to the emission of gravitational waves as in Peters (1964).
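For reference, the coalescence time of a circular binary under gravitational-wave emission follows directly from Peters (1964); in the notation used here,
$$ t_{\rm merge} = \frac{5}{256}\,\frac{c^5 a^4}{G^3\, m_1 m_2\,(m_1+m_2)}, $$
where $a$ is the orbital separation at the time the double black hole forms.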
Our approach differs from the complementary work of Marchant et al. (2016), who explicitly follow the full evolution with a stellar evolutionary code. However, given the large uncertainties in evolutionary models and the mixing processes in particular, we have opted for a faster parametrized approach which allows us to study the effect of various uncertainties in section 4.
Cosmology
To compute the cosmological merger rate history we adopt a standard flat cosmology with $\Omega_\Lambda = 0.718$ and $h_0 = 0.697$ (Hinshaw et al. 2013). We adopt the star formation rate ${\rm d}^2 M_{\rm SFR}/({\rm d}t\,{\rm d}V_c)(z)$ per unit source time per unit comoving volume as a function of redshift $z$ from Madau & Dickinson (2014, Eq. 15 in their work). For the metallicity distribution as a function of redshift, we follow a prescription based on the mass-metallicity relation of Savaglio et al. (2005) and the average cosmic metallicity scaling of Kewley & Kobulnicky (2005, 2007). For the average present day metallicity we conservatively use 1.06 times solar metallicity, $Z_\odot = 0.0134$ (Asplund et al. 2009). We implicitly assume that the initial mass function and other binary properties do not depend on metallicity or redshift. This is reasonable since the fraction of binaries of interest formed in metal-free population III stars or extremely metal-poor stars is very small within this framework of assumptions and the merger rate is dominated by systems with metallicity near the maximum threshold metallicity. For this metallicity, observations indicate no evidence for a varying IMF (Kroupa 2002).
The rate density of binary black hole mergers is given in MdM16, Eq. (8), as the number of mergers $N_{\rm merge}$ per unit component mass $m_1$ and $m_2$ at the moment of merger $t_m$, per unit source time and per unit comoving volume $V_c$:
$$ \frac{{\rm d}^4 N_{\rm merge}}{{\rm d}m_1\,{\rm d}m_2\,{\rm d}t_m\,{\rm d}V_c} = \int {\rm d}Z \int_{P_{\rm min}}^{P_{\rm max}} {\rm d}P \int_0^{t_m} {\rm d}t_b\; \frac{{\rm d}^2 M_{\rm SFR}}{{\rm d}t\,{\rm d}V_c}(t_b)\; \frac{{\rm d}^5 N_{\rm binaries}}{{\rm d}m_1\,{\rm d}m_2\,{\rm d}P\,{\rm d}Z\,{\rm d}M_{\rm SFR}}\; p(t_m; m_1, m_2, P, Z, t_b). \qquad (1) $$
Here, the star formation rate ${\rm d}^2 M_{\rm SFR}/({\rm d}t\,{\rm d}V_c)$ is evaluated at the binary birth time $t_b$ and ${\rm d}^5 N_{\rm binaries}/({\rm d}m_1\,{\rm d}m_2\,{\rm d}P\,{\rm d}Z\,{\rm d}M_{\rm SFR})$ is the number density of binaries formed per unit $m_1$, $m_2$, initial orbital period $P$, and metallicity $Z$ per unit star formation rate. The time delay distribution is given by the probability density $p(t_m; m_1, m_2, P, Z, t_b)$ for a binary to merge at time $t_m$ if it was formed with the given $m_1$, $m_2$, $P$, $Z$ at time $t_b$. Note that $m_1$ and $m_2$ refer to the black hole masses and not the birth masses of the progenitor stars. The innermost integral is taken over all birth times $t_b$ preceding the merger time $t_m$, where the zero of time corresponds to the Big Bang. The minimum and maximum initial orbital periods $P_{\rm min} = 10^{0.075}$ days and $P_{\rm max} = 10^{5.5}$ days and the initial period distributions are based on observations of O-type stars (Sana et al. 2012), extending the period distribution to allow for effectively single stars as in de Mink & Belczynski (2015). For further information we refer to MdM16.
Detection rates
We convert the cosmological merger rate into a rate of detections by advanced LIGO (Aasi et al. 2015) and Virgo (Acernese et al. 2015) detectors per unit observer time by folding in the gravitational waveform models and the detector sensitivity:
$$ \frac{{\rm d}N_{\rm detect}}{{\rm d}t_{\rm obs}} = \int {\rm d}m_1 \int {\rm d}m_2 \int {\rm d}V_c\; f_{\rm detect}\big(z(t_m), m_1, m_2\big)\, \frac{1}{1+z}\, \frac{{\rm d}^4 N_{\rm merge}}{{\rm d}m_1\,{\rm d}m_2\,{\rm d}t_m\,{\rm d}V_c}. \qquad (2) $$
Here, $f_{\rm detect} = f(z(t_m), m_1, m_2)$ is the probability that LIGO and Virgo will detect a coalescing black hole binary with given component masses $m_1$ and $m_2$ merging at a redshift $z(t_m)$ such that the gravitational waves emitted from a source merging at time $t_m$ will arrive at the Earth today. The term $1/(1+z)$ reflects the time dilation of the source clock (with which all times are measured unless otherwise specified) with respect to the observer clock $t_{\rm obs}$. We evaluate these integrals with a Monte Carlo simulation.
We model the gravitational-wave emission from a binary by using the IMRPhenomB waveform approximant (Ajith et al. 2011). This waveform includes the post-Newtonian inspiral and the perturbative ring down, connected by a smooth merger via a phenomenological approximation calibrated to numerical relativity simulations. Although we expect the spins of the stars to be aligned by tides for this formation channel, the orientation and magnitude of the spins of the black holes are uncertain as they may be affected by stochastic processes during the collapse. For the purpose of the waveform calculation we set the spins to zero. This generally underestimates the strength of the gravitational-wave signal relative to that expected from a binary with aligned spins, which seems likely in this scenario (MdM16). Although more precise waveforms are now available, the accuracy provided by IMRPhenomB is sufficient for our purposes here, especially since most of our binaries have mass ratios close to unity.
As described in (MdM16), our Monte Carlo simulation generates a set of simulated merging binary black holes, which we label with an index k. We divide the history of the Universe into a large number of bins by redshift, which we label with an index j. For each sample binary k, we redshift the waveform to account for the cosmological expansion of the Universe, thus producing a redshifted frequency-domain waveformh(f ) k,j for each merger bin j corresponding to redshift z j . We compute the signalto-noise ratio ρ at which an optimal (face-on, overhead) source at this redshift and its corresponding luminosity distance d L (z j ) would be detected by a single advanced LIGO interferometer: ( 3) Here, S n (f ) is the noise power spectral density of the detectors. To estimate detectability at full design sensitivity, we use the so-called zero-detuning, high-power configuration (Abbott et al. 2010). For estimates at O1 sensitivity, we use the reference O1 noise curve Abbott et al. (2015) (see Abbott et al. (2016d) for associated calibration accuracy).
The signal-to-noise ratio will depend on the source location on the sky relative to the detector and the source orientation. The projection coefficient $\Theta$ as a function of these angles is given by Finn (1996). We choose a single-detector threshold signal-to-noise ratio $\rho_t = 8$ as a proxy for the detectability of the source by a network; the detection probability for the given source at a given redshift is then (e.g., Belczynski et al. 2014)
$$ f_{\rm detect} = 1 - C_{\Theta/4}\!\left(\frac{\rho_t}{\rho}\right), \qquad (4) $$
where the cumulative distribution function of the projection coefficient, $C_{\Theta/4}$, is measured with a separate numerical Monte Carlo. We can finally compute the total merger rate that is detectable by summing over all redshift bins and simulated binaries:
$$ R_{\rm detect} = \sum_k \sum_j f_{\rm detect}(z_j, m_{1,k}, m_{2,k})\; \frac{{\rm d}N^{\rm merge}_{k,j}}{{\rm d}t\,{\rm d}V_c}\; {\rm d}V_c(z_j)\; \frac{1}{1+z_j}, \qquad (5) $$
where ${\rm d}N^{\rm merge}_{k,j}/({\rm d}t\,{\rm d}V_c)$ is the merger rate for sample binary $k$ in redshift bin $z_j$, ${\rm d}V_c(z_j)$ is the comoving volume associated with redshift bin $z_j$, and the last term comes from the difference between source time and observer time, ${\rm d}t/{\rm d}t_{\rm obs} = 1/(1+z)$.
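A direct transcription of the discretized sum in Eq. (5) is straightforward; in the sketch below the per-binary, per-redshift-bin merger rates, comoving volumes, and detection probabilities are assumed to have been tabulated by the Monte Carlo described above.

```cpp
// Discretized detection rate: sum over sample binaries k and redshift bins j.
// All input tables are assumed outputs of the Monte Carlo population synthesis.
#include <cstddef>
#include <vector>

double detectionRate(const std::vector<std::vector<double>>& mergerRate, // dN_merge[k][j] per dt dVc
                     const std::vector<std::vector<double>>& fDetect,    // detection probability [k][j]
                     const std::vector<double>& dVc,                     // comoving volume of bin j
                     const std::vector<double>& z) {                     // redshift of bin j
    double rate = 0.0;
    for (std::size_t k = 0; k < mergerRate.size(); ++k)
        for (std::size_t j = 0; j < dVc.size(); ++j)
            rate += fDetect[k][j] * mergerRate[k][j] * dVc[j] / (1.0 + z[j]);
    return rate;   // detections per unit observer time
}
```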
RESULTS
We provide predictions for the number of detectable events and their distribution, and compare these with observed values based on The Event, GW150914. These results are summarized in Table 1.
Cosmic merger rate and the local merger rate
The overall cosmic merger rate is shown in Figure 1 for our default simulation. The shape follows the rise and fall of the cosmic star formation rate for low metallicity stars (see also Fig. 3 & 8 in MdM16), shifted by the time delay between the birth of a massive binary star progenitor system and the final merger of the two black holes. The typical time delay in our default model is 4-11 Gyr (cf., Fig. 6 in MdM16). As a result, the earliest mergers occur at a redshift of z ∼ 1.5, and the merger rate reaches a maximum of about 20 Gpc$^{-3}$ yr$^{-1}$ at z ∼ 0.4, after which it drops by a factor of 2 at z = 0. The local merger rate derived from our default model, 10 Gpc$^{-3}$ yr$^{-1}$, and the estimates obtained when we vary our model assumptions (see Table 1 of MdM16) are consistent (with the exception of model variation Mdot2, which produces zero detectable events from this channel, as we discuss below) with the conservative inferred range of 2-400 Gpc$^{-3}$ yr$^{-1}$ from 16 days of double coincident advanced LIGO O1 observations (Abbott et al. 2016c). The inference is based on The Event as well as lower-significance triggers assuming a redshift-independent volumetric merger rate. The ranges allow for different underlying mass distributions for the BH-BH population.
Detection rate
Our default model predicts that advanced gravitational-wave detectors operating at full design sensitivity could observe 470 ± 25 events per year of coincident observation resulting from binary black hole mergers formed through the Case M scenario. The error bar given here corresponds exclusively to the numerical uncertainty of the Monte Carlo integral, and does not include the systematic uncertainties in the assumed model, which are discussed in the next section. The corresponding rate for the sensitivity of the first observing run implies about 40 events per year. This scales to 1-2 detections for the first 16 days of double-coincident O1 observations.
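The 16-day figure follows from a simple scaling of the yearly O1-sensitivity rate quoted above:
$$ 40\ {\rm yr^{-1}} \times \frac{16\ {\rm d}}{365.25\ {\rm d}} \approx 1.8\ {\rm events}, $$
consistent with the 1-2 detections stated here.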
The 16 day double-coincident O1 run yielded one significant detection as well as one candidate event of lower significance, which has a posterior probability larger than 0.8 of being of astrophysical origin. No other triggers with significance larger than 0.5 were reported (Abbott et al. 2016c). These findings are consistent with the prediction of 1-2 events in our default model and the ranges obtained when exploring the variations discussed in Section 4.
Redshift distribution of detectable mergers
The reach of gravitational-wave instruments is limited. The gravitational-wave strain and, hence, the signal-to-noise ratio are inversely proportional to the luminosity distance at fixed redshifted masses $m(1+z)$ (see below). Therefore, detection efficiency drops as a function of distance (redshift), with only massive and favorably located and oriented sources detectable at higher redshifts (see Figure 4 of Abbott et al. 2016b). As a result, the redshift distribution of detectable events is shifted toward lower redshifts with respect to the total merging binary population. This is shown in Figure 1. The corresponding cumulative distributions of the redshift of detectable events are shown in the lower panels of Figure 1.
The median redshift for detectable sources is z ∼ 0.5 in our default simulation for full design sensitivity. During the less sensitive O1 run we are biased towards events occurring at smaller redshifts, and the median redshift of detections is z ∼ 0.2. The redshift inferred for The Event, $z = 0.09^{+0.03}_{-0.04}$ (Abbott et al. 2016e), lies approximately at the lower tenth percentile of the simulated distribution of detectable mergers for O1 sensitivity.
We also provide the cumulative distribution of the redshift of formation for the detectable events in the lower panels of Figure 1. The typical events observable at full sensitivity result from systems that were formed at redshifts z ∼1-4.8 (90% range), implying that they probe star formation and massive star evolution during and prior to the cosmic star formation peak.
MdM16 found that a total of ∼ 1250 binary black holes merge per year of local (z = 0) observer time after forming through the chemically homogeneous evolution channel. The detection rate calculations described above indicate that ∼ 40% (∼ 3%) of all potentially observable mergers could be detected with instruments operating at full design (O1) sensitivity.
Total masses, chirp masses and mass ratios
In Figure 2 we show the predicted distributions of properties that can in principle be inferred from the gravitational-wave signals of detected sources. Distributions for sources detectable at full design sensitivity are shown in the top row and the predictions for those detectable at O1 sensitivity are in the lower panels, together with the inferred parameters of GW150914.
There is a strong preference for events resulting from systems with comparable masses for the individual black holes. There are no binaries of interest with mass ratios $q = m_2/m_1 < 0.5$ and more than two thirds of detections come from sources with $q > 0.8$, as can be seen in the left-hand panels in Figure 2. The preference for equal masses is a robust prediction of this evolutionary scenario (Section 4) and is independent of the assumed detector sensitivity. The inferred mass ratio for The Event is consistent with these predictions.
We further show the distributions for the chirp mass, $M_c = m_1^{3/5} m_2^{3/5} (m_1 + m_2)^{-1/5}$, and total mass, $m_{\rm tot} = m_1 + m_2$, in the central and right-hand panels of Figure 2. The chirp mass is a combination of the component masses $m_{1,2}$ which governs the phase evolution of gravitational waves at the leading order during the inspiral phase, and is therefore the most readily observable parameter for low-mass binaries. However, for the high-mass systems of interest here, typically only the late stages of the inspiral fall within the sensitive frequency band of the detectors. The total mass therefore becomes the more accurately measurable mass parameter (Veitch et al. 2015; Graff et al. 2015; Haster et al. 2016). We provide both distributions for ease of comparison with other predictions in the literature. Both the source-frame and redshifted, $m \to m(1+z)$, masses are given. The redshifted quantities are the direct effect of the cosmological redshift of the gravitational waves in an expanding universe. The mass-redshift degeneracy (Krolak & Schutz 1987) can be broken by converting the luminosity distance, inferred from the gravitational-wave amplitude, into a redshift using standard cosmology; this makes it possible to extract source-frame masses (e.g. Haster et al. 2016).
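For orientation, source-frame component masses similar to those inferred for GW150914 (roughly $36$ and $29\,M_\odot$) give
$$ M_c = \frac{(m_1 m_2)^{3/5}}{(m_1+m_2)^{1/5}} \approx \frac{(36\times 29)^{3/5}}{65^{1/5}}\,M_\odot \approx 28\,M_\odot, $$
with $m_{\rm tot} \approx 65\,M_\odot$, while the corresponding redshifted masses are larger by the factor $(1+z)$.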
The source frame chirp masses and total masses show practically no dependence on detector sensitivity. In other words, the distributions of chirp masses and total masses of detectable binaries do not significantly evolve with redshift. The median source frame chirp masses are $M_{c,\rm full} = 35^{+10}_{-10}\,M_\odot$ and $M_{c,\rm O1} = 34^{+11}_{-10}\,M_\odot$ for full and O1 detector sensitivity respectively, where the error bars indicate the 90% confidence intervals. These values are consistent with the parameters inferred for The Event, $M_{c,\rm GW150914} = 28^{+2}_{-2}\,M_\odot$. The corresponding total source frame masses are $m_{\rm tot,full} = 82^{+21}_{-25}\,M_\odot$ and $m_{\rm tot,O1} = 80^{+24}_{-24}\,M_\odot$ respectively, also consistent with the value inferred for The Event, $m_{\rm tot,GW150914} = 65^{+5}_{-4}\,M_\odot$. The Event is also consistent with the distribution of redshifted chirp masses and total masses, although it resides on the lower side of the redshifted mass distributions, in line with its small inferred redshift.
In Figure 3 we provide two further visualizations of our simulations showing two-dimensional distributions. In the left-hand panel of Figure 3 we show the joint distribution of the mass ratio and redshifted total mass for full design sensitivity and O1 sensitivity. The inferred ranges for The Event are over plotted. In the right-hand panels we display the properties of the individual simulated merging binaries in our Monte Carlo simulations, showing the delay time versus chirp mass. The size and color of the symbols show how much these simulated systems contribute to the overall detection rate. The largest contributions come from binaries in the bottom right of the diagram, i.e., systems with relatively short time delays and high masses which emit stronger gravitational-wave signals, detectable at greater distances. The preference for short delay times is less strong for the events detectable in the 16 days double-coincident O1 run, which probes a smaller volume, therefore preferring late-time mergers. The local merger rate instead is dominated by lower mass events with relatively long delay times (as can be seen in Figure 9 of MdM16), and the local detection rate is a trade-off between this and the greater sensitivity to more massive systems.
The predicted distributions show a stronger preference for high masses than either classical population-synthesis predictions for field binary black holes (e.g., Dominik et al. 2015) or dynamically formed binary black hole models in globular clusters (e.g., Rodriguez et al. 2015). All merging binaries formed through this channel have total masses $\gtrsim 50\,M_\odot$ under the default model assumptions. Furthermore, we find no delay times shorter than 3 Gyr, which has implications for the detectable stochastic background signal (Abbott et al. 2016f).
ROBUSTNESS OF RESULTS
Substantial uncertainties in these simulations arise from several sources: (i) the assumptions for the initial conditions, (ii) the physics of the evolution of the systems (in particular the efficiency of the mixing processes, the mass and angular momentum losses), (iii) cosmological assumptions, and (iv) assumptions regarding gravitational-wave detectability. The impact of the initial conditions such as the binary fraction and the adopted distribution functions for the binary parameters has been quantified by de Mink & Belczynski (2015) for the classical isolated binary scenario. They concluded that the impact of uncertainties in the initial distributions is fully dominated by uncertainties in the initial mass function, which accounts for a factor of 8 up and down in the overall rate, with very little to no effect on the distribution of the properties of double black hole mergers. In MdM16 we quantified various aspects of the impact of (ii) and (iii) on the local (z = 0) merger rate, as well as the maximum cosmological merger rate and the redshift at which the maximum is reached. Here, we provide a similar exploration, now probing the impact of model variations on the detection rate for full design sensitivity observations R detect (full), the number of detections expected for the 16 days of double-coincident O1 observations analyzed so far N detect (O1) (Abbott et al. 2016h), as well as the median and 90% confidence intervals on the chirp mass, total mass, mass ratio and component masses in the source frame. We consider the same variations as MdM16, to which we refer for an extensive discussion and motivation for the considered variations. Here, we limit ourselves to a brief summary. Results for all model variations are summarized in Table 1.
The efficiency of the mixing processes constitutes one of the main uncertainties in the evolutionary models. We therefore consider a variation PoorMixing in which we used a more conservative threshold for chemically homogeneous evolution that roughly halves the window of interest in initial orbital period space. We vary the threshold metallicity for chemically homogeneous evolution in models Zmin0.002 and Zmin0.008. We consider the uncertainties in angular momentum loss driven by stellar winds in models ConstA, which represents enhanced angular momentum loss by keeping the separation fixed, and HalvedA, which further enhances angular momentum loss under the assumption of slow winds, shrinking the orbital separation by a factor of two. Model variations Mdot2 and Mdot0.2 represent enhanced and reduced mass loss. These variations account for uncertainties in the wind mass loss as well as other modes of mass loss such as eruptive mass loss episodes expected for pulsational pair instability supernovae. Model Mdot2ConstA considers enhanced mass loss but assumes higher angular momentum loss from the system (see sect. 7.2 and 7.3 of MdM16). Finally, we consider the uncertainty in the threshold for pair instability supernovae in model PISN80 and a variation in the assumed metallicity spread at each redshift in model Dex0.5. One variation, model Mdot2, which corresponds to a doubling of the mass loss, predicts no detectable events. We do expect that detectable events would arise even with doubled mass loss at lower metallicities, given the $\propto Z^{0.85}$ scaling of wind-driven mass loss rates (Vink et al. 2001), consistent with the findings of Marchant et al. (2016) who analyzed $Z = Z_\odot/10$, $Z_\odot/20$, and $Z_\odot/50$ populations. Our present results, which are based on $Z = 0.004$ models, represent a very conservative assumption.
The predictions for the detection rate at full sensitivity vary substantially, $R_{\rm detect}({\rm full}) = 90-1500$ per year for all models with non-zero predictions. The prediction for the number of detections in the 16 days of double-coincident O1 data analyzed so far varies between $N_{\rm detect}({\rm O1}) = 0.3-2.5$ with only two exceptions: model Mdot2 predicts no detections and model Dex0.5 predicts 10 detections.
We find that the preference for relatively high chirp and total masses is a robust prediction seen in all model variations that yield detectable events. Model variation Mdot2ConstA in which we adopted enhanced wind mass loss results in the lowest median masses. Model PISN80 results in the highest median masses. This model increases the maximum final mass at which stars can become black holes (instead of exploding in a pair instability supernova which would leave no remnant).
The preference for comparable masses is also a robust prediction. The preference for equal masses becomes stronger when we consider a reduced efficiency of tidally induced mixing (PoorMixing; q > 0.72), and it is least strong for the model with reduced mass loss and enhanced angular momentum loss through slow winds (Mdot2ConstA; q > 0.55). At this time, all models apart from Dex0.5 are consistent with the number of detections observed during the first 16 days of double-coincident observation from the O1 run, after accounting for Poisson statistics and the possibility that this channel is not the only channel that contributes to binary black hole detections.

Table 1. Quantification of the impact of model variations on our predictions and a comparison with GW150914. We list $R_{\rm detect}$, the detection rate at full design sensitivity; $N_{\rm detect}({\rm O1})$, the expected number of detections at the sensitivity of O1 for a 16 day period of double-coincident observations; and the median and 90% intervals for the mass parameters that can be inferred from the waveforms, where $M_c$ is the chirp mass, $m_{\rm tot}$ the total mass, and $q = m_2/m_1$ the mass ratio with component masses $m_1 > m_2$. For the mass ratio we provide the 90% lower bound on q. We list the union of the 90% ranges as "combined" parameters. All parameters refer to the distributions of detectable events at full design sensitivity, unless otherwise indicated. For comparison, we provide the parameters inferred for GW150914 in the source frame. The reader may also wish to compare with the candidate event mentioned in Abbott et al. (2016h), if it is indeed of astrophysical origin.
Uncertainties in the gravitational-wave detectability
We use a single-detector signal-to-noise ratio threshold of 8 as a proxy for detectability by the LIGO-Virgo network. In practice, gravitational-wave search pipelines (e.g., Babak et al. 2013;Cannon et al. 2012) use more complex statistics than the signal-to-noise ratio to treat non-stationary, non-Gaussian noise backgrounds. As a result, the actual sensitivity of advanced gravitationalwave detectors will depend on the details of the network (such as the number of detectors operating in coincidence at a given time, which in turn depends on their duty factor), the detector data quality, the specific algorithms used for the search, and even the details of the source, such as the component masses.
Moreover, the spins of the binary components can have a significant effect on the gravitational-wave signal. This is particularly true for massive binaries: large aligned spins can enhance the strength of the signal, possibly increasing the detector sensitive volume by factors of ∼ 2 (see, e.g., figure 6 of Belczynski et al. 2014). Therefore, our detectability predictions are simplifications which must be treated with caution; however, the uncertainties involved are likely smaller than those in the physics governing the evolution of the binary systems.
The detection rate predictions are based on advanced LIGO detectors operating at either full sensitivity or O1 sensitivity. The detectors will gradually evolve in sensitivity between 2015 and the end of the decade, with several scheduled data-taking runs interspersed with commissioning breaks (Abbott et al. 2016g). While the exact predictions for any intermediate runs depend on the exact shape of the detector noise spectrum and must take into account the cosmological variations in merger rates as described above, a crude estimate can be made by assuming that the detection rate scales with the surveyed volume (see Fig. 4 of Abbott et al. 2016b).
SUMMARY AND CONCLUSION
The channel for chemically homogeneous evolution in tidally distorted massive binary systems is of great interest in light of current searches for binary black hole mergers. The high and comparable component masses inferred for GW150914 are a natural and robust outcome of this evolutionary channel. The predicted detection rate is less certain but fully consistent with the first 16 days of double-coincident O1 observations.
At present, with a single confident detection, it is not possible to distinguish between this channel, the classical channel for isolated binary evolution, and the dynamical formation channel. However, the near-future prospect of up to hundreds of detections per year (Abbott et al. 2016c) will probe the demographics of stellar mass binary black holes. This, together with measurements of the stochastic background from individually unresolvable mergers (e.g. Abbott et al. 2016f), will provide constraints on the formation mechanisms.
Our default model predicts about 500 detections per year at full design sensitivity, corresponding to about 1.8 detections in 16 days of O1-sensitivity data. The model variations we consider give variations by factors of 3-5 up or down, although the possibility of zero detections from this channel can not be excluded at this stage.
The preference for binary black hole mergers with comparable component masses is a robust outcome of all models considered here (in all model variations we find that 90% of the detectable events have mass ratios q larger than 0.55). The same holds for the preference for high total and chirp masses (the median total mass ranges from 59 to 93 $M_\odot$ in the model variations we consider).
Possible future detections of binary black hole mergers with significantly unequal component masses or low total masses will be evidence in favor of contributions by the classical isolated binary channel and/or the dynamical formation channel. At 90% confidence, none of the model variations predict total masses below 32 $M_\odot$ or mass ratios q < 0.55. Observations outside these boundaries are unlikely to arise from this channel in which both stars evolve chemically homogeneously. However, it would be interesting to explore variations of this evolutionary path in which only one of the stars evolves chemically homogeneously.
We find that the cosmological merger rate peaks at a redshift of 0.4 with the majority of events being just out of reach of the full design sensitivity of the detectors. We find no mergers beyond z = 1.5 in the default model. This has implications for the stochastic background signal that can be tested against predictions from other binary black hole formation channels.
Although the simulated merger and detection rates for this channel are sensitive to model uncertainties, this channel does not suffer from the key physics uncertainties that affect the classical isolated binary evolutionary channel, namely the treatment of unstable and non-conservative mass transfer, common envelope ejection events, and the still unconstrained black hole birth kicks. Further efforts are needed to advance detailed one-dimensional (such as Marchant et al. 2016) and three-dimensional simulations of the physical processes affecting massive near-contact binaries. If future disentangling of the contributions by different scenarios becomes possible, gravitational-wave events will provide interesting constraints on the unique physics of the mixing processes that govern this channel.
|
2016-06-06T14:08:23.000Z
|
2016-03-07T00:00:00.000
|
{
"year": 2016,
"sha1": "fb7435967ce0f604fc6e8eeeb43fe6adc0b9a6c0",
"oa_license": null,
"oa_url": "https://academic.oup.com/mnras/article-pdf/460/4/3545/8117308/stw1219.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "fb7435967ce0f604fc6e8eeeb43fe6adc0b9a6c0",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
16931001
|
pes2o/s2orc
|
v3-fos-license
|
The Formation of a Helium White Dwarf in Close Binary System with Diffusion
We study the evolution of a system composed of a 1.4 M⊙ neutron star and a normal, solar composition star of 2 M⊙ in an orbit with a period of 1 day. Calculations were performed employing the binary hydro code presented in Benvenuto & De Vito (2003) that handles the mass transfer rate in a fully implicit way. We have now included the main standard physical ingredients together with diffusion processes and a proper outer boundary condition. We have assumed fully non-conservative mass transfer episodes. In order to study the interplay of mass loss episodes and diffusion we considered evolutionary sequences with and without diffusion in which all Roche lobe overflows (RLOFs) produce mass transfer. Another two sequences in which thermonuclearly-driven RLOFs are not allowed to drive mass transfer have been computed with and without diffusion. To our notice, this study represents the first binary evolution calculations in which diffusion is considered. The system produces a helium white dwarf of ∼ 0.21 M⊙ in an orbit with a period of ∼ 4.3 days for the four cases. We find that mass transfer episodes induced by hydrogen thermonuclear flashes drive a tiny amount of mass transfer. As diffusion produces stronger flashes, the amount of hydrogen-rich matter transferred is slightly higher than in models without diffusion. We find that diffusion is the main agent in determining the evolutionary timescale of low mass white dwarfs even in the presence of mass transfer episodes.
INTRODUCTION
At present, it is a well-established fact that low mass white dwarf (WD) stars should be formed during the evolution in close binary systems (CBSs). These objects are expected to have a helium rich interior simply because they have a mass below the threshold for helium ignition of about 0.45 M⊙. If they were formed as a consequence of single star evolution, we would have to wait for timescales far in excess of the present age of the Universe to find some of them.
Formation of helium WDs in CBSs was first investigated long ago by Kippenhahn, Kohl & Weigert (1967) and Kippenhahn, Thomas & Weigert (1968). They found that these objects are formed during the evolution of low mass CBSs and that the cooling evolution is suddenly stopped by thermonuclear flashes that are able to swell the star up to produce further Roche lobe overflows (RLOFs).
For some time now, low mass WDs have been discovered as companions to millisecond pulsars (MSPs). This fact sparked interest in helium WDs in order to investigate the deep physical links between both members of a given pair. In particular, it represents an attractive possibility to infer characteristics of the neutron star behaving as an MSP by studying the WD in detail. Studies devoted to helium WD properties are those of Alberts et al. (1996); Althaus & Benvenuto (1997); Benvenuto & Althaus (1998); Hansen & Phinney (1998a); Driebe et al. (1998); Driebe et al. (1999); (2000) considered the problem in the frame of detailed binary evolution calculations. More recently, Podsiadlowski, Rappaport, & Pfahl (2002) have also computed the evolution of some CBS configurations that give rise to the formation of helium WDs. Also, Nelson & Rappaport (2003) have explored in detail the evolutionary scenarios of binary systems with initial periods shorter than the bifurcation one, leading to the formation of ultra-compact binaries with periods shorter than an hour. By contrast, in this paper we shall deal with a system with an initial period larger than the bifurcation one, leading to wider binaries.
Remarkably, the first WD found as a companion of an MSP in a globular cluster has been detected by Edmonds et al. (2001). Among recent observations of low mass WD companions to MSPs we should quote those by van Kerkwijk et al. (2000), who detected the WD companion of the binary MSP PSR B1855+09, whose mass is known accurately from measurements of the Shapiro delay of the pulsar signal, M_WD = 0.258^{+0.028}_{-0.016} M⊙. The orbital period of this binary MSP is 12.3 days. More recently, Bassa, van Kerkwijk & Kulkarni (2003) have found a faint bluish counterpart for the binary MSP PSR J0218+4232. The spectra confirm that the companion is a helium WD and, in spite of observations being of insufficient quality to put a strong constraint on the surface gravity, the best fit indicates a low log g value and hence a low mass (≈ 0.2 M⊙). On the other hand, independently, Ferraro et al. (2003) have identified the optical binary companion to the MSP PSR J1911-5958A, located in the halo of the galactic globular cluster NGC 6752. This object turned out to be a blue star whose position in the color-magnitude diagram is consistent with the cooling sequence of a low-mass (≈ 0.17-0.20 M⊙), low metallicity helium WD at the cluster distance. This is the second helium WD with a mass in this range that has been found to orbit an MSP in a galactic globular cluster. Also, Sigurdsson et al. (2003) have detected two companions for the pulsar B1620-26, one of stellar mass and one of planetary mass. The color and magnitude of the stellar companion indicate a WD of 0.34 ± 0.04 M⊙ of age 4.8 × 10^8 yr. For previous detections of this kind of objects we refer the reader to the paper by Hansen & Phinney (1998b).
From a theoretical point of view, it was soon realized that the key ingredient of WD models is the hydrogen mass fraction in the star. Consequently, this called for a detailed treatment of the outer layers of the star. Iben & MacDonald (1985) demonstrated the relevance of diffusion in the evolution of intermediate mass CO WDs while Iben & Tutukov (1986) found it to be also important in low mass WDs.
More recently, Althaus et al. (2001abc) revisited the problem of the formation of helium WDs. In doing so, they mimicked binary evolution by removing mass from a 1 M⊙ object on the RGB. The main goal of these papers was to investigate in detail the role of diffusion during the evolution of the pre-WD object. They allowed gravitational settling, chemical and thermal diffusion to operate. However, they did not consider the possibility of any mass transfer episode after detachment from the RGB. Perhaps the main result of Althaus et al. (2001abc) was the finding that for models with diffusion there exists a threshold mass value M_th above which the object undergoes several thermonuclear flashes in which a large fraction of the hydrogen present in the star is burnt. Consequently, as the star enters the final cooling track it evolves fast, reaching very low luminosities on a timescale comparable with the age of the Universe. Quite on the contrary, in models without diffusion, evolutionary timescales are much longer, making them difficult to reconcile with observations. For WDs belonging to CBSs in company with a MSP, WD ages should be comparable to the characteristic age of the pulsar, τ_PSR = P/(2Ṗ) (for a pulsar of period P with period derivative Ṗ that had an initial period P_0 such that P_0 ≪ P, and braking index n = 3). This should be so because it is generally accepted that the MSP is recycled by accretion from its normal companion. However, it was found that the WD was much dimmer than predicted by models without diffusion, which should be interpreted as a consequence of a faster evolution. This has been the case for the companion of PSR B1855+09.
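As a side note, the characteristic age quoted above follows directly from the measured spin parameters. A minimal sketch (with made-up millisecond-pulsar numbers, not values from this paper) is:

```python
# Hypothetical illustration (values are NOT from the paper): the characteristic
# (spin-down) age of a pulsar, tau = P / (2 Pdot), used in the text as the
# benchmark against which white-dwarf cooling ages are compared.

def characteristic_age(period_s, period_derivative):
    """Return the characteristic age tau = P / (2 Pdot) in years."""
    seconds_per_year = 3.156e7
    return period_s / (2.0 * period_derivative) / seconds_per_year

# Example with assumed millisecond-pulsar numbers:
P = 5.0e-3       # spin period in seconds (assumed)
Pdot = 1.0e-20   # period derivative (assumed, dimensionless)
print(f"tau_PSR ~ {characteristic_age(P, Pdot):.2e} yr")
```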
For objects with masses below M_th no thermonuclear flash occurs and the star does not undergo another RLOF. Consequently, it retains a thick hydrogen layer, able to support nuclear burning, which forces the WD to remain bright for a very long time. This is the case for the companion of PSR J1012+5307.
It is the aim of the present paper to revisit the problem of the formation and evolution of helium WDs in CBSs by performing full binary computations considering diffusion, starting with models on the main sequence all the way down to stages in which the remnant has become a very cool WD. To our knowledge, this is the first time such a study has been carried out. In this way we largely generalize the previous studies from our group on this topic. In doing so, we have preferred to concentrate on a particular binary system, deferring a detailed exploration of the huge parameter space (masses, orbital periods, chemical compositions, etc.) to future publications. To be specific, we have chosen to study a CBS composed of a 2 M⊙ normal star together with a neutron star with a "canonical" mass of 1.4 M⊙ on an initial orbit with a period of 1 day. We assumed solar chemical composition with Z = 0.02, for which M_th ≈ 0.19 M⊙ (Althaus et al. 2001b).
In order to explore the role and interplay of mass loss episodes and diffusion we have constructed four complete evolutionary calculations, combining the presence or absence of diffusion with the allowance or suppression of flash-induced mass transfer (the four cases, A to D, are listed in Section 4). Regarding mass transfer episodes, we have chosen to study the case of fully non-conservative conditions, i.e., those in which all the matter transferred from the primary star is lost from the system, carrying away all its intrinsic angular momentum. We do so in order to get the strongest possible RLOFs which, in turn, will produce the largest mass transfer episodes. In this sense, we shall get an upper limit to the effects of RLOFs on the whole evolution of the star, in particular regarding the ages of very cool WDs.
The remainder of the paper is organized as follows. In Section 2 we describe our code, paying special attention to the changes we implemented in the scheme for computing mass transfer episodes. Then, in Section 3 we describe the evolutionary results for the four cases considered here. Finally, in Section 4 we discuss the implications of our calculations and summarize the main conclusions of this work.
NUMERICAL METHODS
In the computations presented below we have employed the code for computing stellar evolution in close binary systems presented in Benvenuto & De Vito (2003). Here we have incorporated the full set of physical ingredients with the aim of obtaining state-of-the-art evolutionary results. In particular, we have included a complete set of nuclear reactions to describe hydrogen and helium burning, together with diffusion processes. For more details on the physics considered see, e.g., Althaus et al. (2001a).
Regarding the outer boundary condition, we have incorporated the formula given by Ritter (1988) for computing the mass transfer rate Ṁ, of the form Ṁ = Ṁ_0 exp[(R − R_L)/H_P], where Ṁ_0 is the MTR for a star that exactly fills the Roche lobe (see Ritter's paper for its definition), R_L is the equivalent radius of the Roche lobe (see below), R is the stellar radius and H_P is the photospheric pressure scale height. We have considered that a mass transfer episode is underway when R ≥ R_L − ξH_P with ξ = 16. In this way the star begins (ends) to transfer mass in a very natural and smooth way. This has been completely adequate for the purpose of carrying out the calculations presented below.
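To make this boundary condition concrete, here is a minimal sketch of the exponential mass-transfer prescription and of the onset criterion quoted above; the numerical values are illustrative placeholders, not parameters of our models.

```python
import math

# Sketch of a Ritter (1988)-type mass transfer rate used as outer boundary
# condition: Mdot = Mdot0 * exp[(R - RL) / HP], with the onset criterion
# R >= RL - xi * HP (xi = 16 in the text). All numbers below are illustrative.

def mass_transfer_rate(mdot0, R, RL, HP):
    """Exponential (atmospheric) RLOF mass transfer rate."""
    return mdot0 * math.exp((R - RL) / HP)

def rlof_underway(R, RL, HP, xi=16.0):
    """Mass transfer is switched on once R >= RL - xi * HP."""
    return R >= RL - xi * HP

# Illustrative values (in solar radii, say):
R, RL, HP = 0.99, 1.00, 1.0e-3
if rlof_underway(R, RL, HP):
    print(mass_transfer_rate(1.0e-9, R, RL, HP))  # e.g. in Msun/yr
```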
EVOLUTIONARY CALCULATIONS
In order to present the numerical calculations we shall describe in detail the sequence for which all physical ingredients were considered (Case A), in which we allowed diffusion and mass transfer in each RLOF to operate. The evolutionary track in the HR diagram corresponding to Case A is shown in Fig. 1. The 2 M⊙ object begins to evolve and the first RLOF occurs (point 1 in Table 1) when it is still burning hydrogen in the center; thus, we are dealing with Class A mass transfer. At that moment the central hydrogen abundance is X_H = 0.214578. From there on, as a consequence of the orbital evolution of the binary, the primary star undergoes a huge mass loss (see the first panel of Fig. 2), which continues up to the moment (point 2 in Table 1) at which central hydrogen exhaustion occurs. The star contracts and mass transfer is stopped; at that moment the mass of the primary is 1.59252 M⊙. A little later, as a consequence of the formation of a hydrogen shell burning zone, the star inflates and mass transfer starts again (point 3), continuing for a long period during which the star loses almost 90% of its initial mass, ending with a mass of 0.22007 M⊙. Hereafter we shall consider these two episodes together as the initial RLOF, in order to differentiate it from the later flash-induced RLOFs. During the initial mass transfer episode, the hydrogen content of the outermost layers dropped to a minimum value of ≈ 0.3, because mass transfer dredges up layers that were previously undergoing appreciable nuclear burning (see Fig. 3). This increase in the mean molecular weight of the plasma present in the outer layers of the star favours the contraction of the primary star. At point 4 the star detaches from the Roche lobe, and from then on it evolves bluewards very fast, up to approximately the moment at which it reaches a local maximum in the effective temperature. After such a maximum effective temperature, evolution slows down appreciably, allowing diffusion enough time to significantly modify the hydrogen profile. Here, hydrogen tends to float simply because it is the lightest element present in the plasma. This is clearly shown as a steep increase in Fig. 3, where we show the surface hydrogen abundance of the model as a function of time.
Quite in contrast with the behaviour of the surface abundance, at the bottom of the hydrogen envelope hydrogen tends to sink, now as a consequence of the large abundance gradients. The net effect is that, while the outer layers get richer in hydrogen, diffusion is fueling hotter layers. Then, when the hydrogen rich layers become degenerate and conduction eases the energy flux outwards, hydrogen becomes hot enough to ignite. In degenerate conditions, ignition is unstable (see Fig. 4). Consequently, evolution suddenly accelerates and a hydrogen thermonuclear flash occurs. Such a flash is not strong enough to inflate the star to the point of forcing a new RLOF. Regarding surface abundances, we should remark that a short time after the thermonuclear flash occurs, the star develops a deep outer convective zone extending from very hydrogen rich layers down to others in which hydrogen is almost a trace. As a consequence, the hydrogen abundance suddenly drops to a value similar to the one the star had at the end of the initial RLOF (see Fig. 3 and Table 1). This mixing is noticeable in the evolutionary track (Fig. 1) as a sudden change of slope after the minimum in the effective temperature and luminosity of the star. After mixing, the outer layers of the star continue swelling until they come close to producing a RLOF, but begin to contract before one occurs. After maximum radius is reached, the star undergoes a fast contraction up to its maximum effective temperature, and from then on timescales again become long enough for diffusion to operate. Diffusion again floats hydrogen to the outer layers and fuels deeper ones, which will make the star undergo a second thermonuclear flash. Now the star has a higher degree of degeneracy in the critical layers, making the flash stronger than the previous one. From there on the star undergoes much the same evolution as after the previous flash, but now it inflates enough to undergo a third RLOF event. The conditions at the onset of this third RLOF correspond to point 5 in Table 1.
Obviously, this third RLOF is profoundly different from the initial one. Now the envelope is very dilute, so a tiny amount of stellar mass occupies a large portion of its radius. Consequently, very little mass is transferred from the primary, in contrast with the initial RLOF in which the primary transferred about 90% of its initial mass. The MTR during this third RLOF is depicted in the third panel of Fig. 2. Remarkably, the mass lost from the primary star during this third RLOF has a low hydrogen abundance due to the previous mixing.
After a few thousand years of mass transfer, the star contracts again, repeating essentially the same evolution it followed after the first thermonuclear flash. Remarkably, however, the star has lost hydrogen, due to nuclear burning as well as to mass transfer (see Fig. 5). Nevertheless, the star still has an amount of hydrogen high enough to force it to undergo another flash. Now the flash is rather more violent than the previous one, producing another RLOF. In order to gain clarity, we have chosen to discuss in detail the loop due to the last thermonuclear flash. In Fig. 6 we show the excursion of the star in the HR diagram, indicating the main physical agents acting in the star together with some particular models (solid dots). The hydrogen profiles corresponding to models before and just at the end of (after) the RLOF are shown in Fig. 7 (Fig. 8). Some relevant characteristics of these models are presented in Table 2.

Table 1. Selected stages of the evolution of a system composed of a 1.4 M⊙ neutron star and a normal, solar composition star of 2 M⊙ in orbit with an initial period of 1 day. Here we have considered diffusion and mass transfer during each RLOF episode (Case A). Points labeled with odd (even) numbers correspond to the beginning (end) of a mass transfer episode in Fig. 1. The last point corresponds to the end of the computation.
The hydrogen profiles for some of the models shown in Fig. 7 correspond to stages previous to and just after the last RLOF. Up to the model labeled 11000 (the model number in the sequence), the outwards motion of the profile is due to the nuclear burning during the flash. From model 11000 onwards (not shown in the figure), the profile moves outwards as a consequence of the mass transfer episode, which ends at model 11200. Notice that the loss of hydrogen is rather small (points 9 and 10 in Table 1).
In Fig. 8 the curve labeled 14500 corresponds to stages somewhat after the RLOF. The curves labeled 15000-15050 are displaced outwards due to nuclear burning, while curve 15075 corresponds to a profile modified by diffusion. Notice that the tail of the hydrogen profile gets appreciably deeper at approximately the time when the outermost layers become saturated with hydrogen.
From our calculations we find that only after four hydrogen thermonuclear flashes is the star able to enter the final cooling track of a helium WD (see Fig. 4). The evolutionary timescale of the model is presented in Fig. 9. Notice that the nuclear energy release at such advanced stages of evolution is a minor contribution to the total energy balance of the star. As a consequence, the star draws on its relic thermal content, which forces a fast evolution, reaching very low luminosities on timescales comparable to the age of the Universe. We stopped the calculation when the object reached log L/L⊙ = −5; at that moment the star had an age of about 19 Gyr.

Let us now discuss the results corresponding to Case C, in which we allowed all RLOFs to drive mass transfer but neglected diffusion (see Table 3). The evolutionary track for this case is shown in Fig. 1, panel C. Here, the evolution previous to the end of the initial RLOF is very similar to that corresponding to Case A. In other words, diffusion has a negligible effect on these evolutionary stages. Perhaps the main difference is that the increase in the surface hydrogen abundance previous to the initial RLOF found in Case A is absent here (see Table 1). However, the differences in the evolution become quite significant after the end of the initial RLOF. Since here, by assumption, the only physical agents able to modify abundances are nuclear reactions and convection, the outermost hydrogen-rich layers are not enriched in hydrogen and no fueling occurs at their bottom. Consequently, the evolution is very different. Notably, the star undergoes only three thermonuclear flashes and the outer layers have rather constant abundances, even though they also develop outer convection zones after each flash. As in the previous case, we computed the evolution up to the point when the object reached log L/L⊙ = −5, with an age of about 25.65 Gyr. Thus, evolution is markedly slower than in Case A. This is due to the fact that here the thermonuclear flashes are weaker, burning less hydrogen. As a consequence, the star is able to undergo appreciable thermonuclear energy release during the final cooling track as a helium WD. In this regard, notice that nuclear burning remains the dominant energy source of the star up to ages of 10 Gyr (see Fig. 9).
The sequence of models corresponding to Case B (Case C) is very similar to that corresponding to Case A (Case D) and will not be discussed in detail. The obvious major difference is related to the size the star is able to reach just after thermonuclear flashes. As in Case B (Case D) it is assumed that there is no limitation imposed by the size of the Roche lobe, after thermonuclear flashes the star reaches effective temperatures far lower than those allowed in the case of the occurrence of RLOFs. Quite noticeably, the evolutionary timescale of the final WD cooling track is largely independent of the occurrence of any thermonuclear flash-induced RLOF (see Fig. 9).
Table 2. Selected stages of the evolution of the primary star of the system corresponding to Case A. We have selected relevant models in the loop shown in Fig. 6. "Model" stands for the model number in the evolutionary calculation.

Table 3. Selected stages of the evolution of a system composed of a 1.4 M⊙ neutron star and a normal, solar composition star of 2 M⊙ in orbit with a period of 1 day. Here mass transfer is allowed to occur during each RLOF episode but diffusion has been neglected (Case C). Points labeled with odd (even) numbers correspond to the beginning (end) of a mass transfer episode. The last point corresponds to the end of the computation.

Another interesting difference arises regarding the characteristic timescale of evolution of the models from the red part of the diagram to the conditions of maximum effective temperature. We have found that in such stages, models for which thermonuclearly induced RLOFs are allowed undergo a much faster evolution, regardless of whether or not diffusion is included. To be specific, from minimum to maximum effective temperature, Case B models take ≈ 10^6 yr, while models with RLOFs spend only ≈ 10^4 yr from Roche lobe detachment to maximum effective temperature. This obviously indicates that it should be more difficult to find objects in such conditions than predicted by models without thermonuclearly induced RLOFs. Also, notice in Fig. 1 that the subflashes (little loops in the evolutionary tracks occurring when the star is evolving bluewards) happen at different effective temperatures depending on the allowance of RLOFs. In fact, in models with (without) RLOFs, subflashes occur at higher (lower) effective temperatures for consecutive thermonuclear flashes. This is so irrespective of whether or not diffusion is included.
DISCUSSION AND CONCLUSIONS
In this work we have computed the evolution of a binary system composed of a neutron star with a "canonical mass" of 1.4 M⊙ and a normal, population I main sequence star of 2 M⊙ in an orbit with a 1 day period. We have performed the calculations employing an updated version of the code presented in Benvenuto & De Vito (2003), in which we have included the main standard physical ingredients together with diffusion processes. A proper outer boundary condition was also incorporated following Ritter (1988) (see §2).
In order to explore the role of mass transfer episodes from the primary star and their interplay with diffusion we have considered four situations: diffusion, all RLOFs operate (Case A); diffusion, no flash-induced RLOF operates (Case B); no diffusion, all RLOFs operate (Case C); and no diffusion, no flash-induced RLOF operates (Case D). See the introduction (§ 1) for further details.
To our knowledge, these calculations represent the first detailed study of binary evolution considering diffusion. In this sense, this work represents a natural generalization of the results presented by Althaus, Serenelli & Benvenuto (2001a), in which binary evolution processes were mimicked by forcing a 1 M⊙ star on the red giant branch to undergo an appropriate mass loss rate. The proper inclusion of the specific processes that govern binary evolution now offers us a more physically sound description of the formation of low mass, helium white dwarfs (WDs). In particular, we now have the possibility of connecting stellar structure and evolution with the orbital parameters of the systems, allowing for a deeper comparison with observations.
From the results presented in the previous sections, it is clear that diffusion is far more important in determining the timescale of evolution of these stars than the mass transfer episodes during flash-induced RLOFs. This is so especially when the object reaches the final cooling track. We found that the timescales are almost insensitive to the occurrence of flash-induced RLOF episodes (see Fig. 9). This constitutes the main result of the present work.
ACKNOWLEDGMENTS
We thank our referee, Prof. Philipp Podsiadlowski, for comments and suggestions that allowed us to improve the clarity of the original version. OGB is supported by FONDAP Center for Astrophysics 15010003.

[Figure caption, partial] ... points as in Tables 1 and 3, respectively. Notice that, in both cases, the initial RLOF is very prolonged. The first episode happens when the star is still burning hydrogen at its core. At core exhaustion mass transfer ends, and when the star swells due to the outwards motion of the hydrogen shell burning, a new RLOF occurs. We refer to these two RLOFs together as the initial RLOF. In the lower and upper panels these events are almost the same, due to the fact that the effects of diffusion are barely noticeable at these early stages. The subsequent episodes are due to thermonuclear flashes. Notice that their duration is six orders of magnitude shorter.

Figure 3. The hydrogen abundance in the outermost layers of the primary star. Here we considered the hydrogen abundance at the first point in the grid, corresponding to 1 − Mr/M ≈ 10^{-8}. In the left panel we depict the results for models in which RLOFs are allowed, whereas the right panel depicts the case of models in which mass transfer driven by thermonuclearly induced RLOFs is neglected. Each curve is labeled as in Fig. 1. Notice the enormous differences in the behaviour of the outer layers in the cases with and without diffusion. However, abundances are barely affected by the allowance of mass transfer, as is clearly noticeable in view of the similarities of the plots in each panel. In the sequences corresponding to Case A, due to mixing, the mass transferred during each RLOF has a chemical composition corresponding to a minimum in hydrogen abundance. Such a composition is very similar to that of the plasma transferred in the models of Case C without diffusion. See the main text for further details.

Figure 5. The logarithm of the hydrogen mass fraction present in the star vs. time for the four cases of evolution presented in this paper, during the last thermonuclear flashes. Labels A, B, C, D refer to the evolutionary tracks A to D in Fig. 1. Notice that after thermonuclear flashes, models with diffusion have a lower hydrogen fraction. Also, as may be expected, the hydrogen mass fraction is lower for models in which RLOFs operate at these stages. Models without diffusion end their flash episodes with a higher hydrogen content, which is subsequently burnt out during the final cooling track. Notice the change in the vertical scale of this figure above and below the break in the vertical axis. See the main text for further details.

[Figure caption] The logarithm of the photon luminosity and of the nuclear luminosity fraction released during the last stages of evolution of the models considered in this paper. Labels A, B, C, D refer to the evolutionary tracks A to D in Fig. 1. Models without diffusion have a much larger nuclear activity at advanced evolutionary stages, which slows down the evolution in an appreciable way. Remarkably, the ages of these objects are largely determined by the allowance of diffusion, whereas considering or neglecting thermonuclearly induced RLOFs has a negligible effect on stellar ages.
|
2014-10-01T00:00:00.000Z
|
2004-04-19T00:00:00.000
|
{
"year": 2004,
"sha1": "f1269cbc887643d55343eef1159fd5b58d121217",
"oa_license": null,
"oa_url": "https://academic.oup.com/mnras/article-pdf/352/1/249/3187038/352-1-249.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "bc56d03ce6c0a4558788639e5b127980f169112c",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
219636044
|
pes2o/s2orc
|
v3-fos-license
|
On the conservation properties in multiple scale coupling and simulation for Darcy flow with hyperbolic-transport in complex flows
We present and discuss a novel approach to deal with conservation properties in the simulation of nonlinear complex porous media flows in the presence of: 1) multiscale heterogeneity structures appearing in the elliptic pressure-velocity equation and in the rock geology model, and 2) multiscale wave structures resulting from shock wave and rarefaction interactions in the nonlinear hyperbolic-transport model. For the pressure-velocity Darcy flow problem, we revisit a recent high-order, volumetric, residual-based Lagrange multipliers saddle point formulation to impose local mass conservation on convex polygons. We clarify and improve its conservation properties in applications. For the hyperbolic-transport problem we introduce a new locally conservative Lagrangian-Eulerian finite volume method. For the purpose of this work, we recast our method within the Crandall and Majda treatment of the stability and convergence properties of conservation-form, monotone difference schemes, in which the scheme converges to the physical weak solution satisfying the entropy condition. This multiscale coupling approach was applied to several nontrivial examples to show that we are computing qualitatively correct reference solutions. We combine these procedures for the simulation of the fundamental two-phase flow problem with a high-contrast multiscale porous medium, while recalling state-of-the-art paradigms on the notion of solution in related multiscale applications. This is a first step towards dealing with multiscale systems that are out of reach of traditional techniques. We provide robust numerical examples verifying the theory and illustrating the capabilities of the approach being presented.
1. Introduction. In this paper, we are concerned with modeling, simulation and numerical analysis of approximate solutions of multiscale nonlinear Partial Differential Equations (PDE) related to highly complex systems. The large number of papers published in recent years is indicative of the relevance of the foundations of the multiscale approach. It is a good measure of the breadth and of the vitality of the area, and therefore calls on the multiscale modeling and simulation research community for new ideas and innovative approaches. In this work we give an overview of recent approaches for the accurate and efficient simulation of complex porous media flows and we also present some new results. We summarize below the main aspects of our work: • We revisit a novel volumetric, locally conservative and residual-based Lagrange multipliers saddle point reformulation of high-order methods and present numerical results with realistic high-contrast multiscale coefficients. This is applied to a second-order elliptic problem (∇ · [−K(x)Λ(S)∇p] = q(x)) instead of a traditional first-order mixed formulation. We clarify and simplify the presentation of its conservative properties. • We introduce a new robust and accurate forward tracking Lagrangian-Eulerian scheme for hyperbolic problems. Our method is carefully designed to deal with multiscale wave structures resulting from shock wave interactions coming from abstract nonlinear hyperbolic conservation laws such as w_t + div F(t, x, w(t, x)) = 0 with fluxes of the form F(x, t, w) = v(x, t) f(w(t, x)). This formulation includes many problems of physical interest and, in particular, we consider the case of complex multiscale flow in porous media (scalar and system cases). • We improve the interpretation of the construction of the numerically stable Lagrangian-Eulerian no flow surface region in two space dimensions, previously presented and analyzed in [17] for one-dimensional balance and conservation laws. • We present a new approach with accurate multiscale resolution for two-phase flows.
The method is able to handle multiscale rock geology in the elliptic pressure-velocity system. Moreover, the multiscale wave structures arising from wave interactions in the hyperbolic-transport model seem to be properly simulated according to our numerical results. • We present numerical results with realistic high-contrast two-dimensional multiscale coefficients based on the 10th SPE Comparative Solution Project (SPE10). We address numerical issues of multiscale resolution and conservation properties. • We survey recent results on both novel deterministic and probabilistic multiscale modeling paradigms and on issues of mesh resolution inadequacy in multiscale complex systems. The new elliptic solver [14,15] is a general tool for imposing local conservation on high-order methods in order to deal with multiscale permeabilities. In particular, it is also applicable to the Generalized Multiscale Finite Element Method (GMsFEM); see [16,74,119,75,118] and references therein. On the other hand, the new Lagrangian-Eulerian method seems to be a promising general approach to capture nonlinear wave interactions linked to multiscale behavior in a wide range of models and, in particular, for systems (see [17,18,19,20,21,22]). Numerical results show that our combined multiscale approach provides an accurate and robust procedure to study phenomena which couple distinct length or time scales. Our novel hyperbolic Lagrangian-Eulerian solver circumvents the use of adaptive meshing and/or special mesh generation. Indeed, the new method is also free of Riemann solvers. For the Darcy flow, the method captures fine-scale effects using multiscale finite element techniques. Finally, it is worth mentioning that for the purpose of this work we use Cartesian grids. Thus convergence and error analysis reduces essentially to a one-dimensional problem and retains the convergence of the approximate solutions to the entropy weak solution, recalling [64,65,48,57]. This is to say that our approach fits comfortably within the classical theory. On the other hand, we provide convergence and error analysis on triangular grids for hyperbolic conservation laws in a forthcoming work [22]. Moreover, the results in [17] strongly suggest that the no flow surface region encapsulates the domain of dependence even in the case of hyperbolic systems.
A reliable multiscale prediction of oil-water two-phase flow model through complex porous media requires the development and validation of coupling models for flow across length scales related to elliptic Darcy-pressure-velocity and hyperbolic saturation-conservation problems. Therefore, in this work we introduce a novel computational approach for multiscale computing of the oil-water flow model system.
Let Ω be a domain in R². For simplicity of presentation, we consider two-phase flow in a viscous-dominated regime, and we neglect the effects of gravity, compressibility and capillarity and set the porosity equal to a constant, in which case it has been scaled out by a change of the time variable. Thus, the fundamental differential multiscale system used to describe water-oil, incompressible, immiscible displacement is given by the saturation and pressure equations (see, e.g., [52]). Here, S is the water saturation (water and oil saturations sum up to one), K(x) is the absolute permeability, v is the total Darcy velocity, p is the pressure, Λ is the total mobility and F is the fractional flow of water. We also have F = Λ_w/Λ, with Λ_i = k_i/µ_i the phase mobility, k_i the relative permeability and µ_i the viscosity of phase i, respectively. We end up with a multiscale system of two coupled nonlinear partial differential equations (1.1)-(1.2) exhibiting multiscale features on both sides: the complex multiscale heterogeneity structures from rock geology appearing in the elliptic pressure-velocity model (1.2), as well as the multiscale wave structures resulting from shock wave interactions in the hyperbolic-transport model (1.1). For concreteness, we complete equations (1.1) and (1.2) with appropriate initial and boundary conditions aimed at simulations in a slab geometry domain Ω and a time interval T = [t_0, t_f]. We consider the coupled system (1.1)-(1.2) in a two-dimensional rectangular slab domain Ω = (0, L_x) × (0, L_y), with boundary conditions in which n is the outward-pointing unit normal vector to ∂Ω, and a uniform initial condition. The initial-boundary conditions (1.3)-(1.4) simulate a left-to-right waterflood. Water is injected uniformly (at a constant rate q) through the left vertical boundary Γ_inlet (x = 0) of Ω for all simulation times t ∈ [t_0, t_f], no flow conditions are imposed along the horizontal boundaries Γ_no-flux (y = 0, L_y), and fluid is produced from a well kept at constant (zero) pressure at the right vertical boundary Γ_outlet (x = L_x). Equations (1.1)-(1.4) can be viewed as a multiscale water-oil Riemann-Goursat water injection problem with a discontinuous flux function, closely related to [5] (see [109,104,103,102,101]), in which the hyperbolic scheme should be able to handle the main difficulty of the multiscale problem, which consists in taking into account the jump discontinuities of the flux; see also [64,65,12,8,6,7] and references cited therein. Another motivation is the occurrence of models such as (1.1)-(1.4) in numerous engineering problems, mainly in the case of systems [8,6,7,69,70,33,34,35,95,12]. In addition, there are a number of relevant prototype models of hyperbolic conservation laws with discontinuous flux functions in the oil trapping phenomenon [29,9,44,87,97,115], in a Whitham model of car traffic flow on a highway [81,93] and in a model of continuous sedimentation in ideal clarifier-thickener units [40]; see also [38,88]. Hyperbolic conservation laws with discontinuous flux functions also arise in sedimentation processes [59], in radar shape-from-shading problems [100] and also as building blocks in numerical methods for Hamilton-Jacobi equations [89].
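For readers who want to experiment with the transport nonlinearity, the following minimal sketch evaluates the total mobility and the water fractional flow F = Λ_w/(Λ_w + Λ_o). The quadratic (Corey-type) relative permeabilities and the viscosity ratio used here are illustrative assumptions, not constitutive choices taken from this paper.

```python
# Illustrative evaluation of phase mobilities Lambda_i = k_i / mu_i and the
# water fractional flow F = Lambda_w / (Lambda_w + Lambda_o), using quadratic
# (Corey-type) relative permeabilities k_w = S^2, k_o = (1 - S)^2. These
# constitutive choices and viscosities are assumptions made for the sketch.

def mobilities(S, mu_w=1.0, mu_o=4.0):
    lam_w = S**2 / mu_w
    lam_o = (1.0 - S)**2 / mu_o
    return lam_w, lam_o

def fractional_flow(S, mu_w=1.0, mu_o=4.0):
    lam_w, lam_o = mobilities(S, mu_w, mu_o)
    return lam_w / (lam_w + lam_o)

if __name__ == "__main__":
    for S in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(S, fractional_flow(S))
```

The resulting S-shaped flux is the source of the nonconvexity responsible for the combined shock-rarefaction (Buckley-Leverett) wave structure discussed throughout the paper.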
Hyperbolic conservation laws of the form (1.1) with discontinuous (in (t, x)) flux function H(t, x, S) ≡ {f (S) [−K(x)Λ(S)∇p]} attracted much attention in the recent past, because of the difficulties of adaptation of the classical Kruzhkov approach developed for the smooth case, due in part to the presence of several different entropy solutions with same initial data. In the context of Buckley-Leverett equations as in (1.2)-(1.4), each notion of solution is uniquely determined by the choice of a connection, which is made unique at the interface by a proper choice of an entropy solution from many possible classes of entropy solutions, [24,29,5]. Nonclassical solutions appear for Buckley-Leverett models with gravity and discontinuous flux functions [44,29] as well as for three-phase flow problems with continuous flux functions, where recent and very relevant results can be found in [33,34,35,95] for a solution of Riemann problems and in [69,70] concerning well-posedness. In [27], a theory of L 1 -dissipative solvers for scalar conservation laws with discontinuous flux was proposed, which is based on the corresponding L 1 contractive semigroups, some of which reflect different multiscale physical applications. In [27,28,58] a number of the existing admissibility (or entropy) conditions are revisited and the so-called germs that underly these conditions are identified. The seminal survey article [28] (see also [107,108,113,41,111,90,47,88]) of recent developments helps to better understand the issue of admissibility of solutions in relation with specific modeling assumptions. By looking at model problem (1.1)-(1.4) as a generalized problem linked to systems of conservation laws in several dimensions, we recall that some authors has advocated entropy measure-valued solutions, first proposed by DiPerna [60,61], as the appropriate solution paradigm for systems of conservation laws; see [67,68] and references cited therein. In particular, these authors have presented some numerical evidence that state-of-the-art numerical schemes may not converge to an entropy solution of systems of conservation laws as the mesh is refined. Accordingly to [67,68] this has been attributed to the emergence of turbulence-like structures at smaller and smaller scales upon mesh refinement. See also [50,51,94].
In [68], the authors point out that intermittency is widely accepted to be a characteristic of turbulent flows [72]. It is believed that intermittency stems from the fact that turbulent solutions do not scale exactly as in the Kolmogorov hypothesis. On the other hand, in [50] the authors offer a unified framework in which it is possible to establish mathematical existence theories, as well as a very innovative idea for the interpretation of numerical solutions through the identification of a function space in which convergence should take place. In a more general setting, the issue of multiscale modeling and simulation of the chaotic mixing of distinct fluids has been addressed in [50,51,94]. These works mention that acceleration-driven turbulent mixing is a classical hydrodynamic instability. See also [73] for the case of two-phase flows.
We also mention the very recent works [50,51,94,67,68] on nonlinear multiscale problems like (1.1)-(1.4), in which multiscale turbulent-like behavior is better resolved under mesh refinement. In fact, structures at smaller and smaller scales are formed as the mesh is refined to account for the complex heterogeneity structures from rock geology appearing in the elliptic model as well as the multiscale wave structures resulting from shock wave interactions in the underlying hyperbolic problem. This is done in such a way that solutions satisfy a proper family of entropy inequalities [5]. Results in [92,54] have demonstrated that entropy solutions may not be unique. From the standpoint of both rigorous mathematical analysis and numerical analysis, genuine difficulties stem from the lack of regularity of solutions.
A better comprehension of multiscale fluid flow in subsurface is very hard, challenging and undoubtedly still of current events. Multiscale sciences cuts across all of science from fluid dynamics to biology, from meteorology to material science and from physics to chemistry among many other directions. Multiscale issues are also central in subsurface flows ranging from complex geologic media to several time scales linked to the compositional and black-oil modeling fluid flow in oil reservoirs as well as several scale aspects of groundwater flow and related transport systems. Understanding the multiscale properties of subsurface flows is a major problem of modern approaches predicting groundwater level changes and predictive technologies in petroleum reservoir. In this regard, many innovative techniques have been reported as such local-global upscaling approach [63,78,96,53,117,62], multiscale methods [1,32,82,84,91,98,99,79,30], model order reduction techniques [71,83,45,110,55], Twoscale homogenization theory [66,25,26,42,86,112,31]; see papers for a survey on recent development of multiscale computing and modeling approach [110,2,114,76,77,116,80].
Despite the efforts of many researchers, no universal or unified multiscale modeling and methods for fluid flow through naturally complex geologic reservoirs has been achieved so far (see, e.g., [2,116,46,105,8,110,83,71,98,32,62,117,63]). Under appropriate simplification assumptions compositional and black-oil models can be further simplified for the fundamental multiscale two-phase immiscible displacement with no mass transfer between phases (this is often appropriate in models describing displacements at the length scales associated with reservoir simulation grid blocks, see, e.g., [63]). In addition, even for the two-phase case, the multiscale modeling is very challenging in the presence fractures and barriers for flow in porous medium and their impact on the closure and constitutive relations as such multiscale relative permabilities and pressure difference (capillary pressure); see [29] and references cited therein for an interesting study on vanishing capillarity solutions of Buckley-Leverett equation with gravity in two-rock's medium (see also [105,46,43] for multiscale modeling of Richards's equation and two-phase under non-equilibrium effects). However, in the case of scalar wateroil two-phase model a global-pressure formulation and Kirchhoffs transform are not adequate when considering non-equilibrium effects as such hysteresis in the relative permeabilities [12] and in the capillary pressure [36] the issue of global pressure formulation for three-phase is not straightforward (see [8,12,13] and reference cited therein for a detailed multiscale mod-eling for three-phase flow problem). Moreover, the case of discontinuous capillary pressure induced by mulsitcale modeling of fractures and barriers in the three-phase flow with gravity is hard and very intricate [8,11,13]. The degeneracy for three-phase and two-phase flows is also delicate [10,106]. Altogether the fundamental multiscale water-oil model (1.1)-(1.4) is also useful both for describing some real cases (e.g., dead-oil systems) and for developing and studying numerical solution procedures. In this works we consider the multiscale approximation associated with reservoir multiscale simulation along with coupling techniques for elliptic (Darcy-pressure-velocity) and hyperbolic (conservation-saturation-transport) problems as pursued in this work.
The paper is organized as follows. In Section 2, we construct an embedded high-order model for second-order elliptic problems with local and global mass conservation, clarifying and simplifying the presentation of its conservative properties along the lines introduced in [14,75,3,16]. Next, in Section 3, we construct a new locally conservative Lagrangian-Eulerian method for hyperbolic transport, with focus on a novel approach to the conservation properties of the no flow surface region for hyperbolic conservation laws. In Section 4, we present and discuss the coupling of the conservative finite element method for the Darcy flow problem with the locally conservative Lagrangian-Eulerian method for hyperbolic transport, along with a set of representative computational results. In Section 5, a summary with concluding remarks and perspectives for future work is given.
2. Elliptic problem and mass conservation. Many practical porous media problems (scalar and systems) lead to the numerical approximation of Buckley-Leverett type models given by highly nonlinear multiscale problems like (1.1)-(1.4). For the approximation of the pressure field given by the pressure-velocity Darcy-elliptic fundamental multiscale problem (1.2), we use the method designed and analyzed in [14,75]; see also [3,16,74]. We now recall this high-order conservative FEM formulation, clarifying and simplifying the presentation of its conservative properties, with focus on the conservation properties in the multiple scale coupling and simulation of Darcy flow with hyperbolic transport in complex flows. A general interpretation of this methodology goes as follows: given a Ritz approximation of the pressure and a computational procedure to obtain fluxes, we can then formulate a minimization problem constrained to local flux conservation in order to obtain approximate solutions that satisfy local conservation properties in the form of average fluxes on local regions. See [75,3,16].
2.1. Imposing local mass conservation. We follow the presentation in [14,75]. Denote by H¹_D(Ω) the subspace of H¹(Ω) whose functions vanish on the Dirichlet portion of the boundary; for u, v ∈ H¹_D(Ω), the bilinear form a is defined by a(u, v) = ∫_Ω Λ ∇u · ∇v dx.
Problem (1.2) is equivalent to the minimization problem: find the minimizer over H¹_D(Ω) of the associated quadratic energy functional J(v) = (1/2) a(v, v) − ∫_Ω q v dx.
In the IMPES approach, for each time step the mobility can be thought of as a function of position, and it reads simply as Λ(x). Thus, in order to consider a general formulation for porous media applications, we let Λ in Problem (1.2) be a 2 × 2 matrix with entries in L∞(Ω) that is almost everywhere symmetric positive definite, with eigenvalues bounded uniformly from below by a positive constant.
In order to deal with mass conservation properties we follow the method introduced in [14]. Let T_h = {τ_j}, j = 1, ..., N_h, be a primal mesh made of elements that are triangles or squares; here N_h is the number of elements of the triangulation. We also have a dual mesh T*_h whose elements are called control volumes, and N*_h is the number of such volumes. In general, one control volume V_i is selected per vertex of the primal mesh not in ∂Ω_D. In case |∂Ω_D| = 0, N*_h is the total number of vertices of the primal triangulation, including the vertices on ∂Ω. Figure 1 illustrates primal and dual meshes made of squares where ∂Ω_D = ∂Ω; in this case N*_h is equal to the number of interior vertices of the primal triangulation. If q ∈ L², solving (1.2) is equivalent to finding p in the subset of H¹_{div,Λ}(Ω) consisting of functions that satisfy the mass conservation restrictions over the control volumes. Let M_h = Q_0(T*_h) be the space of piecewise constant functions on the dual mesh T*_h. The Lagrange multiplier formulation of problem (2.4) can then be written as a min-max problem for p ∈ H¹_{div,Λ}(Ω) and λ ∈ M_h, involving the total flux bilinear form. The first-order conditions of this min-max problem give the following saddle point problem (2.7): find p ∈ H¹_{div,Λ}(Ω) and λ ∈ M_h that solve the corresponding stationarity equations. For the analysis of this formulation see [14]. Recall that we have introduced a primal mesh made of elements that are triangles or squares, with N_h the number of elements of the triangulation, and a dual mesh whose elements are called control volumes. Figure 1 illustrates a primal and a dual mesh made of squares. See for instance [37,75,3]. Let us consider P_h = Q_r(τ_h), the space of continuous polynomial functions of degree r on each element of the primal mesh, and let P_h^0 be the space of functions in P_h that vanish on ∂Ω. Let M_h = Q_0(τ*_h) be the space of piecewise constant functions on the dual mesh τ*_h. For more details on these constructions see [14,75]. The discrete version of (2.7) is to find p_h ∈ P_h^0 and λ ∈ M_h satisfying the corresponding discrete saddle point equations. Let {ϕ_i} be the standard basis of P_h. We define the matrix A with entries A_{ij} = a(ϕ_i, ϕ_j); note that A is the finite element stiffness matrix corresponding to the finite element space P_h. We also introduce the matrix B whose entries involve the fluxes of the basis functions across the control volume boundaries, B_{ij} = ∫_{∂V_i} Λ ∇ϕ_j · n ds.
With this notation, the matrix form of the discrete saddle point problem is the block system coupling A and B with its transpose, [A Bᵀ; B 0][p; λ] = [f; g], where the vectors f and g collect the source term q tested against the basis functions of P_h and integrated over the control volumes, respectively. For the analysis of the continuous and discrete problems and the corresponding error estimates see [14]. As an important remark we mention the optimality of the approximation error in both the H¹ and L² norms. From the analysis in [3,16] we know that u_h is an optimal approximation in the H¹ semi-norm (which becomes a norm when restricted to the appropriate subspace). We also found in [3] that, in each control volume, u_h + λ_h offers an optimal approximation in the L² norm. We recall that in many approaches imposing conservation properties on second order formulations leads to a non-optimal L² approximation. For more details on this formulation in porous media applications, see [2,75,3,14,15].
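As an illustration of how such a block system can be handled in practice, the following minimal sketch assembles and solves a generic saddle point system of this shape with SciPy. The tiny matrices A and B below are placeholders, not the actual HOCFEM matrices, which are assumed to be assembled elsewhere.

```python
import numpy as np
from scipy.sparse import bmat, csr_matrix
from scipy.sparse.linalg import spsolve

# Generic saddle point block system
#   [ A  B^T ] [ p      ]   [ f ]
#   [ B  0   ] [ lambda ] = [ g ],
# with A a (symmetric positive definite) stiffness matrix and B a constraint
# matrix of full row rank. Placeholder data keeps the snippet self-contained.

A = csr_matrix(np.array([[4.0, -1.0], [-1.0, 4.0]]))  # placeholder stiffness matrix
B = csr_matrix(np.array([[1.0, 1.0]]))                 # placeholder constraint matrix
f = np.array([1.0, 2.0])
g = np.array([0.5])

K = bmat([[A, B.T], [B, None]], format="csr")          # saddle point block matrix
rhs = np.concatenate([f, g])
sol = spsolve(K, rhs)
p, lam = sol[:A.shape[0]], sol[A.shape[0]:]
print("p =", p, " lambda =", lam)
```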
Numerical tests.
Let us illustrate the two main features of the HOCFEM method, namely its high-order approximation rate and its conservation of mass.
2.2.1. Homogeneous medium. We consider Equation (1.2) with Ω = [0, 1] × [0, 1] in a homogeneous medium (Λ(x) ≡ 1), to be discretized on a regular mesh made of 2^M × 2^M squares. The dual mesh is constructed by joining the centers of the elements of the primal mesh as in Figure 1. We consider homogeneous Dirichlet boundary conditions on ∂Ω and construct example (2.14) by fixing the solution p and computing a source term q, so that we can compare the numerical solution with the exact one. The source term of (2.14) is q(x, y) = 2π(cos(πx) sin(πy) − 3 sin(πx) cos(πy) + π sin(πx) sin(πy)(−x + 3y)), which corresponds to the exact solution p(x, y) = (3y − x) sin(πx) sin(πy). We apply HOCFEM to example (2.14) with Q_1, ..., Q_5 finite element spaces and compute HOCFEM and FEM solutions. We estimate the L² and H¹ errors and plot them in the log-log graphics shown in Figures 2 and 3.
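As a sanity check on the manufactured pair (p, q) quoted above, the following short sketch verifies symbolically that −Δp = q on the unit square (SymPy is assumed to be available); p also vanishes on ∂Ω, as required by the homogeneous Dirichlet conditions.

```python
import sympy as sp

# Symbolic check that p(x, y) = (3*y - x) * sin(pi*x) * sin(pi*y) satisfies
# -laplacian(p) = q for the source term q of (2.14) in the homogeneous
# medium (Lambda = 1) on the unit square.

x, y = sp.symbols("x y")
p = (3*y - x) * sp.sin(sp.pi*x) * sp.sin(sp.pi*y)
q = 2*sp.pi*(sp.cos(sp.pi*x)*sp.sin(sp.pi*y)
             - 3*sp.sin(sp.pi*x)*sp.cos(sp.pi*y)
             + sp.pi*sp.sin(sp.pi*x)*sp.sin(sp.pi*y)*(-x + 3*y))

residual = sp.simplify(-sp.diff(p, x, 2) - sp.diff(p, y, 2) - q)
print(residual)  # prints 0
```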
Numerical convergence is observed in the H¹ norm, with the same error variation rate as FEM. The error p − p_h in the L² norm is not optimal, but with the correction p − (p_h + λ_h) (in each control volume) we recover the FEM convergence rate in this norm for HOCFEM. We compute the conservation of energy indicator (2.15) and the conservation of mass indicator (2.16), respectively. As we can see in Table 1, conservation of energy remains similar for both methods, while HOCFEM exhibits much better conservation of mass than classical FEM.

[Figure captions, partial] Figures 2 and 3: errors for the numerical solution of Problem (1.2) with data (2.14), using Q_1, Q_2, Q_3, Q_4 and Q_5 bases through a mesh refinement.
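The precise expressions (2.15)-(2.16) are given in the original references. Purely as an illustration of what a local mass-conservation check measures, the sketch below computes, for a simple finite-difference surrogate on a uniform grid, the imbalance between the outward fluxes of −∇u across each cell's faces and the integrated source. It is a hypothetical stand-in, not the HOCFEM indicator itself.

```python
import numpy as np

# Surrogate local mass-conservation check on a uniform (n+1) x (n+1) nodal
# grid of the unit square: for each interior cell, sum the outward fluxes of
# -grad(u) across its four faces and compare with the integrated source q.

def local_mass_residuals(u, q, h):
    """Flux imbalance |sum of face fluxes - h^2 * q| on interior cells."""
    flux_e = -(u[2:, 1:-1] - u[1:-1, 1:-1]) / h
    flux_w = -(u[0:-2, 1:-1] - u[1:-1, 1:-1]) / h
    flux_n = -(u[1:-1, 2:] - u[1:-1, 1:-1]) / h
    flux_s = -(u[1:-1, 0:-2] - u[1:-1, 1:-1]) / h
    net_outflow = h * (flux_e + flux_w + flux_n + flux_s)
    return np.abs(net_outflow - h**2 * q[1:-1, 1:-1])

n = 32
h = 1.0 / n
u = np.random.rand(n + 1, n + 1)   # placeholder discrete solution
q = np.ones((n + 1, n + 1))        # placeholder source
print("max local imbalance:", local_mass_residuals(u, q, h).max())
```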
Heterogeneous medium.
Let us move to a heterogeneous medium with high-contrast coefficients. The medium to be considered is the last 64 × 64 block of the geological SPE10 porous medium taken from [56], shown in Figure 4. This is a widely used heterogeneous porous medium for simulations (see for example [56]). We perform a numerical experiment to study convergence and conservation of energy and mass of HOCFEM in such a realistic heterogeneous medium. Let us consider the model problem (1.2) on Ω = [0, 1] × [0, 1] with a constant forcing term and homogeneous Dirichlet boundary conditions on ∂Ω. The mobility coefficient Λ is taken from the SPE10 medium as described before. We compute HOCFEM approximations using Q_1 and Q_2 bases over 3 square meshes of size h = 2^{-M} with M = 6, 7, 8, and compute errors in the L² and H¹ norms against a reference solution calculated using a Q_3 basis on the finest mesh. Figures 5 and 6 show the L² and H¹ errors computed for both solutions using Q_1 and Q_2. We observe that the error variation rate is similar for both solutions. This behavior of the error in the heterogeneous medium is different from that in the homogeneous medium, where the rates were proportional to the degree of the polynomials used in the basis.
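For completeness, a minimal sketch of the kind of discrete error measures used in such a mesh refinement study (a grid-sampled L² norm and an H¹ seminorm of the difference with a reference solution) is shown below; the arrays are placeholders, and the actual solutions are assumed to come from the HOCFEM solver.

```python
import numpy as np

# Discrete L2 norm and H1 seminorm of the error u - u_ref, both sampled on the
# same uniform nodal grid of the unit square with spacing h. Placeholder data
# only; real inputs would be the computed and reference solutions.

def l2_error(u, u_ref, h):
    return np.sqrt(h * h * np.sum((u - u_ref) ** 2))

def h1_semi_error(u, u_ref, h):
    e = u - u_ref
    ex = np.diff(e, axis=0) / h
    ey = np.diff(e, axis=1) / h
    return np.sqrt(h * h * (np.sum(ex ** 2) + np.sum(ey ** 2)))

h = 1.0 / 64
u = np.zeros((65, 65))       # placeholder computed solution
u_ref = np.zeros((65, 65))   # placeholder reference solution
print(l2_error(u, u_ref, h), h1_semi_error(u, u_ref, h))
```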
We also compute the approximate solution u_h^HOCFEM by solving the system (2.12) on a 256 × 256 computational mesh and estimate the conservation of energy and mass indicators defined in (2.15) and (2.16). Table 2 shows both the local mass and the energy conservation indicators, for our high order HOCFEM formulation (E(u_HOCFEM) and J(u_HOCFEM)) and for classical FEM (E(u_FEM) and J(u_FEM)). From Table 2 we see that conservation of the global energy does not change from FEM to HOCFEM, while conservation of mass is much better with our new formulation.
3. Conservation properties of the no flow surface region for hyperbolic conservation laws. The aim of this section is to present an extension of the Lagrangian-Eulerian scheme (see [17,18,19,20,21,22]) to hyperbolic conservation laws in two space dimensions, with initial conditions coming from abstract nonlinear problems of hyperbolic conservation laws. We can also consider problems of physical interest in fluid mechanics, such as the multiscale scalar and system porous media flow problems treated in this work.
Table 2: Energy minimization and conservation indicators for the numerical solution of Problem (1.2) with data (2.14) on a fixed 64 × 64 mesh, using bases Q_1, Q_2, Q_3, Q_4, Q_5 and Q_6.

We mention that our novel hyperbolic Lagrangian-Eulerian solver (in its simplest form) can be viewed as a monotone scheme (see Section 3.4). For the coupling with the Darcy flow, the transport method captures fine-scale effects using a (conservative) fine-grid finite element technique combined with the Lagrangian-Eulerian scheme. For the purpose of this work we use Cartesian grids, since in the case of a monotone scheme the convergence and error analysis reduces (essentially) to a one-dimensional problem and retains the convergence results and the approximation of the entropy weak solution, recalling [64,65,48,57].
We improve the interpretation of the construction of the numerically stable Lagrangian-Eulerian no flow surface region in two space dimensions, previously presented and analyzed in [17] for one-dimensional balance and conservation laws. It turns out that our monotone Lagrangian-Eulerian scheme is a building block for the construction of a novel class of Lagrangian-Eulerian shock-capturing schemes for first-order hyperbolic problems. The early monotone versions of the Lagrangian-Eulerian approach have been employed successfully in a number of very non-trivial problems and have also been developed theoretically [17,20,18,19,21,22], in connection with several transport models such as Burgers' equation with Greenberg-LeRoux's and Riccati's source terms, the shallow-water system, Broadwell's rarefied gas dynamics model, the Baer-Nunziato system, and linear, non-linear convex and non-linear non-convex 2D scalar conservation laws (see [18,17]). It is worth mentioning that the Lagrangian-Eulerian framework is able to compute qualitatively correct (entropy) solutions involving intricate non-linear interactions of rarefaction and shock waves. It is of significance to mention that the scheme is able to handle the resonance effect associated with a nonclassical transitional shock in a 2×2 three-phase flow water-oil-gas system [18,19] and an intricate shock structure linked to a 5 × 5 isentropic Baer-Nunziato model (see [17]). The Lagrangian-Eulerian scheme also handles properly the sonic rarefaction linked to Burgers' equation, avoiding the typical small (and unphysical) discontinuity jump within the rarefaction structure; such a discontinuity in the solution is unphysical, and thus bears no mathematical relation with an entropy violating shock. Indeed, our Lagrangian-Eulerian scheme does not produce the well-known spurious entropy glitch effect in the sonic rarefaction (as is the case for the Rusanov and Godunov monotone schemes). In addition, our first-order monotone Lagrangian-Eulerian scheme is less diffusive than the classical Lax-Friedrichs scheme, but retains robustness, and it is simple to implement and efficient for numerical computing [4].
A key hallmark of our Lagrangian-Eulerian (monotone) method is the dynamic forward tracking of the no-flow region (per time step). This is a considerable improvement compared with the classical backward tracking in time of the characteristic curves over each time step interval, which is based on the strong form of the problem. Indeed, in the case of systems and multi-D problems, we can say that backward tracking is not well understood.
Our new method can handle, with great simplicity, nontrivial scalar and system problems in 1D and multi-D [17,18]. Another key hallmark of our Lagrangian-Eulerian (monotone) method is the flux separation strategy and its impact on the balanced (multiple scale) discretization between the first-order approximation of the hyperbolic flux and the source term, so as to take into account nonlinear wave interactions while preserving conservation properties. For instance, in [17] the numerical tests show that the discretizations resulting from the flux separation strategy, when applied to the 2 by 2 shallow-water system and the 5 by 5 Baer-Nunziato system, seem to be of good quality. Moreover, such a strategy seems to be very appropriate to deal with convex and nonlinear non-convex 2D scalar conservation laws.
With respect to the theory of monotone schemes, the no flow region (see Figure 8) is the control volume where the (local) wave interaction (always in the fine mesh of any multiscale method approach) takes place. On the other hand, in light of modern research (see [54,39] and [17]), the no flow region is a space-time cutoff to account for the complex and intricate nonlinear wave group interaction within the control volume per time step in the overall simulation ([13,33,95,17,18]). In computing practice, the no flow region parallels the CFL stability criterion associated with the space-time discretization of many numerical methods.
Therefore, the monotone Lagrangian-Eulerian approach is an interesting novel framework for hyperbolic conservation laws and multiscale transport flow models.
3.1. Lagrangian-Eulerian technique with conservation properties. We discuss our new Lagrangian-Eulerian technique with conservation properties for the approximation of the 2D initial value problem (3.1) for hyperbolic conservation laws, where Ω is an interior square domain in R², with boundary ∂Ω and T = t_f > 0.
For the finite dimensional function spaces we introduce the following standard notation. The space region (R × R) × R = {(x, y, t) : −∞ < x, y < ∞, t > 0} is replaced by the lattice (Z × Z) × N = {(i, j, n) : i, j = 0, ±1, ±2, . . . ; n = 0, 1, 2, · · · }. We consider the sequence U^n = (U^n)_{i,j}, i, j ∈ Z, for n = 0, 1, 2, ..., for a given grid size ∆x, ∆y > 0 and time levels t^n. The pair (x^n_i, y^n_j) denotes the center of the (i, j)-cell, i, j ∈ Z. From now on, for short, when there is no chance of misunderstanding, the limits of integration will indicate the time level at which the integration takes place with respect to the pair (x_i, y_j) of the (i, j)-cell, i, j ∈ Z. In each cell we consider the integrals of u(x, y, t^n) dx dy and u(x, y, t^{n+1}) dx dy, which define the cell averages (3.3) and (3.4).

Figure 7: Illustration of the notation related to the (i, j)-cell.
It is worthy to mention that the approximation value U (x i , y j , t n+1 ) is performed over the region R n+1 i,j ; see the right picture in Figure 10 as well as Figure 11, for an illustration of the projection procedure over original grid in control volumes. Note that in (3.3) and (3.4), the quantity u(x, y, t) is a solution of (3.1). The discrete counterpart of the space L p (R 2 ) is l p ∆x,∆y , the space of sequences U = (U i,j ), with i, j ∈ Z, with norm given by To build the new two dimensional scheme we extend the concept of no flow surface region D n ij (see [17,18]) to three dimensional variables (x,y and t) as D n i,j ⊂ R 3 , where i and j refer to (x i , y j ) and n refers to time state t n . The border of the control volume D n i,j is represented by ∂D n i,j = R n i,j ∪ S n i,j ∪ R n+1 i.j where (see Figure 7), in R 2 is the entry of the no flow surface region is the exit of the no flow sufarce region, and • S n i,j , in R 3 , is the lateral surface of the no flow surface region. We consider now (3.1) in the generalized space-time divergence form, Integration over the control volume and the use of the divergence theorem gives, The normal vector in the entry of the no flow surface region, R n i,j , is [−1 0 0] T and the vector normal in the exit of the no flow surface region, R n+1 i,j , is [1 0 0] T . Then, the right side of (3.6) can be written as We assume there is not flow through the surface S n i,j (that is, S n i,j is impervious; this is natural in many applications [17,18,19,20,22]). Therefore surface integral of S n i,j is zero, i.e., which we call conservation identity. The numerical approximations U n i,j and U n+1 i,j appearing in (3.3) and (3.4), respectively, can be defined from equation (3.8) with the desired conservation properties and reads, On other hand, from (3.5) and by the natural conservation properties of the no flow surface region for hyperbolic conservation law it follows (3.10) and σ n i− 1 2 ,j (t) ∈ S i,j . Analogously, we can define parameterized curves correspoding to other sides of R n i,j such that σ n in the respective center of the side of R n i,j (see Figure 8). Construction of a lateral curve of the no flow surface region. First, such construction is not unique. Actually, this might lead to a family of methods; this interesting issue will not be addressed in this work. Make fixed the point σ n i− 1 2 ,j (t n ). The normal vector to the corresponding side of R n i,j is the vector [t n , −1, 0] T . Moreover, the normal vector on the curve σ n i− 1 2 ,j (t) at t n ≤ t < t n1 is a orthogonal vector to vector (σ n i− 1 2 ,j ) (t) = [1, σ 1 (t), 0]; see right frame in Figure 8. Indeed, the vector at point σ n i− 1 2 ,j (t) may be calculated as n = −1, 1 σ 1 (t) , 0 and follows: , y j ). Finally, since σ n i− 1 2 ,j (t) is in the plane y = y j , then σ n i− 1 2 ,j (t) = (t, σ 1 (t), y j ). 
We point out that a reasoning analogous to (3.11)-(3.12) leads to the parametrized curves σ^n_{γ_1}(t), whose components α_2(t) and θ_2(t) must satisfy the corresponding exact conditions. Remark: solutions of the generalized ODE system (3.14), used to compute σ^n_{i−1/2,j}(t) in Eq. (3.15), are determined by the differential equation on the edge of the no-flow surface region (see Figure 9 and Figure 10). We choose the simplest approximation of system (3.13): setting U^n_{i,j} at t = t_n, we obtain the local edge velocities f_{i−1/2,j}, f_{i+1/2,j}, g_{i,j−1/2} and g_{i,j+1/2} of (3.14). Thus, the curves of the no-flow surface region for t_n < t < t_{n+1} are approximated as in (3.15), and the corresponding approximation of the volume D^n_{i,j} follows (see the right frame in Figure 10). The new conservative Lagrangian-Eulerian scheme is then given by two very simple formulas: STEP I (Lagrangian Evolution, Eq. (3.16); see Figure 10, with h ≡ ∆y = ∆x) and STEP II (Eulerian Projection, Eq. (3.17); see Figure 11).

3.2. Improving numerically the solution of the generalized ODE system (3.14) with conservation and robustness. Solutions σ^n_{i−1/2,j}(t) of the differential system can be obtained using the approximations (3.18); additional and even higher-order approximations are also acceptable for (3.14). As in [23], the piecewise constant numerical data are reconstructed into a piecewise linear approximation through MUSCL-type interpolants. For the numerical derivative (1/∆x) u'_{i,j} there are several choices of slope limiters in the scalar case; the book [93] gives a good compilation of the many possible options. Finally, in order to show the flexibility of the reconstruction, we also use the nonlinear Lagrange polynomial in U^n_{i−1,j}, U^n_{i,j−1}, U^n_{i,j}, U^n_{i,j+1} and U^n_{i+1,j}, and equation (3.16) is modified accordingly.

3.3. A Lagrangian-Eulerian CFL stability constraint. Next, from the definition of f_{i−1/2,j}, f_{i+1/2,j}, g_{i,j−1/2} and g_{i,j+1/2} in (3.14), we obtain the resulting coefficients of the Eulerian projection formula (3.17) as follows. Consider the vectors built from

C_xl = 0.5 (1 + sign(f_{i−1/2,j})) f_{i−1/2,j} ∆t,  C_xr = 0.5 (1 − sign(f_{i+1/2,j})) f_{i+1/2,j} ∆t,

and

C_yl = 0.5 (1 + sign(g_{i,j−1/2})) g_{i,j−1/2} ∆t,  C_yr = 0.5 (1 − sign(g_{i,j+1/2})) g_{i,j+1/2} ∆t.
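These splitting coefficients translate directly into code; the sketch below is a literal transcription of the formulas above, evaluated from given edge velocities and a time step (NumPy broadcasting over arrays of edges is assumed).

```python
import numpy as np

def projection_coefficients(f_l, f_r, g_l, g_r, dt):
    """Upwind-split projection coefficients from local edge velocities:
    C_xl = 0.5*(1 + sign(f_{i-1/2,j}))*f_{i-1/2,j}*dt, and analogously for
    C_xr, C_yl, C_yr. Inputs may be scalars or NumPy arrays."""
    C_xl = 0.5 * (1.0 + np.sign(f_l)) * f_l * dt
    C_xr = 0.5 * (1.0 - np.sign(f_r)) * f_r * dt
    C_yl = 0.5 * (1.0 + np.sign(g_l)) * g_l * dt
    C_yr = 0.5 * (1.0 - np.sign(g_r)) * g_r * dt
    return C_xl, C_xr, C_yl, C_yr
```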
We define the coefficients of the projection formula (3.17) as the entries of the resulting matrix, under the CFL condition (3.22) (with h ≡ ∆x = ∆y).

3.4. A connection with monotone, convergent, entropy-stable numerical schemes. For the purpose of this work, we invoke the well-established theory of general monotone difference schemes (see, e.g., [64,65,48,57]) to illustrate the generality of our Lagrangian-Eulerian approach (3.16)-(3.17) under the CFL stability constraint (3.22). Using this matrix of coefficients, the Eulerian projection step (3.17) over the original grid may be recast in the form of a conservative monotone scheme whose numerical fluxes, along with (3.16) and taking h = ∆x = ∆y, are of the form F = F_R − F_L (defined analogously in the x-direction) and

G(U^n_{i−1,j−1}, ..., U^n_{i+1,j+1}) = G_R(U^n_{i−1,j}, U^n_{i,j}, U^n_{i+1,j+1}, U^n_{i,j+1}, U^n_{i+1,j}) − G_L(U^n_{i+1,j}, U^n_{i,j}, U^n_{i,j−1}, U^n_{i+1,j}, U^n_{i,j+1}).

We note that F and G satisfy condition (3.26); this implies consistency with (3.1), and thus the numerical method for 2D hyperbolic equations is monotone.
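In practice the CFL constraint is enforced by limiting the time step with the largest local propagation speed. A minimal helper of this kind is sketched below; the Courant number 0.45 is only a placeholder, since the precise admissible bound of (3.22) depends on the scheme and is not reproduced here.

```python
import numpy as np

def cfl_timestep(U, df, dg, h, cfl=0.45):
    """Time step dt = cfl * h / max(|f'(U)|, |g'(U)|) over the current data.

    df and dg are callables returning the derivatives of the fluxes f and g;
    cfl is a user-chosen Courant number (scheme-dependent)."""
    smax = max(np.max(np.abs(df(U))), np.max(np.abs(dg(U))), 1e-14)
    return cfl * h / smax
```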
In order for the above scheme to be consistent with (3.1), we must have F_1(u, . . . , u) = f(u) and F_2(u, . . . , u) = g(u); here the functions F_1 and F_2 are the corresponding numerical fluxes of the pertinent approximation. The difference approximation is monotone on the interval [a, b] if G is a nondecreasing function of each argument U^n_{i,j}, as long as all arguments lie in [a, b]. Write u(x, y, t) = (S(t)u_0)(x, y), where u_0 ↦ S(t)u_0 denotes the solution operator for each t ≥ 0 and t ↦ S(t)u_0 is continuous into L^1(R^2). To compute this solution numerically we set the piecewise constant function u_∆(x, y, t_n) = Σ_{j,k} U^n_{j,k} χ^n_{j,k}(x, y), where χ^n_{j,k} is the characteristic function of the respective cell. Indeed, it turns out that conservative monotone schemes converge to entropy solutions. Therefore, convergence of our Lagrangian-Eulerian approach (3.16)-(3.17) toward the entropy solution is proven. In [22], we were able to establish entropy convergence and error estimates for a conservative Lagrangian-Eulerian method on triangular grids.
3.5. Numerical experiments with the Lagrangian-Eulerian scheme with conservation properties. We present a comprehensive benchmark set of numerical tests that explores the accuracy of our new 2D Lagrangian-Eulerian scheme with conservation properties.
It is an easy exercise to show that the exact solution to problem (3.28)-(3.29) is u(x, y, t) = sin(π(x + y − 2t)).

Figure 12: Initial condition for problem (3.28)-(3.29). The left (resp. right) picture shows a 3D view (resp. an oblique projection onto the plane x = y).

The solution is advanced from t = 0 to t = 1, and we notice that at this time the solution has merely been translated by one full period (a phase shift of 2π) with respect to (3.29) in the oblique x = y direction. The approximation computed with our scheme for problem (3.28)-(3.29) is shown in Figure 13 (left frame) along with the exact solution (right frame).
In Figure 14 we observe the numerical convergence rates for problem (3.28)-(3.29). The accompanying table shows the errors between the numerical approximations (U) and the exact solution (u) in the l^1_h, l^2_h and l^∞_h norms for problem (3.28) with initial condition u(x, y, 0) = sin(π(x + y)), advanced from t = 0 to t = 1 with CFL number 0.67.
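The tabulated errors and the observed convergence rates can be reproduced with a few lines of post-processing; the discrete norms below follow the l^1_h, l^2_h and l^∞_h definitions used in the text (with h = ∆x = ∆y), and the error-ratio formula for the observed order is a standard choice rather than one stated in the source.

```python
import numpy as np

def discrete_errors(U, u_exact, h):
    """l1, l2 and l-infinity errors between a numerical solution U and the
    exact solution sampled on the same grid (both 2D arrays, spacing h)."""
    e = np.abs(U - u_exact)
    return h * h * e.sum(), np.sqrt(h * h * (e ** 2).sum()), e.max()

def observed_order(err_coarse, err_fine, refinement=2.0):
    """Observed convergence rate from errors on two consecutive meshes."""
    return np.log(err_coarse / err_fine) / np.log(refinement)
```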
A Buckley-Leverett problem with gravity.
We consider the reservoir flow model for two-phase (water-oil) immiscible incompressible flow with gravity [49], whose pressure equation reads

−∇ · [K λ_tot(S_w) ∇p] = q_tot,   (3.32)

where K is the absolute permeability tensor, λ_tot is the total mobility, p is the thermodynamic pressure, φ is the porosity, S_w ∈ [0, 1] is the water saturation, and u_tot = (u_tot, v_tot) is the total velocity (i.e., u_tot = u_w + u_o). The pressure equation (3.32) as written is elliptic in the absence of compressibility. Because the total mobility depends on the saturation, the pressure field changes as the displacement evolves; this is just a statement of Darcy's law combined with conservation of mass. Once the pressure is computed from (3.32), the total velocity is given by Darcy's law: u_tot = −K λ_tot(S_w) ∇p. Equation (3.33) is referred to as the saturation equation. Finally, in the absence of gravity and capillarity effects, the x- and y-direction flux functions f(S_w) and g(S_w) are both just the fractional flow function of water, i.e., the non-convex Buckley-Leverett flux

f(S_w) = S_w^2 / (S_w^2 + (µ_w/µ_o)(1 − S_w)^2),

where µ_w and µ_o are the water and oil phase viscosities, respectively. For simplicity, in the simulations discussed here we have chosen the following parameter values: K is the 2 × 2 identity matrix, λ_tot(S_w) = 1, φ = 1, q_tot = q_w = 0. Generally, the complete solution of the system (3.32)-(3.33) is obtained by treating the pressure equation (3.32) implicitly and the hyperbolic saturation equation explicitly; such an approach is called an IMplicit Pressure Explicit Saturation (IMPES) sequential solver.
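The fractional-flow function above is straightforward to code; the sketch below assumes the quadratic relative permeabilities implicit in that formula.

```python
import numpy as np

def buckley_leverett_flux(sw, mu_w=1.0, mu_o=1.0):
    """Non-convex Buckley-Leverett fractional flow of water with quadratic
    relative permeabilities: f = sw^2 / (sw^2 + (mu_w/mu_o) * (1 - sw)^2)."""
    sw = np.clip(sw, 0.0, 1.0)
    return sw**2 / (sw**2 + (mu_w / mu_o) * (1.0 - sw)**2)
```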
In this example, we consider the Buckley-Leverett problem with gravity proposed in [49] under the above assumptions, with u_tot = (1, 1). The equations are significantly more challenging when gravitational effects are included in the saturation equation, resulting in different (non-convex) flux functions in the x- and y-directions. In this case, f(·) is once again the Buckley-Leverett flux, while the flux in the y-direction is modified by a gravity term as in [49]. The problem is posed on (x, y, t) ∈ [−1.5, 1.5] × [−1.5, 1.5] × [0, 0.5], with initial condition

u(x, y, 0) = 1 if x^2 + y^2 < 0.5, and 0 otherwise.   (3.37)

Finally, we notice that we impose the solid wall (slip) boundary condition u_tot · n = 0 everywhere on the boundary ∂Ω, where n is the outward unit normal to ∂Ω, on the system (3.32)-(3.33). This means that there are no inflow boundaries and, hence, no boundary conditions are required for S_w. Here there are two situations in which we want to test our Lagrangian-Eulerian scheme: (1) a rudimentary test to address the issue of grid orientation effects (this anomalous phenomenon is observed when the computational grid is rotated and substantially different numerical solutions are obtained for the same problem), and (2) the accommodation of the no-flow boundary condition, exact or approximate. Finally, the numerical solutions shown in Figure 18 for the Buckley-Leverett problem described above, (3.32)-(3.37), are in very good agreement with the computed solutions in [49].
3.5.4. Conservation property verification test against a scalar analytical solution. Figure 19 shows 2D numerical solutions, displayed as time evolves, for a 2D symmetric Buckley-Leverett problem for which comparison with the exact analytical solution is possible. The top frames show the projection onto the x-u plane at times T = 10, 110, 220 and 340 hours, respectively. The correct front velocity of the 2D simulation is clearly captured by our proposed scheme when compared with the Buckley-Leverett analytical solution (superimposed red lines).
4. Coupling a conservative finite element method for the Darcy flow problem with a locally conservative Lagrangian-Eulerian method for hyperbolic transport. We combine a novel high-order conservative finite element method for the Darcy flow problem (Section 2) with the locally conservative Lagrangian-Eulerian method for hyperbolic transport (Section 3) in order to address conservation properties across multiple scales, both for the complex multiscale heterogeneity structures arising from rock geology in the elliptic pressure-velocity model and for the multiscale wave structures resulting from shock wave interactions in the hyperbolic transport model.
We solve the saturation and pressure equations (1.1)-(1.2) in the IMPES sequential fashion in a geological domain of 256 m × 64 m, considering two representative situations that anyone can easily reproduce later, namely a homogeneous medium and a heterogeneous barrier, both in the slab geometry as previously described.
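For orientation, one IMPES cycle of the kind used here can be sketched as follows; solve_pressure, darcy_velocity and advect_saturation are hypothetical placeholders for the elliptic solver of Section 2, the Darcy post-processing, and the Lagrangian-Eulerian transport step of Section 3.

```python
def impes_step(sat, dt, solve_pressure, darcy_velocity, advect_saturation):
    """One IMPES cycle: implicit pressure solve on the current saturation,
    total velocity from Darcy's law, then an explicit saturation update."""
    p = solve_pressure(sat)                   # -div(K * lambda_tot(S) grad p) = q_tot
    u_tot = darcy_velocity(p, sat)            # u_tot = -K * lambda_tot(S) grad p
    return advect_saturation(sat, u_tot, dt)  # explicit hyperbolic transport step
```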
• Test 1: we consider a homogeneous medium (K(x) ≡ 1). • Test 2: we consider a heterogeneous barrier with high-contrast permeability. The results computed with the coupled method are shown in Figure 20 (homogeneous medium) and Figure 21 (heterogeneous high-contrast permeability barrier). The evolution of the velocity field corresponding to the heterogeneous barrier flow situation is displayed in Figure 22. We also present the relative mass errors computed with our multiscale coupling procedure for Tests 1 and 2, depicted in Figure 23 (homogeneous medium) and Figure 24 (heterogeneous barrier), together with a numerical convergence study that corroborates our findings. Based on the reported results, we were able to demonstrate a promising methodology for conservation properties in the multiple-scale coupling and simulation of Darcy flow with hyperbolic transport in complex flows.

5. Concluding remarks and perspectives. In this paper we are concerned with modeling, simulation and numerical analysis of approximate solutions of multiscale nonlinear PDEs related to highly complex problems. A better comprehension of multiscale fluid flow in the subsurface is hard and challenging, remains of current interest, and demands innovative multiscale approaches, since genuine difficulties stem from the lack of regularity of the solutions. We revisited a novel volumetric locally conservative, residual-based Lagrange-multiplier saddle point reformulation of the standard high-order finite element method, clarifying and simplifying the presentation of its conservative properties. A new, robust and accurate dynamic forward-tracking Lagrangian-Eulerian scheme for hyperbolic problems, able to deal with multiscale wave structures resulting from shock wave interactions, was also introduced. We presented numerical results with realistic high-contrast two-dimensional multiscale coefficients for coupled multiscale oil-water flow simulations, along with convincing numerical tests of local and global mass conservation. We expect to combine the novel approach with the framework of Generalized Multiscale Finite Element Methods as recently introduced in [2]; see also [22], with particular interest in the case of complex flow systems as discussed in [8,10,12] for real-life applications, where issues of existence, stability and uniqueness are not well understood, in line with the works [5,8,12,13,16,50,67,80].

Figure 23: A 2D homogeneous slab coupling test for the Darcy problem with hyperbolic transport on a 256 m × 64 m domain: the top shows the decrease of the relative mass error under mesh refinement, and the bottom shows evidence of numerical convergence of the full Darcy/hyperbolic-transport two-phase flow system.

Figure 24: A 2D barrier slab coupling test for the Darcy problem with hyperbolic transport on a 256 m × 64 m domain: the top shows the decrease of the relative mass error under mesh refinement, and the bottom shows evidence of numerical convergence of the full Darcy/hyperbolic-transport two-phase flow system.
Prediction of clinical tremor severity using Rank Consistent Ordinal Regression
Tremor is a key diagnostic feature of Parkinson's Disease (PD), Essential Tremor (ET), and other central nervous system (CNS) disorders. Clinicians or trained raters assess tremor severity with TETRAS scores by observing patients. Lacking quantitative measures, inter- or intra-observer variability is almost inevitable, as the distinction between adjacent tremor scores is subtle. Moreover, clinician assessments also require patient visits, which limits the frequency of disease progress evaluation. Therefore it is beneficial to develop an automated assessment that can be performed remotely and repeatably, at patients' convenience, for continuous monitoring. In this work, we proposed to train a deep neural network (DNN) with rank-consistent ordinal regression using 276 clinical videos from 36 essential tremor patients. The videos are coupled with clinician-assessed TETRAS scores, which are used as ground truth labels to train the DNN. To tackle the challenge of limited training data, optical flows are used to eliminate irrelevant background and static objects from the RGB frames. In addition to optical flows, transfer learning is also applied to leverage pre-trained network weights from a related task of tremor frequency estimation. The approach was evaluated by splitting the clinical videos into training (67%) and testing (33%) sets. The mean absolute error on the TETRAS score of the testing results is 0.45, indicating that most of the errors come from the mismatch of adjacent labels, which is expected and acceptable. The model predictions also agree well with clinical ratings. This model is further applied to smart phone videos collected from a PD patient who has an implanted device that can turn "On" or "Off" tremor. The model outputs were consistent with the patient tremor states. The results demonstrate that our trained model can be used as a means to assess and track tremor severity.
Figure 1: Clinical rating criteria for tremor scores. In clinical practice, a clinician or video rater fills in the score cells based on the given criteria.
For the diverse populations affected by tremor, who often suffer from a debilitating illness that limits mobility, there is clear value in developing automated methods of assessment that can be conducted remotely in patients' home environments. Video-based tremor quantification methods hold significant promise as a way to quantify tremor activity, since tremor is easily detected and primarily assessed visually.
Clinical tremor is assessed with the Tremor Research Group's Essential Tremor Rating Assessment Scale (TETRAS), which contains a 9-item performance subscale that rates action tremor from 0 to 4 in half-point intervals [1]. Hand tremor ratings are defined by specific amplitude ranges in centimeters; see Figure 1. However, because the TETRAS score is given from clinicians' or raters' observation, without a way to quantitatively measure movement, its reliability varies.
To automate tremor scoring from videos, we retrieved videos of essential tremor patients as they were assessed for tremor severity. The video sessions are coupled with TETRAS score sheets for each patient. Then a deep neural network (DNN) is trained to predict TETRAS scores from videos.
TETRAS scores are ordinal labels, which are discrete values with implicit rank information. Ordinal data are common in various applications, such as longitudinal progression prediction of diseases, severity assessment of pathologies, and age estimation. While classification models are studied most extensively in machine learning and deep learning, for ordinal labels like TETRAS scores a typical classification model encounters problems: samples with adjacent labels may overlap considerably, and the label ordering is not retained in the prediction outputs. Therefore, ordinal regression (classification), a learning method that incorporates rank information intrinsically, becomes a good option for such applications. Previous research shows that ordinal regression is more accurate and robust than unordered categorical classification on large-scale public datasets [2]. Meanwhile, ordinal regression should also perform better than continuous regression, as it predicts only a limited number of values, a much smaller output space than the infinite range of real values in continuous regression, making the optimization more tractable. More recently, Niu et al. applied ordinal regression with deep neural networks [3], and Cao et al. further improved the approach by enforcing rank consistency [4].
In this work, we propose to use rank-consistent ordinal regression [4] to train a deep neural network (DNN) with TETRAS scores of clinical videos from ET patients (see Figure 2). In this approach, we first apply an optical flow network (FlowNet [5]) to extract object motion from consecutive frames in a video, then use transfer learning to address the limited training data issue. We finally perform ordinal regression with rank consistency [4] to train a model that predicts ranked tremor scores.
Method
As shown in Figure 2, our approach consists of three main components: optical flow calculation, transfer learning, and rank-consistent ordinal regression.
Optical Flows for Tremor Movements
Optical flows [6] are widely used in video processing to represent pixel-wise change between consecutive frames. Let p_t denote the position of a pixel at time point t; then its displacement from time point t_0 to t_1 defines the optical flow (u, v) = p_1 − p_0. Figure 3 gives an example of optical flow from hand tremor. Note that the visualization of the optical flows in Figure 3 differs from the commonly used color scheme: we set the normalized u and v components as the green and blue channel values in RGB, with the red channel set to 0.
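The color mapping described above is easy to reproduce; in the sketch below a simple min-max normalization of each component is assumed, since the exact normalization is not specified in the text.

```python
import numpy as np

def flow_to_rgb(u, v):
    """Visualize an optical flow field by mapping the normalized u and v
    components to the green and blue channels, leaving red at 0.
    u and v are 2D float arrays of the same shape."""
    def normalize(a):
        rng = a.max() - a.min()
        return (a - a.min()) / rng if rng > 0 else np.zeros_like(a)
    rgb = np.zeros((*u.shape, 3), dtype=np.float32)
    rgb[..., 1] = normalize(u)   # green channel <- normalized u
    rgb[..., 2] = normalize(v)   # blue channel  <- normalized v
    return rgb
```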
While the combination of RGB frames and optical flows is popular in video analysis [7], we only use optical flows in our proposed approach: the objects (hands) themselves are not important, the characteristics of their movements are. Compared with the original RGB frames, optical flows from tremor videos contain much less information, showing only movement patterns. That improves model performance, especially when training data are limited. More detail on the advantage of using optical flow in hand tremor analysis can be found in [8]. In our work, we use fully convolutional networks, FlowNet (FlowNet 2.0) [5,9], to infer optical flow maps from RGB videos, for their reported good performance and the data-driven potential attributed to deep learning. To avoid overfitting and save computational cost, we keep the pre-trained FlowNet fixed in this work.
Transfer Learning
As more and more research demonstrates the power of transfer learning, we also use transfer learning in this work to enhance the performance of our trained model, in particular given the limited training data. As in many medical applications, although the expert-labeled clinical videos we collected come from a relatively large clinical trial, they are still not sufficient to train a DNN, which has a large number of parameters (the weights of the neural network) and requires a large amount of labeled data, typically thousands to millions of samples, to produce a high-performance model.
To overcome this challenge, transfer learning is exploited in model training through a previously trained tremor frequency model [8]. The tremor frequency model is a spatio-temporal adversarial autoencoder (ST-AAE) that integrates spatial and temporal information simultaneously into the original AAE, taking optical flows as inputs. A latent space is constructed to represent the critical features for tremor analysis by encoding various movement patterns. From the latent space, random samples can be drawn from a prior distribution to generate synthetic images with desired labels. The ST-AAE was trained and evaluated with 3068 two-second-long video segments from 28 subjects by cross-validation, and the weighted average of the AUCs of the ROCs is 0.97.
Since the ST-AAE model is also trained using the same optical flows as inputs and is targeted at a similar tremor analysis task, we take all the layers from the bottom to the penultimate layer in the encoder of the ST-AAE model as the backbone of the DNN in this work. The last layer in the encoder of the ST-AAE is excluded, as it is trained as task-specific weights for tremor frequency prediction. We do not use the decoder of the ST-AAE because it is trained to generate frequency-specific tremor movement, which is irrelevant to this work.
Rank-Consistent Ordinal Regression
Although ordinal labeling can simply be handled by classification models followed by post-processing to enforce rank order, a general machine learning framework [10] with extended binary classification [2] is more commonly used for ordinal data. This general framework was later used together with deep neural networks [3,4].
Ordinal label representation. It is worth mentioning that ordinal regression is sometimes also referred to as ordinal classification, and in terms of label representation it is more closely related to classification than to regression. Like classification model training, ordinal regression also uses a vector representation for labels. While classification models use one-hot vector labels, ordinal regression uses "multi-hot" vector labels. That is, if we let r_i be the rank of the i-th ordinal data label y_i, with the ordering r_1 ≤ r_2 ≤ ... ≤ r_n, then for a ranked label r_i, instead of only marking the corresponding vector component as 1, ordinal regression marks 1 for all the vector components less than or equal to r_i in order, as shown in Figure 4.
Using the extended "multi-hot" vector representation and the typical classification cross-entropy as the loss function, ordinal regression penalizes erroneous labels farther away from the correct label more heavily, and is more tolerant of errors adjacent to the correct label. Let L(r_y, r_c) be the cross-entropy loss for a sample X, where r_c is the ground-truth rank and r_y is the predicted rank; then L(r_i, r_c) < L(r_j, r_c) if |r_i − r_c| < |r_j − r_c|. This is the desirable property we prefer, so that ordinal regression can be trained to embed the rank information from the labels.
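A minimal sketch of this label encoding is given below; the 1-based rank indexing and the nine-level TETRAS example are assumptions for illustration, and CORAL itself works with the closely related set of K−1 binary tasks.

```python
import numpy as np

def rank_to_multihot(rank, num_ranks):
    """'Multi-hot' vector for ordinal rank `rank` (1-based) among num_ranks
    ordered ranks: all components up to and including `rank` are set to 1,
    as described above."""
    return (np.arange(1, num_ranks + 1) <= rank).astype(np.float32)

# Example with 9 TETRAS levels (0.0 to 4.0 in half-point steps):
# a score of 2.0 is the 5th rank.
print(rank_to_multihot(5, 9))  # [1. 1. 1. 1. 1. 0. 0. 0. 0.]
```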
Rank-consistent ordinal regression
To address the training complexity of a convex cost matrix and to ensure rank-monotonic thresholds in the extended binary classification, Cao et al. proposed the CORAL framework to produce rank-consistent predictions for each binary task [4]. We follow the CORAL framework in this work, using transfer learning from our previously trained ST-AAE model.
From the ST-AAE encoder, we first exclude the last task-specific layer, then add a 3D convolutional layer for combined spatio-temporal information, followed by a fully connected layer (the FC layer in Figure ??) and a linear bias layer, as shown in Table 1.
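A minimal PyTorch sketch of such a rank-consistent output head is shown below: a single shared linear weight with one bias per binary rank task, appended after the pre-trained backbone. The feature size and rank count are illustrative placeholders, not the values used in the paper, and the convolutional backbone itself is omitted.

```python
import torch
import torch.nn as nn

class CoralHead(nn.Module):
    """CORAL-style output head: one shared linear weight plus a separate bias
    per binary rank task, appended after a (pre-trained) feature backbone."""
    def __init__(self, in_features, num_ranks):
        super().__init__()
        self.fc = nn.Linear(in_features, 1, bias=False)        # shared weight
        self.biases = nn.Parameter(torch.zeros(num_ranks - 1))  # one bias per task

    def forward(self, x):
        return self.fc(x) + self.biases   # logits of shape (batch, num_ranks - 1)

def predict_rank(logits):
    """Predicted rank = number of binary tasks with probability > 0.5."""
    return (torch.sigmoid(logits) > 0.5).sum(dim=1)
```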
Clinical Tremor Data Acquisition
We retrieved patient videos as they were assessed for tremor severity. These patients were recruited as part of a Phase 2 clinical trial for the drug CX-8998, registered under IRB #201702183 [11]. Patient criteria included essential tremor patients between the ages of 18 and 75. The video sessions are coupled with TETRAS score sheets for each patient. The clinical ratings are provided both by clinicians (in-person) and by raters reviewing the patient videos (video). Both ratings were used for evaluation, but only clinician ratings were used for training, as in general clinician ratings are more accurate and reliable than rater scores.
There were in total 276 videos from 36 patients, with 1-4 video recordings for each patient, and each video recording has two separate tremor video segments for the left and right hand. Most of the patients had 4 videos, including screening, baseline, and 2 follow-up videos. Each video is associated with two TETRAS scores. Figure 5 shows the histogram of TETRAS scores from clinicians. We can see that most of the videos were scored as mild to moderate, with very limited samples of no tremor and severe cases. Given this imbalanced data distribution, our ordinal regression uses the task importance weighting from [4].
To evaluate our proposed approach, the clinical data were split by patient for training and testing. 186 videos from 24 patients were used for training, while 90 videos from 12 patients were kept for evaluation. Figure 6 shows the convergence curve of the training and testing mean squared error (MSE).
The model performance was evaluated by mean absolute error and classification accuracy. While the classification accuracy was 0.36, the mean absolute error was 0.45. This means that most of the errors were from the mismatch of adjacent labels, which was expected and acceptable given the subtle differences among these labels and the observation variabilities.
To further evaluate the efficacy of our model, we also examined the correlation of the model-predicted tremor probability with the clinical scores, as shown in Figure 7. This correlation shows a monotonic increase of the predicted tremor probabilities as the TETRAS scores increase, indicating that our model can predict tremor severity for assessment as observation-based TETRAS scores do. The two non-monotonic points (marked by green ellipses) in Figure 7 come from score groups with only one video, which is rather ad hoc and negligible. The model trained with essential tremor patients was also applied to videos recorded using a smart phone from a Parkinson's Disease patient. The patient has a brain implant that lets her turn her tremor symptoms on and off. The videos were captured when the patient's symptoms were "On" or "Off". The predicted overall TETRAS score for the patient video with tremor "Off" was zero, indicating that no tremor was predicted by the model; the predicted overall TETRAS score for the patient video with tremor "On" was 2.0, indicating that moderate tremor was predicted by the model (see Figure ??). The predicted overall TETRAS scores for the Parkinson's Disease patient were consistent with the tremor states. This result demonstrates the efficacy of our approach for remote assessment of Parkinson's and other tremor-related conditions.
The Supramolecular Chemistry of Cycloparaphenylenes and Their Analogs
Cycloparaphenylenes (CPPs) and their analogs have recently attracted much attention due to their aesthetical structures and optoelectronic properties with radial π-conjugation systems. The past 10 years have witnessed a remarkable advancement in CPPs research, from synthetic methodology to optoelectronic investigations. In this present minireview, we highlight the supramolecular chemistry of CPPs and their analogs, mainly focusing on the size-selective encapsulation of fullerenes, endohedral metallofullerenes, and small molecules by these hoop-shaped macrocycles. We will also discuss the assembly of molecular bearings using some belt-persistent tubular cycloarylene molecules and fullerenes, photoinduced electron transfer properties in supramolecular systems containing carbon nanohoop hosts and fullerene guests, as well as the shape recognition properties for structure self-sorting by using dumbbell-shaped dimer of [60]fullerene ligand. Besides, the supramolecular complexes with guest molecules other than fullerenes, such as CPPs themselves, iodine, pyridinium cations, and bowl-shaped corannulene, are also discussed.
INTRODUCTION
Supramolecular chemistry is the subject of the association of two or more chemical species held together by intermolecular forces, such as electrostatic interactions, hydrogen bonding, van der Waals forces, etc., which can lead to organized entities of higher complexity (Lehn, 1985, 1988). It is one of today's fastest growing disciplines, crossing a range of subjects from biological chemistry to materials science, and shows great potential in the fields of catalysis, drug delivery, biotherapy, electrochemical sensors, and self-healing materials (Zhang and Wang, 2011; Yan et al., 2012; Dong et al., 2015; Yang et al., 2015; Zhang et al., 2017a; Zhou et al., 2017). As one of the most important aspects of supramolecular chemistry, host-guest molecular recognition requires that the two species complement each other both in geometry (size and shape) and in binding sites (Lehn, 1985, 1988). Macrocyclic structures, in principle, meet these requirements as they usually contain cavities, clefts, and pockets with appropriate size and shape that provide the framework for substrate species through multiple non-covalent interactions. The representative macrocyclic molecules in the development of supramolecular chemistry, such as crown ethers, cyclodextrins, calixarenes, and cucurbiturils, have been the classical structures in this field (Yang et al., 2015; Zhou et al., 2017).
Recently, the introduction of pillar[n]arenes (Figure 1A) as new types of macrocyclic hosts by Ogoshi et al. (2008) rapidly attracted significant attention because of their prominent host-guest properties.
Meanwhile, another type of carbon-rich macrocyclic molecule with radially oriented π systems pointing inwards to the cavity has emerged as a new class of strained, non-planar aromatic structures, named cycloparaphenylenes (CPPs) or carbon nanohoops because of their structural relationship with carbon nanotubes (CNTs) (Jasti et al., 2008; Jasti and Bertozzi, 2010). Despite their simple structures, however, the synthesis of CPPs was only achieved in 2008 from curved molecular precursors after intensive efforts (Jasti et al., 2008). Following this work, several other novel strategies for CPP synthesis were developed and a number of CPP-related carbon nanorings with various sizes and atomic compositions were prepared (Darzi et al., 2015; Segawa et al., 2016). The synthesis of a carbon nanobelt, [12]CNB, was subsequently achieved (Povie et al., 2017). Furthermore, the extension of this synthetic strategy to the preparation of [16]CNB and [24]CNB analogs was also reported by the same group (Povie et al., 2018). Using a new ligand system, the yield of the final belt-forming, nickel-mediated reaction for [12]CNB was improved from 1 to 7%, and [16]CNB and [24]CNB were obtained in 6 and 2% yield, respectively. These studies are important steps toward the bottom-up synthesis of other carbon nanobelt structures and CNTs. Another interesting and valuable work that should be mentioned is the thermally induced cycloreversion strategy for the synthesis of carbon nanohoops reported by Huang et al. (2016). They converted an anthracene photodimer synthon into an anthracene-incorporated aromatic macrocycle through a ring expansion reaction based on the cycloreversion of its dianthracene core. This work sheds light on the utility of the anthracene photodimerization-cycloreversion method for "bottom-up" carbon nanohoop synthesis. The past 10 years have witnessed a remarkable advancement in CPP research, from synthetic methodology to optoelectronic investigations, owing to their size-dependent behavior and promising applications in materials (Segawa et al., 2012; Wu et al., 2018; Huang et al., 2019; Toyota and Tsurumaki, 2019; Xu and Delius, 2019).
In a recent work, Delius et al. overviewed the host-guest chemistry of carbon nanohoops, the preparation of mechanically interlocked architectures, and crystal engineering (Xu and Delius, 2019). In this present minireview, we only highlight the supramolecular chemistry of CPPs and their analogs, mainly focusing on the size-selective encapsulation of fullerenes, endohedral metallofullerenes, and small molecules by these hoop-shaped macrocycles. We will also discuss the assembly of molecular bearings using some belt-persistent tubular cycloarylene molecules and fullerenes, photoinduced electron transfer properties in supramolecular systems containing carbon nanohoop hosts and fullerene guests, as well as the shape recognition properties for structure self-sorting by using dumbbell-shaped dimer of [60]fullerene ligand. Besides, the supramolecular complexes with guest molecules other than fullerenes, such as CPPs themselves, iodine, pyridinium cations, and bowl-shaped corannulene, are also discussed.
SUPRAMOLECULAR COMPLEXES CONSISTING OF CPP S AND FULLERENES
The first series of macrocyclic hosts of this kind were molecules with sp 2 /sp-hybridized carbon atoms, cyclic paraphenyleneacetylenes (CPPAs) (Figure 1B), reported by Kawase et al. (1996). The complexation between CPPA congeners and fullerenes was extensively studied (Kawase et al., 2003a,b, 2007; Miki et al., 2013). Although CPPA derivatives tend to form tight complexes with C 60 , their unstable nature hindered further experimental studies. In contrast, the solely sp 2 -hybridized CPP derivatives without acetylene linkers are sufficiently stable and can similarly encapsulate fullerene molecules.
The initial example of a host-guest complex of this type was reported by Iwamoto et al. (2011). The CPP receptor with 10 phenylene units ([10]CPP) has an ideal diameter (1.38 nm) to accommodate C 60 (0.71 nm) (Figure 1C), showing a binding constant K a of 2.79 × 10 6 M −1 in toluene determined by fluorescence quenching titration, which is two orders of magnitude higher than those obtained for [6]CPPA⊃C 60 (Kawase et al., 2003a). Variable-temperature NMR (VT-NMR) spectroscopy experiments showed that rapid exchange between free [10]CPP and [10]CPP⊃C 60 took place at room temperature, and the energy barrier for the exchange was determined to be 59 kJ mol −1 . The crystal structure of [10]CPP⊃C 60 obtained by Jasti's group revealed the presence of convex-concave π-π interactions (Xia et al., 2012). It is noteworthy that C 60 can be selectively encapsulated by [10]CPP from a mixture of [8]-[12]CPPs, indicating that the cavity sizes of the other CPPs are not appropriate for constructing a strong complex with C 60 . Interestingly, it was found that C 70 , which has an ellipsoidal shape with a long axis of 0.796 nm and a short axis of 0.712 nm, could also be encapsulated by [10]CPP in a "lying" orientation with its long axis perpendicular to the [10]CPP plane (Figure 1D), but with a reduced association constant K a (8.4 × 10 4 M −1 in toluene) compared with [10]CPP⊃C 60 (Iwamoto et al., 2013). In contrast, C 70 adopted the "standing" orientation when accommodated in the cavity of [11]CPP, with its long axis within the [11]CPP plane (Figure 1E). Besides, [11]CPP deformed into an ellipsoidal shape to maximize the van der Waals interactions with the long axis of C 70 . All these results indicate the size- and orientation-selectivity of the CPP⊃fullerene systems. Furthermore, a deeper exploration analyzing the geometric structures through theoretical calculations revealed that C 70 selectively adopts lying, standing, and half-lying orientations when combined with [10]CPP, [11]CPP, and [12]CPP, respectively (Yuan et al., 2015).
In 2014, Shinohara et al. demonstrated the high binding abilities of [11]CPP toward C 82 -based endohedral metallofullerenes, including Gd@C 2v -C 82 , Tm@C 2v -C 82 , and Lu 2 @C 2v -C 82 , which provided a facile non-chromatographic strategy for Gd@C 82 extraction and enrichment from crude fullerene mixtures (Nakanishi et al., 2014). Later, another example of a C 82 -based endohedral metallofullerene peapod, [11]CPP⊃La@C 82 , was reported (Iwamoto et al., 2014). The solid-state structure of the complex was determined by X-ray crystallographic analysis, which showed that the La atom is located near the periphery of [11]CPP rather than on the tube axis, with the dipole moment of La@C 82 nearly perpendicular to the CPP axis. This evidence demonstrated the different orientations of La@C 82 in CPP and CNT peapods, which suggests that the orientation of La@C 82 in CNTs is mainly determined by interactions among adjacent molecules. More importantly, owing to the strong electron-accepting properties of La@C 82 , partial charge transfer (CT) from [11]CPP to La@C 82 in the ground state was first observed by electrochemical experiments combined with UV/Vis-near-infrared (NIR) titration studies and density functional theory (DFT) calculations, but no fully ionized complex was formed.
The CPP-based fully ionized complex, Li + @C 60 ⊂[10]CPP, was synthesized and characterized by Ueno et al. (2015) (Figure 1F). The ionic crystal structure was confirmed by X-ray crystallographic analysis. Unlike the empty C 60 , the cationic Li + @C 60 core drastically increases the electron-accepting ability, which can induce strong charge transfer from electron donors. Cyclic voltammetry experiments revealed that Li + @C 60 is harder to reduce when accommodated by [10]CPP than Li + @C 60 itself, which can be ascribed to the higher electron density around the Li + @C 60 cage through the CPP to Li + @C 60 charge transfer interaction. The strong charge transfer interaction also causes the positive charge of the lithium cation to be delocalized onto the outer CPP ring. The broadened absorption bands at around 350 nm and in the NIR region were also related to this interaction. Besides, the photoluminescence (PL) lifetime of Li + @C 60 ⊂[10]CPP (2.5 ns) is shorter than that of [10]CPP (4.3 ns) and C 60 ⊂[10]CPP (4.3 ns), suggesting that the charge transfer (CT) interaction may occur.
Recently, Delius et al. reported the synthesis of a porphyrin-[10]CPP conjugate, in which [10]CPP moiety served as a supramolecular junction for charge transfer between a zinc porphyrin electron donor and fullerene electron acceptor (Xu et al., 2018b). Efficient photoinduced electron transfer was observed with a lifetime of charge separation state up to 0.5 µs in the 2:1 complex between [10]CPP and the fullerene dimer ( Figure 1G). The intramolecular energy transfer between [10]CPP and porphyrin was also observed. Later, the same group achieved the synthesis of two [2]rotaxanes consisting of one [10]CPP moiety binding to a central fullerene with bis-adduct binding site and another two fullerene hexakisadduct stoppers using a concave-convex π-π template strategy ( Figure 1H) (Xu et al., 2018a).
[10]CPP served as an effective supramolecular directing group with the central fullerene as an efficient convex template, steering the reaction exclusively toward two trans regioisomers in the final step. The mechanically interlocked structures of the [2]rotaxanes were analyzed by variable-temperature NMR (VT-NMR) and mass spectrometry. Transient absorption spectra revealed the interesting consequences of the mechanical bond on charge transfer processes. A later work conducted by Wegner et al. used a dumbbell-shaped dimeric azafullerene [(C 59 N) 2 ] as the ligand to combine with two [10]CPP rings, giving the [10]CPP⊃(C 59 N) 2 ⊂[10]CPP complex (Figure 1I) (Rio et al., 2018). Two-stage binding constants were determined to be K a1 = 8.4 × 10 6 M −1 and K a2 = 3.0 × 10 6 M −1 , respectively, with weak interactions between the two CPP rings. Photoinduced partial charge transfer from [10]CPP to (C 59 N) 2 was observed by differential pulsed voltammetry experiments.
SUPRAMOLECULAR COMPLEXES CONSISTING OF π-EXTENDED CARBON NANOHOOPS AND FULLERENES
As the π-π interaction operates via surface-to-surface contacts in supramolecular chemistry, it becomes important for large aromatic moieties with increasing π-surface areas. Based on the rapid development of synthesis strategies, carbon nanohoops with embedded polycyclic aromatic hydrocarbon (PAH) structures, such as hexa-peri-hexabenzocoronene (HBC) (Quernheim et al., 2015; Lu et al., 2016; Huang et al., 2019), were subsequently prepared. These π-extended macrocycles usually show larger binding constants with guest molecules due to their larger contact area compared with simple CPP hosts. The [4]cyclo-2,11-para-hexa-peri-hexabenzocoronene ([4]CHBC) synthesized in our laboratory was found to selectively incorporate C 70 with a binding constant K a of 1.07 × 10 6 M −1 in toluene (Figure 1J), but no evidence of complexation with a C 60 guest was observed, which could be due to the "standing" or "lying" orientations of C 70 in the cavity of the carbon nanoring (Lu et al., 2017). Similarly, another HBC-containing three-dimensional capsule-like carbon nanocage, tripodal-[2]HBC, also exhibited a preference for C 70 (K a = 1.03 × 10 5 M −1 in toluene) rather than C 60 , which was demonstrated by MS, NMR, and photophysical experiments (Figure 1K). More recently, our group achieved the synthesis of two novel π-extended crown-like molecules (TCR and HCR) with embedded curved nanographene units, HBC or TBP (tribenzo[fj,ij,rst]pentaphene) (Huang et al., 2019). These two species were found to show high binding affinity toward the guest molecule C 60 , with association constants K a of 3.34 × 10 6 M −1 for TCR⊃C 60 and 2.33 × 10 7 M −1 for HCR⊃C 60 , respectively (Figure 1L). The gradual increase in binding constants from [10]CPP⊃C 60 (K a = 2.79 × 10 6 M −1 ) (Iwamoto et al., 2011) to TCR⊃C 60 , then HCR⊃C 60 , should be ascribed to the increasing π-surfaces that provide stronger π-π interactions between the hosts and C 60 . Besides, photocurrents were generated when using these molecular crowns or their supramolecular complexes on FTO electrodes under visible light irradiation. Time-resolved spectroscopic measurements suggested fast photoinduced electron transfer in the supramolecular heterojunctions.
The recently reported shape-persistent tubular carbon nanorings also demonstrated binding ability with fullerenes. For example, (P)-(12,8)-[4]CC binds C 60 (Figure 1M) in o-DCB with K a = 4.0 × 10 9 M −1 , while the isomers (P)-(11,9)-, (10,10) AABB -, and (10,10) ABAB -[4]CC also showed binding constants above 10 9 M −1 . The lowest K a was recorded for (+)-(16,0)-[4]CC⊃C 60 (2.0 × 10 4 M −1 in o-DCB), but was still higher than that for [10]CPP⊃C 60 (6.0 × 10 3 M −1 in o-DCB) (Iwamoto et al., 2011). These results clearly show that the belt-persistency of tubular structures also plays a crucial role in binding with fullerenes, besides the cavity size. Therefore, a molecular rolling bearing with C 60 in the [4]CC bearing was constructed, as the bearing can hold the fullerene molecule tightly to prevent its run-out motion. 1 H NMR analysis of (P)-(12,8)-[4]CC⊃C 60 showed that the C 60 molecule did not exchange and underwent rapid relative rolling motion within the bearing on the NMR timescale. The crystal structures of this molecular bearing were further analyzed by X-ray diffraction, demonstrating the presence of a smoothly curved surface that allows the dynamic motion of C 60 even in the solid state. Theoretical studies by density functional theory (DFT) indicate that the calculated association energies are quite method-dependent, and that the energy barriers for the rolling motions within the bearing are as low as 2-3 kcal mol −1 , with two distinct rolling motions (precession and spin).
Besides the C 60 guest, another twelve fullerenes, including C 70 , nine exohedrally functionalized fullerenes, and two endohedral fullerenes, were selected and assessed as rolling journals in the belt-persistent [4]CC bearing.
[4]CC tolerated the modified fullerenes, but with reduced binding constants. C 70 was found to be a superior guest, not only for its high binding constant (K a = 5.0 × 10 9 M −1 in DCB) but also for its tolerance of the introduction of a bulky shaft without an obvious decrease in binding constant. A lengthened version, (P)-(12,8)-[4]cyclo-2,8-anthanthrenylene ((P)-(12,8)-[4]CA), can also bind C 60 and C 70 (Figure 1N), with enhanced association enthalpy owing to the increased C-C contact area compared with the shorter congener (P)-(12,8)-[4]CC (Matsuno et al., 2013). The electronic properties of the molecular bearings were then systematically studied. The bearing systems can generate charge-separated species under light irradiation. The (P)-(12,8)-[4]CC⊃C 60 system exhibits a rapid back electron transfer to give the triplet C 60 journal after the formation of triplet charge-separated species via photoinduced electron transfer (Hitosugi et al., 2014). The lengthened version, [4]CA⊃C 60 , can generate a triplet excited state at the outer bearing, whereas the endohedral fullerene Li + @C 60 enables the back electron transfer processes without triplet excited species.
Unlike the relatively rigid conformation of the arylene panels in [4]CC, [7]cyclo-amphi-naphthylene ([7]CaNAP) is rather flexible, with its panels rotating rapidly at ambient temperature (Sun et al., 2016). However, this rotation did not significantly affect its binding ability for C 70 , with K a in the range of 10 7 -10 9 M −1 (depending on the solvent) (Figure 1O) (Sun et al., 2019). More importantly, the structure of [7]CaNAP deforms during the rotation to track the orientation changes of the ellipsoidal C 70 .
By using the dumbbell-shaped C 60 dimer (C 120 ) as a ligand with two binding sites, two-wheeled composites can be assembled with the shape-persistent macrocycles as the receptors (Matsuno et al., 2016, 2017). The thermodynamics of the 2:1 complexes revealed two-stage association constants; for example, self-sorting yielded 50% of (P)-D 4 ⊃C 120 ⊂(P)-D 4 and 50% of D 2d ⊃C 120 ⊂D 2d , while no (P)-D 4 ⊃C 120 ⊂D 2d was detected (Figure 1P). This shape recognition can be explained by the repulsive van der Waals interactions between aliphatic side chains caused by the H-H contacts at the interfaces of the receptors, as revealed by the crystal structures.
SUPRAMOLECULAR COMPLEXES WITH NON-FULLERENE COMPOUNDS
When two aromatic moieties stack in a face-to-face fashion, the π-π interaction can hold the two species together, as in the case of CPP analogs with fullerenes. Besides, other non-covalent interactions, such as CH-π and metal-π interactions, also play important roles in various supramolecular systems. The CH-π interaction, which is a kind of atom-to-surface hydrogen bond and relatively weak, can also assemble host-guest complexes. On the other hand, metal-π coordination usually strongly stabilizes the associated architecture.
By analyzing the ions in the gas phase of the complex mixture from CPP synthesis through matrix-assisted laser desorption ionization (MALDI) together with ion-mobility mass spectrometry (IMMS), Müllen's group provided evidence for the existence of possible catenanes composed of CPPs, such as [12]CPP+[24]CPP and 2×[18]CPP (Figure 2F), or even a trefoil knot (Zhang et al., 2017b). Most recently, Itami et al. reported the synthesis of all-benzene catenanes and a trefoil knot through a silicon-based template method, which adjoined two neighboring CPP fragments in a crossing pattern followed by removal of the silicon tether after macrocyclization (Segawa et al., 2019). Interestingly, the trefoil knot shows only a single proton resonance in the 1 H-NMR spectrum even at −95 °C, indicating its ultrafast motion on the NMR time scale. In a related report (Fan et al., 2018), the solid-state structure shows a Möbius topology stabilized by non-covalent interactions. 2,6-Pyridyl-embedded nanohoops were recently synthesized for the preparation of nanohoop-based rotaxanes through active metal template reactions (Figure 2G; Van Raden et al., 2019). The triazole-embedded [2]rotaxanes showed dramatic changes in fluorescence emission (turn-off) when a Pd(II) salt was added, suggesting possible applications in ion sensing. Inspired by this study, another non-emissive [2]rotaxane was devised and synthesized, which has a fluorescence-quenching 3,5-dinitrobenzyl stopper and a fluoride-cleavable triisopropylsilyl (TIPS) stopper. Upon the addition of tetra-n-butylammonium fluoride (TBAF), a 123-fold emission recovery was observed as the nanohoop fluorophore was released, indicating that nanohoop rotaxanes could effectively serve as turn-on fluorescence sensors. Itami et al. described the assembly of iodine within [n]CPPs (n = 9, 10, and 12) (Ozaki et al., 2017). Upon electric stimuli, [10]CPP-I turned out to emit white light, caused by the formation of polyiodide chains inside the [10]CPP cavity through charge transfer between the [10]CPP tubes and the encapsulated iodine chains.
A novel type of host-guest complex, assembled solely by CH-π hydrogen bonds rather than π-π interactions, was devised by the Isobe group (Matsuno et al., 2018a). A bowl-shaped corannulene can be encapsulated by a [4]CC host through multiple weak CH-π contacts to form a 1:1 complex in solution, driven by a large association enthalpy. A 1:2 host-guest combination was unveiled in the crystalline solid state (Figure 2H). Despite the multiple weak hydrogen bonds, the guest is still allowed dynamic rotational motions in the host. Solid-state analysis revealed a single-axis rotation of the bowl in the tube.
SUMMARY AND OUTLOOK
In this featured article, we have overviewed recent progress on the supramolecular properties of CPPs and their analogs. Various types of new carbon nanohoops have been prepared by transition-metal-catalyzed coupling reactions. These macrocycles usually possess well-defined cavities with rigid conformations and fixed diameters, which makes them good supramolecular hosts for incorporating a wide range of compounds, such as spherical fullerenes, through π-π, metal-π, and/or CH-π interactions. These non-covalent interactions enable efficient molecular recognition and host-guest energy transfer. Although the synthesis of new carbon nanohoops and related supramolecular complexes has grown very fast during the past decade, the application of these carbon-rich architectures in fields such as organic electronic devices, molecular sensing, and molecular machines is still far from satisfactory. For further advancement, research efforts should be devoted to exploring robust synthetic strategies, which are essential for the diversification of the carbon nanohoop family. Interdisciplinary studies with cooperative materials science and analytical, biological, physical, and theoretical chemistry will dramatically expand the understanding and application of these macrocycles and their supramolecular complexes. It is reasonable to expect that these carbon-rich structures will attract further research interest and lead to the preparation of unique and unprecedented molecular tools and materials in the future.
Calibration of a star formation and feedback model for cosmological simulations with Enzo
We present results from seventy-one zoom simulations of a Milky Way-sized (MW) halo, exploring the parameter space for a widely-used star formation and feedback model in the {\tt Enzo} simulation code. We propose a novel way to match observations, using functional fits to the observed baryon makeup over a wide range of halo masses. The model MW galaxy is calibrated using three parameters: the star formation efficiency $\left(f_*\right)$, the efficiency of thermal energy from stellar feedback $\left(\epsilon\right)$ and the region into which feedback is injected $\left(r\ {\rm and}\ s\right)$. We find that changing the amount of feedback energy affects the baryon content most significantly. We then identify two sets of feedback parameter values that are both able to reproduce the baryonic properties for haloes between $10^{10}\,\mathrm{M_\odot}$ and $10^{12}\,\mathrm{M_\odot}$. We can potentially improve the agreement by incorporating more parameters or physics. If we choose to focus on one property at a time, we can obtain a more realistic halo baryon makeup. We show that the employed feedback prescription is insensitive to dark matter mass resolution between $10^5\,{\rm M_\odot}$ and $10^7\,{\rm M_\odot}$. Contrasting both star formation criteria and the corresponding combination of optimal feedback parameters, we also highlight that feedback is self-consistent: to match the same baryonic properties, with a relatively higher gas to stars conversion efficiency, the feedback strength required is lower, and vice versa. Lastly, we demonstrate that chaotic variance in the code can cause deviations of approximately 10\% and 25\% in the stellar and baryon mass in simulations evolved from identical initial conditions.
INTRODUCTION
The large-scale structure of the universe can be understood quite precisely by considering models that consist purely of dark matter. Numerical simulations of structure formation in such models have been performed with high accuracy and progressively higher resolution and larger box size (Efstathiou et al. 1985;Moore et al. 1999;Springel et al. 2008;Diemand et al. 2008;Klypin et al. 2011). But on the baryonic side, limitations in numerical resolution mean that several baryonic processes are not simulated from first principles. These processes include fundamental phenomena of the transformation of cold gas to stars, feedback from the energy released by stars, supernovae and massive black holes. Such effects are implemented using a subgrid approach in cosmological hydrodynamical simulations (Springel & Hernquist E-mail: bkoh@roe.ac.uk 2003; Governato et al. 2010;Agertz et al. 2013;Shimizu et al. 2019). If these analytical implementations are too simplistic, they risk being sensitive to poorly determined parameters, thus limiting their capability to make robust predictions. Improving the accuracy of subgrid physics requires both a better understanding of physical processes and identification of their limitations.
Feedback processes are essential in order to solve fundamental issues in numerical simulations such as the 'overcooling problem' (Cole 1991;White & Frenk 1991;Blanchard et al. 1992) and the 'angular momentum problem' (Katz & Gunn 1991;Navarro & White 1994;Hummels & Bryan 2012). Overcooling results in the formation of too massive galaxies particularly in high-resolution simulations (Davé et al. 2001). Feedback is also important for shaping the density profile of dark matter haloes (Pontzen & Governato 2012;Martizzi et al. 2013;Davis et al. 2014). In addition to these issues of small-scale subgrid physics, cosmological simulations contain additional uncertainties. In the absence of feedback, Genel et al. (2018) highlighted differences in the properties of galaxies induced by very slight changes in the initial positions of dark matter particles. Even if a galaxy is evolved from identical initial conditions, the simulation code can introduce variances which result in fluctuations in the simulated properties between repetitions of the same simulation (Keller et al. 2019). The problem is alleviated by the self-regulating nature of feedback (Keller et al. 2019) and highlights the need to understand the impact that subgrid implementations have on the resulting properties of galaxies in simulations.
Feedback processes that inject energy into the gas are therefore integral to numerical simulations. For smaller mass haloes, the energy comes mainly from supernovae explosions. In contrast, for more massive ones, the main energy sources are active galactic nuclei (AGN) (Sijacki et al. 2007;Booth & Schaye 2009;Teyssier et al. 2011) and gravitational heating as a result of infalling clumps of matter (Dekel & Birnboim 2006;Khochfar & Ostriker 2008). However, it is unclear how the energy should be distributed between generating motion and heating the gas. For supernova feedback alone, various techniques have been employed across different simulation codes (Stinson et al. 2006;Cen & Ostriker 2006;Dubois & Teyssier 2008;Dalla Vecchia & Schaye 2012;Smith et al. 2018). Given the huge diversity in the method of implementation, it is not unusual to expect significantly different outcomes (Thacker & Couchman 2000;Springel & Hernquist 2003;Okamoto et al. 2005;Oppenheimer & Davé 2006;Schaye et al. 2010), and variation in feedback effects is the most significant source of uncertainty in a cosmological simulation. In particular, the role of resolution should be emphasised: the resolution in cosmological simulations is limited but feedback occurs on all scales, so rigorous numerical convergence cannot be expected. The subgrid parameterisation, or at least the subgrid parameter values, will need to change according to the resolution in order to match calibrating observations, and there is no guarantee that all predicted galaxy properties will then be independent of resolution.
To reproduce a realistic picture of the observed universe, there is thus a need to calibrate the parameters of the appropriate subgrid routines (Schaye et al. 2015). These are adjusted to match specific observational properties of the galaxy population. By matching related properties, the simulation can then be used to answer a wide range of questions. For example, the feedback implementation in the 'Evolution and Assembly of GaLaxies and their Environments' (EAGLE) simulation project is calibrated to reproduce the observed z = 0.1 galaxy stellar mass function (GSMF), the relation between the mass of galaxies and their central black holes and realistic galaxy sizes (Schaye et al. 2015). The Illustris group calibrate their parameters to match various observational scaling relations and galaxy properties at low and intermediate redshifts (Vogelsberger et al. 2014). Despite the calibrations, there are shortcomings in each simulation. For example, Illustris recognised that the decrease of their simulated cosmic star formation rate density was too slow, leading to an update in their feedback prescription, resulting in the introduction of IllustrisTNG (Pillepich et al. 2018).
In contrast to these full cosmologically representative box simulations, zoom simulations focus computational resources on smaller volumes (Springel et al. 2008; Griffen et al. 2016; Wang et al. 2015). In particular, Wang et al. (2015) studied a halo mass range from dwarf masses (5 × 10^9 M_⊙) to Milky Way (MW) masses (2 × 10^12 M_⊙). They included baryonic processes and were able to reproduce the stellar to halo mass relation from abundance matching (Behroozi et al. 2013b; Moster et al. 2013; Kravtsov 2013) across a wide range of redshifts. However, they did not account for the mass of gas remaining in the haloes, and this is an important issue for the present analysis.
In this paper, we use zoom simulations of MW haloes in an attempt to quantify the stellar and gas mass present in such a halo at z = 0. In particular, we examine the degree of calibration allowed by the model introduced by Cen & Ostriker (1992). Although newer models are available, the Cen & Ostriker model remains one of the most widely used models in Enzo simulations. We calibrate our parameters governing star formation and feedback via a comparison with the inventory of baryonic and gravitating masses of cosmic structures presented in McGaugh et al. (2010), in particular the mass fraction of baryons in the halo and the conversion efficiency of gas into stars. Not only is this the first suite of simulations using these observables for feedback calibration, it also tests how well the Cen & Ostriker (1992) model can be calibrated.
This paper is structured as follows. Section 2 describes the generation of initial conditions used in the simulations, the code, and the setup used to evolve them. Also, we describe the parameters used for calibration and the analysis tools used to extract and analyse the results. Section 3 presents the properties from McGaugh et al. (2010) that we attempt to match, along with the observational fit of the Kennicutt-Schmidt relation (Kennicutt et al. 2007). Section 4 describes the results from various simulations: effects of single parameter variation, calibration of parameters to the results from McGaugh et al. (2010) and performance of the simulations in matching other constraints. Lastly, the results are summarised and discussed in Section 5.
SIMULATION SETUP AND ANALYSIS
This section provides an overview of the simulation setup and the associated subgrid physics. In particular, the focus is on a MW-sized halo, at which mass scale we expect that AGN feedback will be subdominant (Bower et al. 2006;Behroozi et al. 2010;Storchi-Bergmann 2014). The main parameters investigated will thus be related to star formation efficiency and supernova feedback, and one aim of this investigation is indeed to see to what extent we can reproduce the baryonic properties of the MW using solely these ingredients. As described by Crain et al. (2015), the resulting baryonic properties of the halo are very sensitive to the variations of feedback parameter values. Therefore, a detailed explanation of the role of each parameter in the physical model is necessary.
The cosmological parameters in this suite of simulations are taken from WMAP-9 (Bennett et al. 2013). The key parameters are Ω_m = 0.285, Ω_Λ = 0.715, Ω_b = 0.0461, h = 0.695 and σ_8 = 0.828 with the usual definitions. With these parameters, we generate initial conditions with MUlti-scale Initial Conditions (MUSIC) for cosmological simulations (Hahn & Abel 2011). We derive all zoom simulations from the parent simulation with a volume of L = 100 h^−1 cMpc with 256³ particles.
The simulation is evolved using Enzo, an adaptive meshrefinement (AMR) code (Bryan et al. 2014). Enzo uses a block-structured AMR framework (Berger & Colella 1989) to solve the equations of hydrodynamics in an Eulerian frame using multiple solvers. In the simulations presented here, we use the ZEUS (Stone & Norman 1992) hydro solver in combination with an N-body adaptive particle-mesh gravity solver (Efstathiou et al. 1985). Parameter space exploration is performed mainly on the star formation and feedback routines; the results of this exploration will be outlined extensively in Section 2.1 and 2.2. Lastly, the chemistry and cooling processes are handled by the Grackle library (Smith et al. 2017). We use the equilibrium cooling mode from Grackle, which utilises the tabulated cooling rates derived from the photoionisation code CLOUDY (Ferland et al. 2013) together with the UV background radiation given by Haardt & Madau (2012).
The MW-sized halo is initially identified from a dark matter only parent simulation through its merger history and final dark matter halo mass. It is isolated, has not experienced a major merger in its merger history since at least z = 2 and has a final mass of approximately 10^12 M_⊙. The particles within a high-resolution region, typically larger than the virial radius, then undergo additional levels of refinement in mass while the region's spatial resolution is increased. Each nested level is equivalent to an increase in spatial and mass resolution by a factor of two and eight, respectively. Contamination occurs if larger mass particles cross the region of interest (Oñorbe et al. 2014). In our simulations, we define a high-resolution region of three virial radii from the centre of the halo to carry out the refinement (Simpson et al. 2018) as a preventive measure. We use three nested levels, giving an effective resolution of 2048³ particles or a nested dark matter particle mass of 1.104 × 10^7 M_⊙. This nested simulation is evolved with an additional five levels of AMR, which is only allowed around particles within the high-resolution region, resulting in a maximum resolution of eight levels of spatial refinement or 2.196 comoving kpc (ckpc). This simulation setup is similar to that presented by Peeples et al. (2019) and Hummels et al. (2018).
From the high-resolution region of the MW halo, we identify an additional smaller halo with a mass of approximately 10^10 M_⊙. We then run a separate simulation zooming in only on this halo with two additional levels of initial nesting. The purpose of this smaller halo is to test the universality of the optimal feedback parameters from the MW zoom simulation. Due to the additional nesting levels, the dwarf is made up of approximately the same number of dark matter particles as in the MW halo. The increased mass resolution translates into an effective resolution of 8192³ particles or a nested dark matter particle mass of 1.715 × 10^5 M_⊙. Because of the additional nested levels, we reduce the number of AMR levels to three, maintaining a constant maximum spatial resolution of 2.196 ckpc.
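As a rough consistency check, the quoted particle masses follow directly from the box size, base grid and the cosmological parameters given above. The short Python sketch below reproduces them to within rounding, assuming the standard value of the critical density (the exact constants used internally by MUSIC and Enzo may differ slightly).

    # Consistency check on the quoted dark matter particle masses (assumes the
    # standard rho_crit,0 = 2.775e11 h^2 Msun Mpc^-3; small differences from the
    # values quoted in the text arise from rounding).
    OMEGA_M, OMEGA_B, H = 0.285, 0.0461, 0.695
    RHO_CRIT = 2.775e11 * H**2        # Msun Mpc^-3
    BOX_MPC = 100.0 / H               # 100 h^-1 cMpc expressed in Mpc
    N_BASE = 256                      # base grid of the parent simulation

    def dm_particle_mass(nested_levels):
        """Dark matter particle mass after `nested_levels` factor-of-two refinements."""
        cell = BOX_MPC / (N_BASE * 2**nested_levels)        # comoving cell size [Mpc]
        return (OMEGA_M - OMEGA_B) * RHO_CRIT * cell**3     # [Msun]

    print(f"parent run        : {dm_particle_mass(0):.3e} Msun")   # ~5.7e9
    print(f"MW zoom (3 levels): {dm_particle_mass(3):.3e} Msun")   # ~1.1e7
    print(f"dwarf (5 levels)  : {dm_particle_mass(5):.3e} Msun")   # ~1.7e5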
Star formation parameters
This paper employs the model described by Cen & Ostriker (1992), with modifications for the purpose of calibration. This model is one of the most commonly used in Enzo. The conditions required for star formation in a cell include:

(i) no further refinement within the cell;
(ii) gas density greater than a threshold density: ρ_gas > ρ_threshold;
(iii) convergent flow: ∇ · v < 0;
(iv) cooling time less than a dynamical time: t_cool < t_dyn;
(v) gas mass larger than the Jeans mass: m_gas > m_jeans;
(vi) star particle mass greater than a threshold mass.

If all the conditions are fulfilled, the algorithm generates a 'star particle' within the grid cell with a mass

m_* = f_* (Δt / t_dyn) m_gas,    (1)

where m_gas is the gas mass in the cell, Δt is the timestep, t_dyn is the dynamical time and f_* is a dimensionless efficiency factor. The mass of the generated star particle is compared to a user-defined minimum star particle mass. If the mass exceeds the threshold, a star particle is created. It is positioned in the centre of the cell and possesses the same peculiar velocity as the gas in the cell. It is treated dynamically like all other particles. An equivalent mass of gas to that of the star particle is then removed from the cell to ensure mass conservation.
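For illustration only, the way these checks combine can be sketched in a few lines of Python; the cell dictionary and its fields are placeholders rather than the actual Enzo data structures, and condition (i) is assumed to be enforced by calling the routine only on cells with no finer refinement.

    def try_form_star(cell, dt, f_star, rho_threshold, m_star_min):
        """Sketch of the Cen & Ostriker-style star formation check for a single
        finest-level cell (simplified; not the actual Enzo implementation)."""
        if cell["rho_gas"] <= rho_threshold:      # (ii) density threshold
            return None
        if cell["div_v"] >= 0.0:                  # (iii) convergent flow
            return None
        if cell["t_cool"] >= cell["t_dyn"]:       # (iv) rapid cooling
            return None
        if cell["m_gas"] <= cell["m_jeans"]:      # (v) Jeans instability check
            return None

        # Equation (1): timestep-dependent star particle mass
        m_star = f_star * (dt / cell["t_dyn"]) * cell["m_gas"]

        if m_star < m_star_min:                   # (vi) minimum star particle mass
            return None

        cell["m_gas"] -= m_star                   # remove gas to conserve mass
        return m_star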
To calibrate the simulation, certain aspects of the star formation criteria are modified. These include the Jeans instability check, the time dependence of star formation, the threshold stellar mass and the value of f_*. The following sections explain the role that each parameter plays; they are organised in the order in which each factor is used in the star formation condition check.
Jeans instability check
In item (v) of the list of conditions in Section 2.1, the creation of star particles is only allowed when the gas mass exceeds the Jeans mass of the cell. This criterion is aimed at low resolution simulations that cannot resolve local Jeans masses. However, modern implementations with better resolution resolve such clouds with multiple cells at the star formation threshold density. When the spatial resolution of the simulation is high enough to resolve the Jeans length, this particular check instead restricts star formation that should occur, because an individual cell needs to wait until enough mass has accumulated within it.
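For reference, one common convention for the Jeans mass entering such a check is sketched below; the prefactor and mean molecular weight vary between codes, so this is illustrative only and not necessarily the expression used internally by Enzo.

    import numpy as np

    G = 6.674e-8            # cm^3 g^-1 s^-2
    K_B = 1.381e-16         # erg K^-1
    M_H = 1.673e-24         # g
    MSUN = 1.989e33         # g

    def jeans_mass_msun(T, n_H, mu=1.22, gamma=5.0/3.0):
        """Jeans mass (one common convention) for gas at temperature T [K] and
        hydrogen number density n_H [cm^-3]."""
        rho = mu * M_H * n_H                                  # mass density [g cm^-3]
        c_s = np.sqrt(gamma * K_B * T / (mu * M_H))           # sound speed [cm s^-1]
        return (np.pi**2.5 / 6.0) * c_s**3 / (G**1.5 * np.sqrt(rho)) / MSUN

    # e.g. cold, dense gas: T = 100 K, n_H = 10 cm^-3 gives ~2e4 Msun
    print(f"{jeans_mass_msun(100.0, 10.0):.2e} Msun")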
Minimum star particle mass
Once a cell fulfils all five conditions for star formation, the final barrier to star formation is the minimum mass of a star particle that will be inserted into the simulation. This threshold is explicitly designed to prevent the production of too many star particles, which can increase computational costs significantly. However, the inability to exceed this minimum star particle mass can lead to a build-up of potential star-forming gas in surrounding cells. This accumulation then reaches a point where a burst in star formation occurs.
Timestep dependence of star formation
Two factors affect the mass of the star particle that is compared to the threshold value in Equation 1: Δt/t_dyn and f_*. They correspond to the timestep dependence of star formation and a conversion factor respectively. The Δt/t_dyn factor aims to explicitly satisfy the Kennicutt-Schmidt (KS) relation, which states that a fraction f_* of the gas will turn into stars over a dynamical time. However, this factor is introduced at multiple points in the star formation process, which impedes the promptness of star formation and its associated feedback by converting only a limited amount of gas into stars. By opting for timestep independent star formation, the factor Δt/t_dyn is removed from the calculation shown in Equation 1, resulting in a stellar mass of

m_* = f_* m_gas,    (2)

where the symbols have the same meaning as in Equation 1. In this timestep independent approach, the simulation instantaneously converts a fraction f_* of the gas into stars in each timestep, and the associated feedback immediately starts regulating further star formation. This modification greatly improves the efficiency of the star formation and feedback processes but requires further adjustments, as discussed in detail in later sections. As we show in Sections 4.2 and 4.3, the timestep independent star formation model generally leads to a smoother build-up of stellar mass, but not without some additional effects. In terms of performance, a simulation employing timestep dependent star formation takes roughly a month to complete, whereas a similar setup with timestep independent star formation completes in approximately two days, reflecting the production of fewer star particles. Shorter run times allow for more exploration of the parameter space. However, this choice has a significant impact on the resulting KS relation. These effects will be quantified and discussed in Section 4. In summary, we calibrate the feedback in two different star formation setups, as detailed in Table 1. The reasons for two setups will be discussed in Section 4.3.
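The difference between the two modes amounts to a one-line change, sketched here for clarity:

    def star_mass_timestep_dependent(f_star, dt, t_dyn, m_gas):
        # Equation (1): only the fraction dt/t_dyn of the eligible gas forms stars this step
        return f_star * (dt / t_dyn) * m_gas

    def star_mass_timestep_independent(f_star, m_gas):
        # Equation (2): the fraction f_star of the eligible gas forms stars immediately
        return f_star * m_gas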
2.1.4 Star formation efficiency factor, f_*

As mentioned, regardless of the timestep dependence of star formation, there exists an efficiency factor, f_*, in both Equations 1 and 2. This parameter regulates the conversion efficiency of the identified gas mass in a cell into star particles: f_* can vary between zero and unity, excluding the limits at which none or all of the identified gas mass in the cell would be converted to stellar mass respectively. The latter scenario would remove all the gas from the cell, leaving a cell with zero density and crashing the simulation.
Feedback parameters
Although the creation of a star particle is immediate, feedback happens over a longer timescale, designed to mimic the gradual process of star formation. In each timestep, the star forming mass is given by

m_form = m_0 [ (1 + x_1) e^(−x_1) − (1 + x_2) e^(−x_2) ],  with  x_1 = (t − t_0)/t_dyn  and  x_2 = (t + Δt − t_0)/t_dyn,    (3)

where m_0 is the star particle mass, and t_0 and t are the creation time of the star particle and the current time in the simulation respectively. Through this implementation, according to Equation 3, the rate of star formation increases linearly and peaks after one dynamical time before declining exponentially. We adopt the Smith et al. (2011) modification of the Cen & Ostriker (2006) thermal supernova feedback model. The star particles add thermal feedback to a set of neighbouring grid cells with a size and geometry that can be tuned by the user, known as distributed stellar feedback. This feedback continues until 12 dynamical times after the particle's creation. In each timestep, feedback is deposited in the form of mass, energy, and metals.
Mass is removed from the star particle and returned to the grid as gas, given by

m_ej = f_ej m_form,    (4)

where f_ej is the fraction of mass removed. The momentum of this gas,

p_ej = m_ej v_particle,    (5)

where v_particle is the velocity of the star particle, is conserved by addition into the grid cell hosting the star. The feedback energy deposited into the user-defined cells is

E_fb = ε m_form c²,    (6)

where ε and c are the feedback efficiency and the speed of light respectively. For an ε value of 10^−5 (Cen & Ostriker 1992), an energy of 10^51 erg is injected for every ∼ 56 M_⊙ of stars formed. Metals are also returned to the grid cells, with a metallicity set by Z_star and η, the star particle metallicity and the fraction of metals yielded from the star respectively. We assume that 25% of the mass is removed from the star particle and returned to the grid as gas (f_ej = 0.25), with 10% of this returned gas being metals (η = 0.1), consistent with Cen & Ostriker (1992). These values result in a total metal yield of 0.025 of the mass of the star particle, similar to the calculations by Madau et al. (1996). This metal yield is also consistent with average values in the MW, with a mean SFR of ∼ 3 M_⊙ yr^−1, a core-collapse supernova rate of 1 per 40 years, and an IMF-averaged metal yield of ∼ 3 M_⊙ per supernova. Therefore, we leave the values of both f_ej and η unaltered.
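A simplified sketch of the per-timestep bookkeeping implied by Equations (3), (4) and (6) is given below. It is not the Enzo routine: the ejecta metal content is approximated using the total yield quoted above, the momentum of Equation (5) is only noted in a comment, and the distribution of the energy over neighbouring cells (Section 2.2.2) is omitted.

    import numpy as np

    C_CGS = 2.998e10      # speed of light [cm s^-1]
    MSUN_G = 1.989e33     # solar mass [g]

    def feedback_step(m0, t, t0, dt, t_dyn, eps=1.0e-5, f_ej=0.25, eta=0.1, z_star=0.0):
        """One feedback step for a star particle of mass m0 [Msun] created at time t0.
        Returns (ejected gas mass [Msun], thermal energy [erg], ejected metal mass [Msun]).
        The ejecta metallicity (eta + z_star) is a simplifying assumption."""
        x1 = (t - t0) / t_dyn
        x2 = (t + dt - t0) / t_dyn
        # Equation (3): stellar mass 'forming' (i.e. feeding back) during this timestep
        m_form = m0 * ((1.0 + x1) * np.exp(-x1) - (1.0 + x2) * np.exp(-x2))
        m_ej = f_ej * m_form                          # Equation (4): gas returned to the grid
        # Equation (5): the returned gas carries momentum m_ej * v_particle (not tracked here)
        e_fb = eps * m_form * MSUN_G * C_CGS**2       # Equation (6): thermal feedback energy [erg]
        m_metal = (eta + z_star) * m_ej               # assumed metal content of the ejecta
        return m_ej, e_fb, m_metal

    # Check quoted in the text: eps = 1e-5 corresponds to ~1e51 erg per ~56 Msun of stars formed
    print(1e51 / (1.0e-5 * MSUN_G * C_CGS**2))        # ~56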
Instead, we focus on the factors that influence the energy injection, both in terms of the amount and the physical extent. We select three factors in the feedback implementation to be varied for the calibration of the simulations. They are ε, the radius of feedback (r) and the number of cells (s) within r. The first parameter is related to the amount of feedback energy emitted by the star particle (see Equation 6), while the remaining parameters work together to define the extent of energy injection. These will be described in more detail in the following sections.
Feedback efficiency, ε
The amount of feedback energy injected as thermal energy is given by Equation 6. It is dependent on both the rest mass energy (m_form × c²) and a user-defined fraction, ε. The former relies on the amount of stellar mass created per timestep (see Equation 3), and the latter defines the percentage of the rest mass energy injected into the IGM. Together with Equation 4, this implementation is similar to the temporal release of Galactic Superwind energy and ejected mass from stars into the IGM discussed in Cen & Ostriker (2006).
Feedback energy injection extent
In the original feedback method described by Cen & Ostriker (2006), all of the feedback energy is injected into the grid cell housing the star particle. However, Smith et al. (2011) modified this to allow the feedback to be spread across multiple zones as a means of bypassing overcooling issues, where too much energy injected into a single grid cell can result in unphysically short cooling times. This setup is known as distributed stellar feedback, and it is described by r and s. These parameters work together to define the physical extent of the injection of feedback from the star particle. We can visualise it in terms of a cube with the star particle at its centre. r is the distance of a cell from the star particle. When r = 1, it refers to a 3 × 3 × 3 cube, since all the cells are within one cell distance of the star particle. Similarly, when r = 2, it refers to a 5 × 5 × 5 cube around the star particle. These alternatives are illustrated in two dimensions in the left and right panels of Figure 1 respectively.
The parameter s gives the number of steps allowed to be taken from the star particle within the cube determined by r. Referring to the left panel of Figure 1, setting s = 2 corresponds to an allowable two steps of movement away from the star particle, specifying injection within the cells labelled 1 and 2 in the 3 × 3 cube. As the value of r increases, shown in the right panel of Figure 1, so the maximum accessible value of s increases. These increased values translate to more flexibility in the usage of distributed stellar feedback.
In summary, we calibrate our simulations with ε, r and s, and f_* to match the observations. For the remainder of the paper, when discussing the combination of parameters in a simulation setup, they will be referred to as a vector with components (ε, r_s, f_*), e.g., (1.0×10^−5, 1_3, 0.1).
Table 2. Gravitationally bound systems and the methods used to determine their circular velocities (see Section 3.1).

Gravitationally bound systems        Methods
Stellar dominated spiral galaxies    Rotation velocities
Gas dominated galaxies               Baryonic Tully-Fisher relation
Elliptical galaxies                  Gravitational lensing
Local group dwarfs                   Direct measurement
Clusters of galaxies                 Hot X-ray emitting gas
Analysis
Haloes are identified using the Robust Overdensity Calculation using k-Space Topologically Adaptive Refinement (ROCKSTAR) halo finder (Behroozi et al. 2013a). It is a 6-dimensional phase-space finder, using both positions and velocities of particles to locate and define a halo. In regions where the density contrast is insufficient to distinguish which halo hosts a given particle, ROCKSTAR can differentiate subhaloes and major mergers that are close to the centres of their host haloes. This feature is particularly useful in identifying main haloes when creating zoom simulations of lower mass haloes. Analysis of the simulation results is then carried out using the yt analysis toolkit (Turk et al. 2011).
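As an illustration of the kind of measurement extracted with yt, the sketch below sums the gas and particle mass inside a sphere around a halo centre. The snapshot path, the centre, the radius and the star particle selection (Enzo particle_type == 2) are assumptions made for the purpose of the example and are not taken from the text.

    import yt

    ds = yt.load("DD0100/DD0100")                    # hypothetical Enzo snapshot
    centre = [0.5, 0.5, 0.5]                         # halo centre in code units (e.g. from ROCKSTAR)
    sphere = ds.sphere(centre, (300.0, "kpc"))       # assumed r_500 for this halo

    m_gas = sphere.quantities.total_quantity(("gas", "cell_mass")).to("Msun")
    m_part = sphere.quantities.total_quantity(("all", "particle_mass")).to("Msun")

    # Enzo star particles are conventionally particle_type == 2 (an assumption here)
    ptype = sphere[("all", "particle_type")]
    m_star = sphere[("all", "particle_mass")][ptype == 2].sum().to("Msun")

    print(m_gas, m_star, m_part)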
Baryon content of cosmic structures
The main observables matched in this suite of simulations are taken from the work of McGaugh et al. (2010), where the authors attempted to quantify the distribution of baryonic mass within cosmic structures. Galaxies are broadly categorised into rotationally supported and pressure supported systems. These are further divided into stellar dominated spiral galaxies and gas dominated galaxies for the rotationally supported systems, and elliptical galaxies, local group dwarfs and some clusters of galaxies for the pressure supported systems. The primary method for determining the total mass budget in the different systems is their equivalent circular velocity (V_c), obtained through various methods described in detail in McGaugh et al. (2010) and summarised in Table 2. McGaugh et al. (2010) present their results using r_500, a radius within which the enclosed density is 500 times the critical density of the universe. The main results are the detected baryon fraction within this radius,

f_d = m_b / (f_b m_500),

and the conversion efficiency of baryons into stars,

f_s = m_* / (f_b m_500),

where m_b, m_* and m_500 refer to the baryonic, stellar and total mass within this radius respectively, and f_b is the universal baryon fraction, determined to be 0.17 ± 0.01 (Komatsu et al. 2009). One important point to note is that these fractions are dependent on the choice of radius. To facilitate comparison of our results with this work, we produced fitting formulae for f_d and f_s against the data from Figure 2 in McGaugh et al. (2010), which we illustrate in Figure 2, expressed in terms of the auxiliary variables

x = [log_10(m_500 / M_⊙) − 12.91] / 1.12,

and

y = [log_10(m_500 / M_⊙) − 12.19] / 1.18.
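Given a halo's stellar, baryonic and total masses within r_500, these quantities reduce to a few lines of Python; the helper below (with names of our own choosing) simply evaluates the fractions and the auxiliary variables defined above, not the fitting formulae themselves.

    import numpy as np

    F_B = 0.17   # universal baryon fraction (Komatsu et al. 2009)

    def baryon_fractions(m_star, m_baryon, m_500, f_b=F_B):
        """Detected baryon fraction f_d and star conversion efficiency f_s within r_500."""
        f_d = m_baryon / (f_b * m_500)
        f_s = m_star / (f_b * m_500)
        return f_d, f_s

    def xy_variables(m_500):
        """Auxiliary variables used in our fits to McGaugh et al. (2010); m_500 in Msun."""
        logm = np.log10(m_500)
        x = (logm - 12.91) / 1.12
        y = (logm - 12.19) / 1.18
        return x, y

    # e.g. a 1e12 Msun halo with 5e10 Msun of stars and 8e10 Msun of baryons
    print(baryon_fractions(5e10, 8e10, 1e12))   # (~0.47, ~0.29)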
We aim to calibrate our suite of simulations to yield a good match to these fits. Also, we will compare our simulated galaxy properties to the Kennicutt-Schmidt relation, which serves as an additional constraint.
Kennicutt-Schmidt relation
The KS relation is a measure of the correlation between gas surface density and the SFR per unit area. From the work of Schmidt (1959), Kennicutt (1989, 1998), Kennicutt et al. (2007) and Bigiel et al. (2008), there appears to be a tight correlation between these measured properties on galactic scales (∼ kpc). This strong relation makes it one of the critical observations that simulations with star formation attempt to match.
We adopt a similar methodology to that of the AGORA project (Kim et al. 2016). The SFRs are calculated using the mass of star particles, time-averaged over the past 20 Myr of the simulation snapshot. Together with the gas density, they are then deposited onto a fixed resolution grid of 750 pc, consistent with the methodology of Bigiel et al. (2008), to derive the SFR and gas surface densities required by the KS relation. In fact, we find that the conclusions drawn are insensitive to changes in the grid resolution. For the non-zero SFR surface density patches, we will also compare our results to

log Σ_SFR = 1.37 log Σ_gas − 3.78,    (14)

which is obtained from the best observational fit given by Equation 8 in Kennicutt et al. (2007).
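A minimal version of this deposition, using plain NumPy rather than the full AGORA pipeline, is sketched below. The input arrays (projected positions, star particle masses and ages, gas cell masses) and the map extent are placeholders, and only star particles younger than 20 Myr contribute to the SFR map.

    import numpy as np

    def ks_maps(x_star, y_star, m_star, age_star, x_gas, y_gas, m_gas,
                extent_kpc=30.0, pixel_pc=750.0, t_avg_myr=20.0):
        """Deposit SFR and gas mass onto a fixed grid and return surface densities.
        Positions in kpc, masses in Msun, ages in Myr; a sketch of the method only."""
        pixel_kpc = pixel_pc / 1000.0
        nbin = int(2 * extent_kpc / pixel_kpc)
        edges = np.linspace(-extent_kpc, extent_kpc, nbin + 1)

        young = age_star < t_avg_myr
        sfr_map, _, _ = np.histogram2d(x_star[young], y_star[young], bins=[edges, edges],
                                       weights=m_star[young] / (t_avg_myr * 1.0e6))  # Msun/yr
        gas_map, _, _ = np.histogram2d(x_gas, y_gas, bins=[edges, edges], weights=m_gas)

        area_pc2 = pixel_pc**2
        sigma_sfr = sfr_map / (area_pc2 * 1.0e-6)     # Msun yr^-1 kpc^-2
        sigma_gas = gas_map / area_pc2                # Msun pc^-2
        # keep only the non-zero SFR patches, as in the text
        return sigma_sfr[sfr_map > 0], sigma_gas[sfr_map > 0]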
MW galaxy zoom simulations with Setup 1
We explore the parameter space by switching on the Jeans instability check, applying timestep dependent star formation and setting a threshold star particle mass of 10^5 M_⊙ (Setup 1); see Table 1. With this setup, we run a total of 22 simulations by modifying f_* (see Equation 1), ε (see Equation 6), and r and s (see Section 2.2.2), as shown in Figure 3. This explored region of parameter space is motivated both physically and numerically. Cen & Ostriker (1992) applied a value of ε = 10^−4.5, which is similar to other work (Ostriker & Cowie 1981; Dekel & Rees 1987). The values of r and s are restricted by the maximum number of cells used to define a grid. Lastly, we can constrain the range of values that f_* can take with the ratio of f_s to f_d. From McGaugh et al. (2010), f_* is limited between approximately 0.1 and 0.9 across the halo mass range. Out of the 22 simulations, we classify the runs into those that reached z = 0 (completed) and those that did not (failed), since we are interested in the relevant properties at z = 0. A fraction of the simulations were unable to reach the final redshift due to unrecoverable errors in the hydrodynamics solver, mostly associated with extreme star formation and/or feedback parameters. Since the failed simulations contain extreme feedback parameters, e.g., large amounts of feedback energy, it is unlikely that these prescriptions would result in the best match to the observed properties presented in Section 4.1.1. Overlapping points with conflicting conclusions exist in Figure 3 because we are showing a 2-dimensional projection of the 3-dimensional parameter space.
Comparison to baryonic properties from McGaugh et al. (2010) -Setup 1
Initially, we attempted to cover the parameter space optimally with a minimal number of simulations using Latin Hypercube Sampling (McKay et al. 1979). We wanted to minimise the maximal distance between various points in our feedback parameter space, as described by Heitmann et al. (2009). However, due to the failure of several runs to reach z = 0, it is not possible to obtain a space-filling design. Therefore, we take a more fundamental approach to quantify how changing each parameter affects the observables. This result is presented in Figure 4, showing a plot of f_s against f_d across a range of m_500. From the initial values of (1.0×10^−5, 1_3, 0.1), we vary ε only, which corresponds to a change in the strength of feedback. Increasing the strength of feedback reduces both the f_s and f_d parameters of the halo (see the blue arrow in Figure 4). This evolution can be easily explained by the increased expulsion of gas due to stronger feedback, reducing the amount of fuel available to form stars, which leads to a decrease in f_s. The removal of gas also causes the amount of baryons within r_500, and hence f_d, to decrease.
We then try to increase f_*. This change has a direct impact on the total stellar mass as more gas mass is converted into stars. However, this increased star formation yields stronger feedback. Therefore, the net result of increasing f_* is similar to increasing ε, which decreases both f_s and f_d (see the green arrow in Figure 4). This effect is evident, albeit to a lesser extent, from the small transition of the cyan point to the purple point on the top right of the plot. To illustrate the effect of increasing f_* more clearly, we add another green arrow connecting a second set of data points (the grey and light green dots). This difference in the impact of f_* also suggests its sensitivity to the other feedback parameters.
The last parameters to adjust are r and s. Essentially, we are increasing the size of the cube into which the feedback energy is injected (see Figure 1). By increasing r (and, correspondingly, s), f_s and f_d are reduced, similarly to the effect of increasing ε and f_*. However, this phenomenon only persists until r = 3 and s = 9, which corresponds to a 7³ box or 343 cells centred around the star particle. Beyond this point, the trend changes: a further extension of the feedback injection decreases f_s but increases f_d, indicating the presence of a turnaround point. As energy is deposited further from the star particle, the gas is kept at a larger distance from the centre of the gravitational potential well, as seen in Figure 5. As a result, m_* decreases, as fewer stars form due to a deprivation of fuel for star formation, while m_gas increases, as more gas is now present. Increasing the physical extent of feedback injection beyond r = 3 and s = 9 only serves to dilute the amount of feedback energy per cell, leading to gas remaining near the virial radius of the halo. Thus f_d increases while f_s decreases. Furthermore, the average number of cells within a single grid in an Enzo simulation is not likely to be much larger than about 7³, so extending beyond r = 3 and s = 9 should be avoided, as feedback is only deposited on the local grid.
From the 16 completed simulations, the combination of parameters that yielded the most MW-like properties in the halo is (2.5×10^−4, 1_3, 0.2), which is represented by the pale green dot in Figure 4. The halo contains a stellar mass comparable to the MW while having approximately 50% more baryon mass than the MW halo. This point is the closest match to the target for the region of parameter space that we sampled. The next best set of parameters that produce halo properties matching the target is (5.0×10^−4, 1_3, 0.1). While it provides better f_d agreement, the value of f_s is approximately zero. From the trends and the best match in Figure 4, further improvement in the agreement of halo properties will only be marginal. In order to achieve a better agreement, we suggest including other free parameters or even modifying the star formation and feedback model. Furthermore, this set of parameters is determined for a quiescent halo, as discussed earlier. The success of this calibrated feedback prescription is likely to depend on the growth history as well. However, it is not within the scope of this work to design such modifications or test the robustness of our calibration against different merger histories.
Kennicutt-Schmidt relation -Setup 1
As discussed earlier, the KS relation provides an additional constraint on the feedback calibration beyond the global baryon makeup of a MW halo. To apply this constraint, we use the methodology described in Section 3.2 to compare and contrast with the observed KS relation (blue line) in Figure 6. We also include a rough approximation of the observed values of nearby galaxies from Bigiel et al. (2008) in the form of blue hatched contours.
The results show that the simulation data intersect with observations and the fit given by Equation 14, but the slopes of the simulation data differ from the KS relation in every feedback prescription. Most of our simulations manage to reproduce the characteristic 'threshold' gas density value of approximately 10 M_⊙ pc^−2, which marks the transition point between high and low star formation efficiency and is apparent from the blue hatched contour. The slopes of the relations in the simulations do not appear to be significantly different from each other despite changes in the subgrid physics parameters. However, they are consistently steeper than the gradient of the observed relation.
Figure 3. The green dots and red crosses represent runs that reached and failed to reach z = 0 respectively. We can identify regions of parameter space more likely to result in the inability of the simulation to reach z = 0; the causes are explained in more detail in Section 4.1.

When increasing ε, we observe a shift towards lower SFR but higher gas density in the transition between the green circles (1.0×10^−5, 1_3, 0.1) and the red squares (5.0×10^−4, 1_3, 0.1). This shift can be explained by the higher feedback energy budget associated with a larger ε value, which inhibits further star formation. The simulation data points are insensitive to any increase in r until r = 3. Beyond this, comparable SFR densities are associated with higher gas densities (compare the green crosses (r = 4) and pink diamonds (r = 5)). This trend is consistent with the explanation provided for Figure 5. Lastly, from the data points of (1.0×10^−5, 1_3, 0.1) and (1.0×10^−5, 1_3, 0.2), it appears that increasing f_* does not affect the relation significantly.
The best parameter values (purple hexagon) lie along the KS relation fit but deviate from observations as they are clustered around high gas densities. This discrepancy with Bigiel et al. (2008) suggests that this combination of ε and f_* is too weak to create patches of lower gas surface density. However, adjustment of either factor will, in turn, affect f_s and f_d, leading to a halo that reproduces the KS relation instead of the observations of McGaugh et al. (2010).
Haloes in the high-resolution region -Setup 1
Since we specify a safety factor of three virial radii to prevent contamination of the MW halo in the zoom simulation, there are other central and satellite haloes of varying mass in this region. Figure 7 illustrates the properties of other central haloes in the simulation with the best feedback prescription of (2.5×10 −4 , 1 3, 0.2). This plot is not presented in a similar way to Figure 4 because we are looking at a range of halo masses. Instead, we populate Figure 2 with the corresponding f s and f d of various central haloes in the high-resolution region of the MW galaxy zoom simulation.
We present the graph of f_d against m_500 on the upper panel and f_s against m_500 on the lower panel of Figure 7. The other central haloes generally retain close to the universal baryon fraction while possessing f_s that is close to observations. These haloes are in contrast to the MW halo (rightmost red star), which hints at the need for additional modifications to understand and determine whether this discrepancy is a numerical byproduct of the poorer relative mass resolution of the lower mass haloes. Therefore, we attempt a zoom simulation of a dwarf galaxy of around 10^10 M_⊙ with a mass resolution comparable to the MW zoom, to investigate whether the conclusion from Figure 7 is due to resolution and whether this feedback prescription is universal.

Figure 5. Cumulative plot of m_gas against halo radius. The different coloured lines represent simulations with various r and s, from (1.0×10^−5, 1_3, 0.1) to (1.0×10^−5, 5_15, 0.1). As the extent of feedback injection is increased beyond r = 3 and s = 9, the amount of baryons in the outer region of the halo is significantly higher. This trend highlights the inhibition of gas collapse to form stars as the extent increases, providing support for the explanation given in Section 4.1.1 for the trend in Figure 4.
Dwarf galaxy zoom simulations with Setup 1
Using the combination of parameters (2.5×10^−4, 1_3, 0.2), we implement the feedback prescription in a dwarf galaxy with a mass of approximately 10^10 M_⊙. However, the results indicate an absence of stars within the halo, consistent with Figure 7. Reviewing the star formation routine (see Section 2.1), we find that the Jeans instability check is the bottleneck of star formation. Given the spatial resolution implemented in this dwarf galaxy, according to the discussion in Section 2.1.1, the Jeans instability check restricts star formation that should occur in reality. Therefore, to allow star formation, we switch off this Jeans instability check in the star formation routine. We label such runs as NJ. Figure 8 illustrates the virial (black) and stellar mass evolution (red) in the dwarf galaxy with different setups (solid vs dashed lines). As expected, removing the Jeans mass criterion allows stars to form in the dwarf galaxy zoom simulation (solid red line). However, star formation starts around z = 2, which is late compared to the MW zoom simulation, for which star formation commenced at z ≈ 6.5. Further investigation yielded the conclusion that the star formation threshold mass is the next limiting factor. Therefore, we reduce the threshold mass for star particle creation to zero, which relaxes the condition for star formation, allowing star particles to be created at z = 8 in the simulation. On top of these changes, we switch off the timestep dependence of star formation. This results in Setup 2 as shown in Table 1. The purpose of Δt/t_dyn in Equation 1 is to ensure the adherence of star formation to the KS relation. However, in Equation 3, where feedback is modelled to occur across time, additional factors of Δt/t_dyn are present to regulate these processes according to the KS relation. Hence, by switching to timestep independent star formation, we improve the promptness of the feedback. Lastly, since star formation is now instantaneous once the conditions are met, high-density regions of gas are absent, reducing the time used to calculate the hydrodynamic evolution in the simulation. This absence of high-density gas is evident from the number of timesteps required for the evolution to reach z = 0 and the time per timestep. For an identical feedback prescription, Setup 1 takes 1263 timesteps and ∼ 435 s per timestep to reach z = 0, in stark contrast to Setup 2, which takes 663 timesteps and ∼ 125 s per timestep. The net result is an improvement in the completion time of the simulations from weeks to days.

Figure 6. A graph of SFR surface density against gas surface density illustrating the KS relation. Different coloured points are simulation data at sub-kpc resolution, shown together with a rough approximation of the observations of nearby galaxies by Bigiel et al. (2008), represented by the blue hatched contours. The blue line is derived from the observational fit of Kennicutt et al. (2007). There is overlap between the simulation and observation but there are differences that will be discussed in Section 4.1.2.

Figure 8. Redshift evolution of dark matter and stellar mass in the dwarf galaxy. Solid and dashed lines show the mass evolution of the NJ runs and Setup 2 runs respectively. Black lines refer to the dark matter mass evolution while red lines refer to the stellar mass evolution in the halo. By only switching off the Jeans instability check, the dwarf galaxy starts to form stars, albeit only at z ≈ 2, which is too late. Therefore, we need a full transition to Setup 2, where the minimum star particle mass is set to zero, in order to allow for star formation at an earlier time. Refer to Section 4.2 for discussion.
In summary, we modify the setup to switch off the Jeans instability check, turn off the timestep dependence of star formation and remove the requirement of a minimum star particle mass. This results in Setup 2, shown in Table 1. This setup enables us to recover a more realistic star formation history beginning at z ∼ 8 (see Figure 8), which is the main motivation for the switch in setup. However, we do not compare the properties of the dwarf galaxy to observations, for reasons that will be explained in Section 4.3.

Figure 9. Redshift evolution of dark matter and stellar mass in the MW halo. Solid and dashed lines show the mass evolution of the NJ runs and the runs with Setup 2 respectively. Black lines refer to the dark matter mass evolution while red lines show the stellar mass evolution in the halo. By changing the star formation conditions, we obtain a smoother stellar mass evolution while not affecting the dark matter mass evolution. Refer to Section 4.3 for discussion.
Simulations with Setup 2
Due to star formation issues in the dwarf galaxy zoom simulations, we make significant changes in the simulation setup. In Section 4.2, we show that the stellar mass of a dwarf galaxy at z = 0 changed from zero to ∼ 10^9 M_⊙ by switching to Setup 2. We now have to review the results of the MW galaxy presented in Section 4.1. Figure 9 shows the evolution of the dark matter and stellar mass of the MW halo in different setups. The lines and labels are similar to Figure 8. From the identical dark matter mass evolution in Figure 9 for different setups (black lines), we know that we are comparing the same halo across simulations. However, the stellar mass evolution paints a different picture. Comparing both setups, although the haloes start forming stars at the same time (z ≈ 6.5), the simulation using Setup 2 has a lower initial and final stellar mass as a result of its corresponding relaxed star formation conditions. With the minimum mass of the star particles set to zero, the stars are allowed to form with a smaller mass, which explains a lower starting point in Setup 2. Between z = 1 and z = 0 in Setup 1, we note a spike in stellar mass due to the build-up of gas eligible for star formation (see Section 2.1.3). Despite these differences in the star formation history, the most significant one is the stellar mass of the halo at z = 0. The final stellar mass of the MW halo in Setup 1 is approximately 10^10 M_⊙, which is two orders of magnitude higher than that in the new run with a value of roughly 10^8 M_⊙. This difference means that these haloes have vastly different f_d and f_s.
Due to the non-linear coupling of the various processes, changing individual prescriptions always requires new parameter fitting (Crain et al. 2015). With a new star formation setup, we have to re-explore the feedback parameter space with Setup 2. However, we have two distinct advantages compared to before. The first is that we understand the general effects that changing the feedback parameters have on the f_s and f_d of the halo (see Figure 4). Secondly, the simulations complete much faster, allowing us to obtain more data points, both in the general feedback parameter space and in the region around the best match to observations. This improvement will help us narrow down the feedback prescription, and possibly identify more than one combination that yields a close match. Obtaining more than one set of parameters will open up the possibility of testing the robustness of the feedback prescription in the MW halo zoom simulations, the haloes in the high-resolution region and the dwarf galaxy zoom simulations.
MW galaxy zoom simulations with Setup 2
We perform the following parameter space exploration with Setup 2 in Table 1. With this setup, we run a total of 49 simulations in order to calibrate the feedback prescription, and we make a similar classification as before, shown in Figure 3. We summarise the various properties of the halo of interest of simulations with Setup 1 and 2 in Table 3. This table includes simulations that will be discussed in Sections 4.5 and 4.6.
From the 49 simulations, only one simulation, with (3.0×10^−5, 1_1, 1.0), failed to reach z = 0, due to the complete removal of gas when stars form. The process of iteration started from the best combination of parameters found in Section 4.1.1, (2.5×10^−4, 1_3, 0.2), and progressed based on the trends found in Figure 4 to move the simulation data point closer to the target. This process will be explained later. We introduce a measure of closeness between the simulated and the observed galaxy properties via the Cartesian distance to the target,

d = [ (f_d,sim − f_d,obs)² + (f_s,sim − f_s,obs)² ]^(1/2),

where the subscripts sim and obs refer to simulation and observation respectively. Lower values of d represent a more realistic simulated galaxy in terms of both f_s and f_d. For the goodness of fit of individual properties, we refer to Table 3. Comparing the feedback parameter values covered in both Setup 1 and 2, it is clear that they do not cover an equal area of parameter space. The main differences lie in the usage of high f_* with low values of r and ε in Setup 2 as compared to Setup 1. There are two significant volumes of parameter space not covered in Setup 2: large values of ε coupled with low r and f_*, and large values of r with low values of ε and f_*. Also, there are regions (intermediate values of ε and f_*, high values of r and intermediate values of f_*) in the parameter space of Setup 2 that are not sampled. The reason why we do not have any simulations in these regions will be explained in the next section with Figure 10.
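Selecting the preferred prescription then amounts to minimising d over the completed runs, for example as in the sketch below; the function name and the numerical values are illustrative only and are not taken from the simulations.

    import numpy as np

    def distance_to_target(f_d_sim, f_s_sim, f_d_obs, f_s_obs):
        """Cartesian distance d in the (f_d, f_s) plane between a simulated halo
        and the McGaugh et al. (2010) expectation for its m_500."""
        return np.hypot(f_d_sim - f_d_obs, f_s_sim - f_s_obs)

    # Illustrative (not actual) run results: {label: (f_d_sim, f_s_sim, f_d_obs, f_s_obs)}
    runs = {
        "(3.0e-5, 1_1, 0.9)": (0.40, 0.25, 0.45, 0.30),
        "(2.5e-5, 1_1, 0.9)": (0.50, 0.28, 0.45, 0.30),
    }
    best = min(runs, key=lambda k: distance_to_target(*runs[k]))
    print(best, distance_to_target(*runs[best]))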
Comparison to baryonic properties from McGaugh et al. (2010) -Setup 2
We will identify the best star formation and feedback parameters through an iterative process beginning from the initial point (2.5×10^−4, 1_3, 0.2) from before, applying the knowledge of the trends from Figure 4. We use arrows to represent the general movement of the data points due to the initial adjustments of f_* and ε, before using r and s for the finer final adjustments in the f_s and f_d plane. We present this with a representative set of simulations in Figure 10, similar to Figure 4, starting from the best combination of parameters (blue dot) in Setup 1. It is evident that an identical feedback prescription in different settings produces a MW with disparate f_s and f_d. In Setup 2, the previously optimal values produced a MW galaxy with minimal stellar mass. This small amount of stars at z = 0 is a result of the relaxed star formation conditions producing numerous small star formation events, which instantly yield feedback and reduce future star formation. From the starting point, we increase f_* from 0.2 to 0.9 (see the green arrow in Figure 10). This trend indicates that as f_* increases, f_d decreases while f_s stays constant, which is in agreement with the combination of effects of the green and blue arrows shown in Figure 4. Despite only having two data points, we know from the direction given by the green arrow in Figure 4 that increasing f_* will have the same effect on the properties as increasing ε (blue arrow). Therefore, if we increase f_* further in Figure 4, we can expect it to follow the last blue arrow, which is a horizontal motion of decreasing f_d with constant f_s. Together with the immediate feedback from stars, increasing f_* converts more gas into stars, which reduces the amount of gas, leading to the decline in f_d. Although more stars form initially, the feedback is stronger, reducing the amount of gas available to form more stars as the halo ages, resulting in a constant f_s. Therefore, we increase f_* in an attempt to move the data point as far left as possible in Figure 10 in preparation for the next step. The simulation with f_* = 1.0 does not produce a MW galaxy with significantly different f_s and f_d. Furthermore, this value of f_* caused the only failed run out of the 49 simulations. Hence, we settle on an f_* value of 0.9 (orange dot) as the starting point for the next phase of iteration.
After obtaining the minimal f_d with (2.5×10^−4, 1_3, 0.9), we attempt to increase f_s and f_d in the next iteration to move closer to the target. From what we have learned from Figure 4, we can achieve this by either decreasing ε or decreasing r. Since r is already at a minimum, lowering ε is the only option. We present only a representative set of data points connected by the blue arrows to illustrate the general change in f_s and f_d due to smaller ε values. This increase in f_s and f_d is in agreement with Figure 4, explained by the less efficient baryon expulsion, which leads to higher star formation and retention of gas within r_500.
Figure 10. As Figure 4, but for Setup 2. The best combination of feedback parameters (blue dot) from Setup 1 no longer produces a realistic baryonic makeup of the MW halo. Instead, we re-calibrate the star formation and feedback prescription using the trends from Figure 4, resulting in (2.5×10^−5, 1_1, 0.9) and (3.0×10^−5, 1_1, 0.9) as the values required for Setup 2. For a detailed description, we refer to Section 4.4.1.

The final step is to adjust r and s to improve the match to the observed f_s and f_d. Initially, we maintain the injection of feedback energy in a cube and increase its size, i.e., from r = 1 and s = 3 to r = 2 and s = 6. The aim is to obtain a point to the top right of the target and then increase r and s correspondingly to move it towards the target, as predicted by Figure 4. However, we do not obtain any good fit. Coupled with the upper limit to the extent of feedback
injection, beyond which (r = 3 and s = 9) f_d increases instead (see Figure 4), we decide to change the shape of energy injection from a cube to just the adjacent cells centred on the star particle. In parameter terms, we change r = 1 and s = 3 to r = 1 and s = 1. As a result, the feedback energy is injected into four instead of 27 cells, effectively increasing the energy concentration per cell by approximately an order of magnitude. This increased energy density causes a larger decrease in f_d than in f_s. In contrast, increasing the extent of feedback injection while maintaining a cube region generates a comparable change in both f_s and f_d.
We determine (2.5×10^−5, 1_1, 0.9) and (3.0×10^−5, 1_1, 0.9) as the two sets of parameters able to produce the smallest d value (see Table 3). Given the vast area of unexplored parameter space and the starting point of the iterative process, we argue that the steps taken constitute the most reasonable route through parameter space that can produce a close match to observations. The starting values of (2.5×10^−4, 1_3, 0.2) define the boundaries within which the values can be adjusted. r and f_* are almost at their minimum, meaning they can only increase, while ε can either decrease or increase. Furthermore, the low f_s of the starting point in Figure 10 suggests that the current feedback is so strong that it restricts star formation.
Together with the trends of changing parameters, the possible motions of the data point are a horizontal movement to the left or right, and a diagonal movement to the right. The worst possible option is to increase ε, moving the data point to the left. This choice leaves us stranded, because we cannot create further motion since r and f_* are already close to their minimum values. The next possible option is to increase r above 3, causing the data to move horizontally right. The steps associated with this first movement would be decreasing ε to iterate the data points towards the top right before increasing f_* to bring them down to the target. However, given the initial movement away from the target, we believe that this will not produce a better match than what is presented. The most plausible option is to decrease ε, moving the data point along the blue arrows indicated in Figure 10. f_* can then be increased to move it diagonally down and left towards the target while fine-tuning r and s. This change is preferred over increasing r because of the turnaround expected beyond r = 4, which limits the degrees of freedom. However, following this option will generate a combination of parameters similar to what we have found. Out of the possible options to move the initial point in parameter space, we have chosen the path that produces the best match to the observational data from McGaugh et al. (2010). Since the argument put forth does not suggest an ideal set of parameters lying in the regions of parameter space with intermediate values of ε and f_*, or with high ε values, these regions are not investigated.
Comparing the values of the feedback parameters that reproduce the MW baryonic makeup in both setups, we can assess the self-consistency of our feedback implementation. Setup 1 yielded an optimal combination of (2.5×10^−4, 1_3, 0.2), but in Setup 2 we conclude that (2.5×10^−5, 1_1, 0.9) and (3.0×10^−5, 1_1, 0.9) reproduce the most realistic MW galaxy. In Setup 2, the simulation forms more star particles, but they are of lower mass than in Setup 1. Therefore, in order to produce an amount of stars similar to that observed in a MW galaxy at z = 0, Setup 2 requires a higher gas to star conversion efficiency, 0.9 as compared to 0.2 in Setup 1. In response to this larger conversion efficiency, Setup 2 requires a lower ε. The value of ε therefore differs significantly between the setups. Setup 2 is preferred because of the more realistic star formation history in the dwarf galaxy (see Section 4.2), and the more extensive exploration of parameter space allowed by its greater computational efficiency.

Table 3. List of feedback prescriptions discussed in Sections 4.1, 4.4, 4.5 and 4.6, with the relevant properties of the halo of interest. These include m_500, f_d(obs), f_d(sim) and d, as described in Section 4.4. The combination of feedback parameters that produces the lowest value of d, i.e., the most realistic galaxy in terms of its baryonic makeup, is highlighted for each setup. We have included the sections in which each individual simulation is discussed, in order to guide the reader.
Kennicutt-Schmidt relation -Setup 2
In this section, we present the agreement of star formation in the simulation with the KS relation described in Section 3.2. As in Figure 6, we choose the non-zero SFR patches within r_500 at z = 0 and compare them to the fit given by Equation 14 and to the observations of nearby galaxies by Bigiel et al. (2008), shown in Figure 11. There is a clustering of points around the fit, but no slope can be deduced from the points. Also, the simulated gas density is too low for comparison to the observational data. We believe the concentration of points at low gas surface density is due to the relaxed star formation criteria and the higher f_*. These conditions result in a more efficient conversion of gas into stars, leading to more feedback energy injection that lowers the gas density.
While (3.0×10^−5, 1_1, 0.9) and (2.5×10^−5, 1_1, 0.9) recover f_s and f_d well, there is an absence of patches with high gas surface density, restricting our ability to probe the KS relation in that regime. This absence also suggests that feedback might have been too efficient in driving gas out of the central region of the galaxy. Comparing Setup 2 to Setup 1, the former is not as good at recovering the KS relation. Setup 2 provides a relatively more instantaneous conversion of gas into stars, which drives the gas surface density to lower values. As discussed earlier, a larger quantity of stars is formed in Setup 2, which begins feeding back into the IGM immediately. Coupled with the high conversion efficiency of gas to stars, it empties the central region of the galaxy of gas, explaining why the gas surface density is low.

Figure 11. Graph of SFR surface density against gas surface density illustrating the KS relation, as in Figure 6. Different coloured points are simulation data at sub-kpc resolution, shown together with a rough approximation of the observations of nearby galaxies by Bigiel et al. (2008), represented by the blue hatched contours. The blue line is derived from the observational fit of Kennicutt et al. (2007). As a result of the difference in the feedback prescription, the simulated galaxy has a much lower gas surface density compared to Figure 6. For further description of this figure, we refer to Section 4.4.2.
Haloes in the high-resolution region -Setup 2
As in Section 4.1.3, we look at the f s and f d of the other haloes within the high-resolution region of three virial radii from the MW halo. We plot f d against m 500 on the left column, f s against m 500 on the right column, and simulations with (3.0×10 −5 , 1 1, 0.9) and (2.5×10 −5 , 1 1, 0.9) on the top and bottom rows in Figure 12 respectively.
With the exception of one and two haloes from the runs with (3.0×10^−5, 1_1, 0.9) and (2.5×10^−5, 1_1, 0.9) respectively, we find very good agreement for both f_s and f_d of haloes between 10^10 M_⊙ and 10^12 M_⊙. This agreement is in contrast to Figure 7, where agreement is only achieved for f_s and not f_d. On top of that, the level of agreement with observations is much better in Figure 12 than in Figure 7, as the points lie closer to the fit. For haloes below 10^10 M_⊙, it is plausible that the lack of mass and spatial resolution is the cause of their inability to form stars. On the other hand, the larger mass haloes that suffer the same problem require future zoom simulations to be carried out in order to identify the root of the issue.
Dwarf galaxy zoom simulation with Setup 2
We conduct zoom simulations of a dwarf galaxy with m_vir of approximately 10^10 M_⊙ as an additional test of the universality of the feedback parameters in different halo mass bins. We described how we pick this dwarf galaxy from the high-resolution region of the MW zoom simulation in Section 2. Similarly, we increase the number of nested levels to keep the number of particles defining the halo comparable to that of the MW, while keeping the spatial resolution constant. We then compare the f_s and f_d of the halo to McGaugh et al. (2010) in Figure 13.
We present a close-up view of the parameter space in Figure 13 because we are showing results from zoom simulations of the dwarf galaxy using only the two best sets of parameters. It is clear that the f_s and f_d of the simulated galaxy in both feedback prescriptions are comparable to the target. We expect good agreement based on the results of Figure 12. Therefore, we argue that this feedback prescription is insensitive to mass resolution, since the smaller mass halo is simulated at lower mass resolution in Figure 12 and at higher mass resolution in Figure 13, with similar outcomes. However, it is also essential to investigate the dependence of the feedback prescription on spatial resolution in future work.
Chaos and variance
Recognising the argument put forth by Keller et al. (2019) for chaotic variance in numerical simulations, we conduct our zoom simulations twice on different processors. They have identical initial conditions and feedback prescriptions but evolved on different combinations of processors in the same computing cluster. The aim is to find out how much the halo properties would differ from each other due to the usage of a different set of processors. We quantify this difference in Figure 14.
Dots and stars in Figure 14 represent the pair of simulations with (3.0×10 −5 , 1 1, 0.9) (blue) and (2.5×10 −5 , 1 1, 0.9) (red) respectively. Despite both of them being close to the target, f s and f d within each pair can differ by as much as running a simulation with a different set of feedback parameters. Comparing (3.0×10 −5 , 1 1, 0.9) run 2 to (4.0×10 −5 , 1 1, 0.9) in Figure 10, the simulated galaxies have similar values of f s and f d . This variance is also apparent from the values of m 500 , where the maximum, minimum and mean values are shown by the black crosses.
Looking at Figure 14, the deviation in f s between the pair of simulations is comparable to the 10% difference in stellar mass reported by Keller et al. (2019), despite not using identical processors. However, the deviation in total baryon mass is as high as 33%, possibly arising from the coupling of star formation and feedback, where a 10% difference in stellar mass affects the feedback significantly. There is no consistent trend observed in Figure 14, i.e., both an increase and a decrease in f s can cause an increase in f d . We attribute this to these ratios containing a mixture of stellar and gas mass. Due to the complex coupling of star formation and feedback, it is difficult to disentangle the contribution of each component. For example, increasing stellar mass results in a decrease in gas mass, but it is unclear which is the more dominant effect. As a result, the baryonic composition of the halo can differ drastically.

Figure 13. Plot of f s against f d for the zoom dwarf galaxy simulations. Various coloured dots represent runs with different sets of feedback parameters, with the colour bar having the usual meaning. The plot is focused on a small area near the target due to the closeness of the simulation results to the observed properties. Consistent with Figure 12, (3.0×10 −5 , 1 1, 0.9) and (2.5×10 −5 , 1 1, 0.9) are able to produce a dwarf galaxy with f s and f d close to observations. See Section 4.5 for a detailed description.

Figure 14. Plot of f s against f d for pairs of zoom MW simulations with identical initial conditions and feedback prescriptions evolved with different processors. The same coloured symbols refer to simulations with identical setups, while dots and stars represent identical simulations run on two different sets of processors. The colour bar has its usual meaning. The plot is again focused on a small area near the target due to the level of agreement of the simulation results with the observed properties. Rerunning a simulation with identical initial conditions can produce simulated properties that differ significantly. See Section 4.6 for a detailed description.
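A minimal sketch of the comparison behind Figure 14 is given below: for each pair of re-runs, the percentage deviation of f s, f d and m 500 relative to their mean is computed. The numerical inputs are placeholders rather than the measured values from the simulations.

```python
import numpy as np

def relative_deviation(run_a, run_b):
    """Percentage difference between two re-runs of the same setup,
    relative to their mean, for each quantity of interest."""
    return {k: 100.0 * abs(run_a[k] - run_b[k]) / (0.5 * (run_a[k] + run_b[k]))
            for k in run_a}

# Illustrative numbers only (not the paper's measured values).
run1 = {"f_s": 0.21, "f_d": 0.46, "m_500": 9.5e11}
run2 = {"f_s": 0.23, "f_d": 0.35, "m_500": 1.05e12}
print(relative_deviation(run1, run2))
```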
SUMMARY AND DISCUSSION
We present results from a large number of zoom simulations of both a MW and a dwarf galaxy. This suite of simulations is the first application of numerical simulations calibrated to match the baryon content and stellar fraction properties presented by McGaugh et al. (2010). Using the star formation routine of Cen & Ostriker (1992) and the thermal supernova feedback of Cen & Ostriker (2006), we select factors such as f * to tune the conversion efficiency of gas to stars, a supernova energy efficiency for the feedback energy budget, and lastly, both r and s to calibrate the extent of feedback injection in the simulations. We also identify additional parameters that require adjustments in order to achieve realistic star formation histories. They are the Jeans instability check, the star particle threshold mass and the timestep dependence of star formation. These directly influence the criteria used to determine the occurrence of star formation.
It is remarkable that there is such a small variance associated with the data presented by McGaugh et al. (2010). This is the main reason why we strive to improve the agreement between our simulation results and observations as much as possible. However, it is also important to note the possibility of underestimated errors and unaccounted-for systematics. The method of determining the mass of the halo from observations affects the amount of scatter too. If abundance matching is used, m * will have a lot more scatter than m b in the Tully-Fisher plane at low mass, leading to a corresponding amount of scatter in f s and f d . Since most of the mass in low mass rotating galaxies is gas and not stars, one can also question the applicability of extrapolating abundance matching relations to such low masses.
With the mentioned parameters, we produce a MW galaxy with realistic baryon and stellar fractions when compared to the observations of McGaugh et al. (2010) with our suite of simulations. We achieve this agreement with two different setups shown in Table 1. Setup 1 utilises timestep dependent star formation with a Jeans instability check and a star formation threshold mass of 10^5 M⊙. We attempt a total of 22 simulations with this setup and find that (2.5×10 −4 , 1 3, 0.2) manages to reproduce the observed f s and f d . However, the simulated MW galaxy in this feedback prescription does not match the observed KS relation very well. By applying this feedback prescription to a zoom simulation of a dwarf in this setup, we find star formation starting too late as compared to the simulated MW galaxy. To resolve this issue, we propose switching to a timestep independent star formation setup with no Jeans instability check and no threshold mass (Setup 2). However, due to the non-linear coupling of the various processes in the simulation, a new prescription requires re-exploration of the subgrid parameters.
We begin an iterative process from (2.5×10 −4 , 1 3, 0.2) in Setup 2, concluding with two sets of parameters that produced a close fit to the f s and f d with the use of 49 simulations. They are (2.5×10 −5 , 1 1, 0.9) and (3.0×10 −5 , 1 1, 0.9). As in Setup 1, there are issues with the KS relation of the simulated galaxy. However, these feedback prescriptions performed remarkably well in matching the baryonic makeup of haloes between 10^10 M⊙ and 10^12 M⊙ in the high-resolution region to observations. A perfect feedback prescription that is able to replicate all the observables in the universe does not exist. If the prescription is tuned to certain observables, it might fail to reproduce others, which then requires further iterations to the feedback implementation (e.g. Pillepich et al. (2018)).
The main difference between the setups is the conditions for star formation, and this is reflected in the best values of the feedback parameters we find. In Setup 2, with more relaxed star formation criteria, f * is high, and the supernova energy efficiency is low as compared to Setup 1. In Setup 2, star particles form with ease, of lower mass but in larger quantity. In order to match the same observed value of f s as with Setup 1, we use a higher value of f * , creating star particles with higher mass. However, since we demand good agreement with the observed f d , we have to lower the feedback energy efficiency from these higher mass star particles. This adjustment results in a lower supernova energy efficiency as compared to Setup 1. Therefore, combining the values of the feedback parameters with the star formation criteria, we show the self-consistent characteristics of the feedback processes.
In Setup 2, the points coalesce around low gas surface density, with more gas being converted to stars due to the higher value of f * and the relaxed star formation criteria. As a result, when recovering the KS relation, Setup 2 does not perform as well as Setup 1. The inability to obtain an appropriate slope of the KS relation in either setup hints at a fundamental limitation of the Cen & Ostriker (1992) model. In terms of matching other observed properties, this feedback prescription requires more tuning or additional parameters.
Looking at the other haloes in the high-resolution region in Setup 2, all but three of the haloes between 10^10 M⊙ and 10^12 M⊙ with the calibrated star formation and feedback prescription are an excellent fit to the f s and f d observed by McGaugh et al. (2010). In comparison to the results from Setup 1, the feedback prescriptions in Setup 2 perhaps suggest universality for haloes within the mass range described. We verify this claim with the zoom simulations of a dwarf galaxy of 10^10 M⊙ with these feedback prescriptions. Through the haloes in the high-resolution region of the MW zoom simulation and the halo in the dwarf galaxy zoom simulation, we demonstrate the insensitivity of our feedback prescription to the mass resolution. However, we have to conduct the same test with much lower mass haloes as well as with different spatial resolutions. Beyond resolution, the universality and robustness of the feedback prescription should also be tested for galaxies with various star formation and merger histories.
As we demonstrate, non-deterministic variance is a cause for concern; more computational resources need to be invested in order to understand, quantify and minimise these effects. Since we do not reproduce all the observational constraints mentioned, there exists the possibility of including more parameters in the feedback model or developing a different model. These should be the focus of future work to improve the feedback prescription in order for the simulated galaxies to better match observations.
|
2020-02-10T02:00:42.990Z
|
2020-02-07T00:00:00.000
|
{
"year": 2020,
"sha1": "3192377e139d52a49afc45beebb840fdf77b2c13",
"oa_license": null,
"oa_url": "https://www.pure.ed.ac.uk/ws/files/163713157/Calibration_of_star_formation_and_feedback_model_for_cosmological_simulations.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "751e36a2057f483d59724df60f032c0a345491e0",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
9673866
|
pes2o/s2orc
|
v3-fos-license
|
Method for combined biometric and chemical analysis of human fingerprints
This paper describes a method for combining direct chemical analysis of latent fingerprints with subsequent biometric analysis within a single sample. The method described here uses ion mobility spectrometry (IMS) as a chemical detection method for explosives and narcotics trace contamination. A collection swab coated with a high-temperature adhesive has been developed to lift latent fingerprints from various surfaces. The swab is then directly inserted into an IMS instrument for a quick chemical analysis. After the IMS analysis, the lifted print remains intact for subsequent biometric scanning and analysis using matching algorithms. Several samples of explosive-laden fingerprints were successfully lifted and the explosives detected with IMS. Following explosive detection, the lifted fingerprints remained of sufficient quality for positive match scores using a prepared gallery consisting of 60 fingerprints. Based on our results (n = 1200), there was no significant decrease in the quality of the lifted print post IMS analysis. In fact, for a small subset of lifted prints, the quality was improved after IMS analysis. The described method can be readily applied to domestic criminal investigations, transportation security, terrorist and bombing threats, and military in-theatre settings.
Introduction
Friction ridge skin impressions, or latent fingerprints, are an extremely important piece of trace evidence often discovered at the scene of a crime. Each person has unique fingerprints, and people can unintentionally leave detailed impressions of these friction ridges specific to their fingers on the objects they touch. Such latent fingerprints are often developed, photographed, and collected at the crime scene, and the images are later compared to known prints for a possible identification match [10]. Latent fingerprints are typically composed of a mixture of sebum and sweat excretions, and can also be contaminated with substances that a person has handled such as narcotics or explosives [14,8,3]. The ability to screen for these substances in a latent fingerprint is highly beneficial for placing an individual at a specific scene, and for determining what contraband that person may have come into recent contact with (Wynn et al. [14]; Ng et al. [8]; Day et al. [3]; Hazarika et al. [6]; Chen et al. [2]; Bhargava and Perlman [1]; Mou and Rabalais [7]). This paper describes the development of a method for combined chemical and biometric analysis of lifted fingerprints. The ideal characteristics for such a method include a low-cost, field deployable technique that provides rapid results [5]. Ion mobility spectrometry (IMS) is a desirable chemical analysis technique due to its ease of use, rapid analysis time, and current widespread availability. IMS is a rugged and portable technique that can be used immediately at a crime scene or in theatre to detect contraband substances such as narcotics and explosives [13]. There have been over 10 000 IMS instruments deployed worldwide in airports alone [4]. These screening instruments are used by physically swiping a person's suitcase, purse, laptop, etc. with a collection wipe to collect trace contaminants. The wipe is then inserted into the instrument and heated to temperatures exceeding 200°C to thermally desorb the volatile analytes, and after a 7 to 30 second chemical analysis, it produces an indication as to whether explosives (and/or narcotics) were detected. IMS is commonly used in situations (e.g., prison and border checkpoints and airports) requiring high throughput screening for narcotics and/or explosives. One caveat to this method is the need for thermally stable samples that have low chemical background. These potential issues are overcome with this new fingerprint lifting method by using a thermally stable substrate and adhesive with very low chemical background for lifting latent fingerprints.
A typical method for lifting latent fingerprints uses fingerprint powders and inexpensive transparent lifting tape to lift the developed fingerprint. However such fingerprint lifting tape is not suitable for high temperature chemical analyses, as the tape and adhesive would melt during the heating/ desorption process and cause significant background interferences in the chemical analyzers. Other chemical analysis techniques such as gas chromatography or liquid chromatography combined with mass spectrometry (GC/MS or LC/MS) could be used after biometric analysis, but these and similar techniques require the dissolution or destruction of the sample in order to perform the chemical analysis, and destroying such evidence is not desirable. With the fingerprint lifting method described here, a latent fingerprint is visualized (i.e., with fingerprint powders) and lifted, and then screened for explosives or narcotics with IMS. The analyzed fingerprint stays intact for subsequent imaging and matching algorithms, typically done at a later time. This fingerprint lifting technique uses a thermally stable substrate and adhesive so that the issues mentioned above are eliminated, providing an opportunity to chemically analyze the lifted fingerprint immediately at a crime scene or later in a laboratory.
Materials and methods
Experiments were conducted using this method to determine the feasibility of lifting a fingerprint, analyzing it for explosives, and determining its usefulness in a print matching system. A white 0.015 in thick (0.38 mm) Teflon® sheet (McMaster Carr, Chicago, IL) was cut into 1 in × 3 in pieces (25.4 mm × 76.2 mm). A heat resistant, low outgassing silicone adhesive, type CV-1161 from NuSil® (Carpinteria, CA), was diluted with a volume fraction of 1:2 in ethyl acetate solvent (Sigma-Aldrich, St. Louis, MO) and applied to the region of interest on the polytetrafluoroethylene (PTFE, or Teflon®) strips by airbrushing (Aztek Airbrush set, amazon.com). The adhesive coated strips were heated to 230°C for 1 h to cure, and were then ready to use. Latent fingerprints were made by an anonymous volunteer who pressed their fingers onto clean glass slides. The latent prints were then brushed with black or magnetic fingerprint powder for development. Several additional latent prints containing trace amounts of cyclotrimethylenetrinitramine (RDX) explosive were also prepared using modeling clay containing small amounts of RDX explosive to simulate composition 4 (C-4) plastic explosive. All latent prints were lifted with the prepared fingerprint lifting substrate (Fig. 1). The lifted prints were then scanned at 1000 dpi to create a digital image, and organized in a computer gallery of 'unknown' samples, or probes. Known exemplar fingerprints from the same unnamed volunteer were collected on five FD-258 standard fingerprint cards using ink. These cards and the lifted samples were scanned using an FBI Appendix F certified scanning station, and the images were cropped and organized in a gallery of known prints. All images were cropped of most white-space, and the latent fingerprints were inverted across the vertical axis to correct for the inversion resulting from the lift-capture. The digital fingerprints were measured for relative quality using the NIST Fingerprint Image Quality (NFIQ) algorithm [11], processed through the MINDTCT minutiae detector (NIST Biometric Image Software [9]), and the resulting minutiae templates were matched using the BOZORTH3 matcher (NIST Biometric Image Software [9]) to verify that a match can be made between the latent fingerprint and the matching exemplar fingerprint. Twenty samples were compared to 60 known gallery images, for a total of 1200 comparisons.

Fig. 1 Photographs of lifting fingerprints. (These images were made using an artificial fingerprint to protect personally identifiable information (PII). The print was manufactured using computer aided design (CAD) software and fabricated with a 3D rapid prototyping printer. A cast was made of the resulting fake 'finger' using dental casting stone, and ballistics gelatin was poured into the cast to create an artificial finger. This gelatin finger was used to deposit sebaceous fingerprints for photographing and publishing purposes. More details of this process will be published elsewhere.) a Lifting the powdered latent fingerprint. b and c are side by side comparisons of resulting fingerprint lifts from a common tape pull using forensic tape (b) and the new adhesive swab lift (c). Note that (c) was originally a mirror image of (b); thus computer software was used to horizontally invert the image.
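For reference, the matching pipeline described above can be scripted around the NBIS command-line tools. The sketch below is an illustrative Python wrapper, not the scripts used in this work; the exact command-line options, accepted image formats and output conventions depend on the NBIS version and build.

```python
import subprocess
from pathlib import Path

def nfiq_quality(image: Path) -> int:
    """NFIQ score (1 = best, 5 = worst) reported for a fingerprint image."""
    out = subprocess.run(["nfiq", str(image)],
                         capture_output=True, text=True, check=True)
    return int(out.stdout.split()[0])

def extract_minutiae(image: Path, out_root: Path) -> Path:
    """Run MINDTCT on an image; returns the .xyt minutiae template it writes."""
    subprocess.run(["mindtct", str(image), str(out_root)], check=True)
    return out_root.with_suffix(".xyt")

def match_score(probe_xyt: Path, gallery_xyt: Path) -> int:
    """BOZORTH3 similarity score between two minutiae templates."""
    out = subprocess.run(["bozorth3", str(probe_xyt), str(gallery_xyt)],
                         capture_output=True, text=True, check=True)
    return int(out.stdout.split()[0])

# Example: score one lifted print against every exemplar template in a gallery
# probe = extract_minutiae(Path("lift_01.wsq"), Path("lift_01"))
# scores = {g.name: match_score(probe, g) for g in Path("gallery").glob("*.xyt")}
```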
Once a match score for all 20 samples was generated, the samples were analyzed using a 400B IMS instrument (Smith's Detection, Danbury, CT) or an Itemiser DX IMS instrument (Morpho Detection, Wilmington, MA) where each sample was heated to 230°C for 7 s. The IMS responses were recorded, and the samples were then rescanned and passed through the matching system again to determine the effect of the heating process on the lifted fingerprints.
Results and discussion
There are two ways that the performance of the lift medium can be measured. The first is to use a monolithic measurement tool that predicts or estimates the probability of successfully using a given fingerprint image for matching purposes, typically referred to as a 'fingerprint quality' metric. The second method is to actually scan and match each lifted print to its known exemplar and obtain a match score for the image pair. The second method was used here for the collection of experimental data consisting of actual match scores rather than predictive quality estimates. Results have been divided into three basic outcomes: neutral cases, desirable cases, and undesirable cases. A neutral case describes the matching results for comparisons between lifted and exemplar images that remain the same before and after undergoing chemical analysis using IMS. For example, if a fingerprint lift was falsely matched to a known print both before and after chemical analysis, no changes were observed as a result of chemical analysis and therefore it was considered a neutral case. In the traditional sense of quantifying biometric matcher behavior one would consider a false positive match to be a poor result; however, the point of this study was not to test how well the matching algorithm works, but to test whether the chemical analysis affects the matching result. Most of the matching tests (n=1144, > 95 %) resulted in a neutral case because the outcome of the latent fingerprints through the matching system remained unchanged before and after the chemical analysis.
A match case was considered desirable when a false match before chemical analysis became a true rejection after chemical analysis, or when a missed match before chemical analysis became a true match after a chemical analysis. In this situation, the chemical analysis process unintentionally enhanced the lifted fingerprint impression enough to change it from an incorrect answer prior to chemical analysis to a correct answer after chemical analysis in terms of matching results. Three percent of all the matched samples had this desirable outcome. This is considered desirable because we can potentially use the chemical analysis technique to enhance the fingerprint as an aid for matching. It is hypothesized that the sebum in the fingerprint ridges melts slightly, causing the particulates from the fingerprint powder to adhere more strongly to the ridges. An undesirable result is just the opposite; when a correct match result becomes an incorrect match result after the chemical analysis. Less than 2 % of all the analyses had an undesirable result. Table 1 lists a summary of this data.
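The classification into neutral, desirable and undesirable cases can be summarised as a simple decision rule, sketched below under the assumption of a fixed score threshold of 13 (the value quoted with Table 1); the function and variable names are illustrative only.

```python
MATCH_THRESHOLD = 13  # positive-match score threshold quoted with Table 1

def outcome(score_before, score_after, is_mated_pair):
    """Classify one probe-gallery comparison as neutral, desirable or
    undesirable according to whether the matcher decision changes after IMS."""
    def correct(score):
        decided_match = score >= MATCH_THRESHOLD
        return decided_match == is_mated_pair   # true match or true rejection

    before, after = correct(score_before), correct(score_after)
    if before == after:
        return "neutral"
    return "desirable" if after else "undesirable"

# e.g. a mated pair missed before IMS (score 9) but matched afterwards (21)
print(outcome(9, 21, is_mated_pair=True))   # -> desirable
```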
Table 1 Overall probe-gallery fingerprint matches before and after chemical analysis, organized by neutral, desirable, and undesirable results. (a) Positive match threshold set to a score value of 13.

A select number of fingerprints were prepared using a simulated plastic bonded explosive. These samples were not used in the match study, mainly due to issues of potentially contaminating various laboratory surfaces and the image scanner. These lifted fingerprints with simulated explosive contamination were prepared only for IMS analysis, to ensure that residue left in a latent fingerprint could be detected in a trace contraband detector. In order to avoid handling the explosives, gelatin fingers prepared using dental casting stone casts, as previously described, were used. The artificial gelatin fingers were pressed into modeling clay containing small amounts of RDX to simulate composition 4 (C-4) plastic bonded explosive. The RDX contaminated gelatin fingers were then pressed onto clean glass slides. This was a qualitative study because the mass of simulated explosive deposited in each print was not measured or controlled. All 14 prints analyzed produced an IMS response, with a relative standard deviation (RSD) of 49.8 %. The results appear variable due to the high RSD, but this is because the mass of explosive present in each sample is unknown. This represents a more realistic distribution, since fingerprints can contain variable amounts of explosive even when several fingerprints are deposited after handling a single piece of explosive [12]. An IMS spectrum of the resulting RDX detection is shown in Fig. 2.
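For clarity, the relative standard deviation quoted above is simply the sample standard deviation expressed as a percentage of the mean, as in the short sketch below; the peak amplitudes used are illustrative, not the measured responses.

```python
import numpy as np

def relative_std_dev(responses):
    """Relative standard deviation (%) of a set of IMS peak responses."""
    r = np.asarray(responses, dtype=float)
    return 100.0 * r.std(ddof=1) / r.mean()

# Illustrative peak amplitudes for 14 lifted prints (arbitrary units);
# the spread reflects the uncontrolled explosive mass in each print.
toy = [1.0, 0.4, 1.6, 0.8, 2.1, 0.9, 1.3, 0.5, 1.8, 0.7, 1.1, 0.6, 1.5, 2.4]
print(round(relative_std_dev(toy), 1), "% RSD")
```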
Conclusions
The results of this study show the feasibility of lifting a latent fingerprint using this novel method and chemically analyzing it immediately without destroying the lifted print. We have shown that the powdered fingerprints lifted with the high-temperature adhesive media are useful in a fingerprint matching system. We have also shown that explosives residues present in such lifted fingerprints can be successfully screened and explosives detected using trace detection equipment. An application of the described technique would be for military personnel in theatre when they come in contact with a suspicious package that could be an improvised explosive device (IED). They could quickly brush the package for prints, lift the print with the fingerprint lifting media, and analyze it immediately with a field-ready trace explosives detector. The analyzed fingerprint could be saved for subsequent matching to try to determine who has handled the package. Such analysis is not currently available. Future efforts will include adding fiducial marks for easier scanning and matching and finding an ideal protective covering for the lifting medium for both before and after lifting a fingerprint.
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
|
2018-04-03T00:11:54.123Z
|
2014-03-15T00:00:00.000
|
{
"year": 2014,
"sha1": "9c1a8a4c0828da09439155d91bfe3abaf7425287",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12127-014-0148-6.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "9c1a8a4c0828da09439155d91bfe3abaf7425287",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine",
"Computer Science"
]
}
|
235825736
|
pes2o/s2orc
|
v3-fos-license
|
Driving Waveform Optimization by Simulation and Numerical Analysis for Suppressing Oil-Splitting in Electrowetting Displays
Electrowetting display (EWD) is a new reflective display device with low power consumption and fast response speed. However, the maximum aperture ratio of EWDs is limited by oil-splitting. In order to suppress oil-splitting, a two-dimensional EWD model with a switch-on and a switch-off process was established in this paper. The process of oil-splitting was obtained by applying different voltage values in this model. Then, the relationship between the oil-splitting process and waveforms with different slopes was analyzed. Based on this relationship, a driving waveform with a narrow falling ramp, low-voltage maintenance, and a rising ramp was proposed on the basis of the square waveform. The proposed narrow falling ramp drove the oil to rupture on one side. The low-voltage maintenance stage drove the oil to shrink as a whole block. The proposed rising ramp pushed the oil into a corner quickly. The experimental results showed that oil-splitting can be suppressed effectively by applying the proposed driving waveform. The aperture ratio of the proposed driving waveform was 2.9% higher than that of the square waveform with the same voltage.
INTRODUCTION
Electrowetting (EW) is a phenomenon which can change the wettability of solid-liquid surface by using electric field [1]. The theory of EW was first proposed in 1981 [2]. In recent years, EW technology has been widely used in the chemical industry, bioengineering, display, and other fields [3][4][5]. Among them, the EWD is a new reflective display technology after electrophoretic display technology [6,7]. Compared with conventional display technologies [8,9], the EWD technology has the advantages of low power consumption, high contrast, fast response, and full color [10,11], which is considered one of the most attractive emerging display technologies [12].
An EWD pixel is mainly composed of substrate, ITO (Indium Tin Oxides) guide electrode, hydrophobic insulation layer, pixel wall, colored oil, polar liquid, and top plate as shown in Figure 1 [13,14]. A complete driving process of EWD can be divided into two stages. In the first stage, a suitable voltage is applied between upper and lower electrodes, the colored oil is pushed into a corner. The color of bottom substrate is displayed in pixels. When the voltage is removed, the hydrophobic layer is completely covered by the colored oil again. The color of colored oil is displayed in pixels [15].
At present, there are still some defects in EWDs, such as oil-splitting, charge trapping, oil overflow, and so on [10]. To solve these defects and further improve the display performance of EWDs, researchers have made many attempts. For example, a three-dimensional EWD model was established and the process of fluid flow was simulated. In the simulation, the principle of phase-field was used to simulate the movement of the oil interface successfully, and the influence of the contact angle of the pixel wall on oil-splitting was studied [16]. However, the problem of oil-splitting caused by high voltage had not been solved. For oil-splitting, the influence of surface tension and contact angle in the fluid was considered. Through model simulation and numerical analysis, the influence of contact angle and surface tension on oil-splitting was proved [17]. In addition, oil-splitting was affected by the oil thickness. The oil thickness in the EWD pixel was non-uniform, and the oil split at the thinnest point [18]. However, the influence of thickness change on the process of oil movement was not considered. To suppress oil-splitting, a driving waveform consisting of four stages (starting, rising, displaying, and recovering) was proposed. Starting from a low voltage to drive the oil can suppress oil-splitting effectively [19]. However, the speed of oil movement was slowed down.
In order to suppress oil-splitting, a simulation model was established with COMSOL Multiphysics software. The relationship between the electrostatic field force and the change of oil shape was studied with this model. The characteristic change of the oil shape during a switch-on process was then obtained. Based on this characteristic, a driving waveform that can suppress oil-splitting effectively was proposed.
NUMERICAL METHODOLOGY
The simulation of EWDs was implemented by establishing and calculating numerical equations. It was done to track the change of the oil-water interface in the model [20]. By using COMSOL Multiphysics software and Finite-Element method to simulate Two-Phase laminar flow with the electrostatic field, the physical field of laminar flow was coupled with the phase-field and the electrostatic field [21]. The finite element analysis method can be used to solve numerical calculation problems including the Cahn-Hilliard equation, Laplace equation, and Navier-Stokes equation [22]. Meanwhile, to solve Maxwell's stress tensor equation, an electrostatic field module was added to the simulation. The electrostatic field force of the electrostatic field module was fed back into the laminar flow module. The Navier-Stokes equation and phase-field equation were solved according to specified boundary conditions and calculation results of electrostatic field.
Governing Equations
Phase-field is used to describe the dynamic process of a two-phase flow interface. The movement of interface is tracked indirectly by solving two equations. One of them is used to solve phase-field variable ∅ and the other is used to solve the mixed energy density ψ [16,23,24]. The position of the interface is determined by minimum free energy [23]. A large number of data have been proved that the phase-field method could effectively predict droplet movement on the solid surface [25,26]. Eqs. 1-3 represent the governing equation of phase-field [27]. Eq. 4 describes the relationship between c and ε.
Where λ is the energy density and ε is the capillary width. Eq. 3 describes the relationship between these two parameters and the surface tension coefficient σ. The c is the mobility parameter and χ is the mobility tuning parameter in Eq. 4. ∅ was set to 1 for oil and −1 for water. In order to couple the electrostatic field with the laminar flow field, the dielectric constant, density, and viscosity between diffusion interfaces need to be calculated by Eqs. 5-7.
Where ρ, μ, and ε r represent the density, viscosity, and dielectric constant of fluids, respectively. The mean curvature between the two liquids interfaces was calculated by Eqs. 8,9 [27].
Where κ and G represent the mean curvature and the chemical potential, respectively. The laminar flow field can be solved by the Navier-Stokes equation and continuity equation [28]. The Navier-Stokes equation is described as Eq. 10. To depict the movement of two immiscible liquids, the transport of mass and momentum are governed by incompressible Navier-Stokes equations.
Where u, p, ρ, and μ represent the velocity, pressure, density, and dynamic viscosity of the fluid respectively. Each term in Eq. 10 corresponds to the inertial force, pressure, viscous force, and external force, respectively. The external force consists of surface tension, gravity, and a volume force; F st , g, and F vf represent the surface tension, gravitational acceleration, and volume force, respectively. As stated in Eqs. 10, 11, the coupling between the laminar flow field and the electrostatic field is achieved by applying the electrostatic volume force to the Navier-Stokes equation. The electrostatic field force is the main factor that can cause fluid flow [16]. In addition, the electrostatic field force can be obtained by calculating the divergence based on the Maxwell Stress Tensor (MST) [16], and the calculation is expressed in Eq. 12.
MST formula is described by Eq. 13.
Where I is the identity matrix, E is the electric field and D is the electric displacement field, and their relationship is described by Eqs. 14, 15.
In a two-dimensional model simulation, the MST is expressed as Eq. 16.
T = [T xx , T xy ; T yx , T yy ] (16)
Eq. 17 can be obtained by substituting parameters.
Where E x and E y represent the horizontal and vertical electric fields respectively. The variation of volume force caused by the electrostatic field acts on the interface between oil and water, and the calculation can be expressed in Eq. 18.
When an electric field is applied, the water is in contact with the hydrophobic insulation layer with the action of electrostatic field force [29,30], the electrostatic field force can be described by Eq. 19.
Where d hyd , ε hyd and E hyd represent the thickness of the hydrophobic insulation layer, dielectric constant, and electric field strength on the hydrophobic insulation layer, respectively. Therefore, the force balance on the contact line can be derived from the Lippmann-Young equation, as shown in Eq. 20.
cos θ e = cos θ hyd + ε 0 ε hyd V² / (2d hyd c ow ) (20)

Where θ e , θ hyd , and c ow are the EW contact angle, Young's contact angle of the hydrophobic dielectric layer, and the surface tension, respectively.
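Equation (20) can be evaluated directly to estimate how the contact angle responds to the applied voltage. The snippet below is a minimal illustration of that relation; the layer thickness, permittivity, interfacial tension, and initial contact angle are assumed values, not the parameters of the simulated device.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def ew_contact_angle(theta0_deg, eps_hyd, d_hyd, gamma_ow, voltage):
    """Equilibrium electrowetting contact angle from Eq. (20).

    theta0_deg : Young's contact angle on the hydrophobic layer (deg)
    eps_hyd    : relative permittivity of the hydrophobic insulator
    d_hyd      : insulator thickness (m)
    gamma_ow   : oil-water interfacial tension (N/m)
    voltage    : applied voltage (V)
    """
    cos_theta = (np.cos(np.radians(theta0_deg))
                 + EPS0 * eps_hyd * voltage**2 / (2 * d_hyd * gamma_ow))
    # cos(theta) saturates at 1 (complete wetting) in this simple relation
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Illustrative values only: 1 um insulator, eps_r ~ 2, gamma_ow ~ 0.02 N/m,
# initial contact angle 160 deg.
for V in (0, 16, 24, 32):
    print(V, "V ->", round(ew_contact_angle(160, 2.0, 1e-6, 0.02, V), 1), "deg")
```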
Boundary Conditions
In the simulation model, boundary conditions are the prerequisites for determining the solution of the governing equations on the boundary. The zero-charge boundary condition should be set on all sides of the model. For the electrostatic field boundary conditions, the voltage V and the ground need to be specified. The wetted wall, the initial interface, and the outlet need to be specified in the phase-field boundary conditions. The wetted wall boundary can be calculated by Eqs. 21, 22.

n · ε² ∇∅ = ε² cos(θ w )|∇∅| (21)

n · (cλ/ε²) ∇ψ = 0 (22)

Where u, P, θ w , and V represent the velocity, pressure, contact angle, and voltage of the boundary conditions, respectively. The pixel wall and outlet are symmetrical boundary conditions. The interface of the two-phase flow was selected as an initial boundary condition. Both sides of the model (except the pixel walls) were selected as inlet and outlet boundary conditions. In addition, the initial values of pressure and velocity in the laminar flow field were set to 0. The wall condition was set to no slip. These settings of the boundary conditions are shown in Figure 2. The dotted box in Figure 2 represents the symmetric boundary condition.

FIGURE 2 | The boundary conditions for the electrostatic field, phase field, and laminar flow field.
PROCESS AND DISCUSSION
The parameters used in the simulation were shown in Table 1.
The fluid (oil and water) in the model was set to incompressible flow. It was assumed that the temperature (25°C) was kept constant during the fluid movement, so the thermal expansion of the fluid was ignored. The influence of pressure on dynamic viscosity was also ignored. In addition, the Bond number, which describes the relationship between gravity and surface tension in the EWD model, is far less than 1, so gravity can be neglected [16]. In the simulation, the polar liquids were replaced by water [31]. The proposed simulation model is shown in Figure 2. The natural spreading and shrinking processes of the oil were realized in this model. To describe the display performance of EWDs, it is necessary to calculate the aperture ratio. The aperture ratio is an important performance index of EWDs, defined as the proportion of the open area in a whole pixel [20]. When the aperture ratio was calculated in two dimensions, the bottom of the EWD was considered as a square; A r is used to represent the aperture ratio, L is the length of the contact line between the oil and the hydrophobic insulating layer in the two-dimensional structure, and d is the area of the hydrophobic insulating layer. The aperture ratio was calculated by using Eq. 23.
When the electric field inside a pixel was analyzed, the electric field formula was used, as shown in Eq. 24:

E = U/d (24)

Where E, U, and d represent the electric field, the voltage, and the distance between the two potentials, respectively.
In this paper, the EWD model was implemented in COMSOL Multiphysics 5.4. The aperture ratio test platform is shown in Figure 3. This test platform included a waveform editing system, a signal amplifier, and a detection system. The waveform editing system, which consisted of a computer and a function signal generator, was used to edit and generate waveform signals. The signal was amplified by the signal amplifier to drive the oil. The detection system, which consisted of a high-speed camera and an image processing system, was used to collect oil movement images of the EWD panel in real time. The aperture ratio was obtained by the image processing platform.
Influence of Dynamic Viscosity
The dynamic viscosity is an important parameter in fluid. The dynamic viscosity of oil can affect the aperture ratio and response time [20]. The response time is expressed as T. In one driving cycle, the time for applying the driving waveform is represented by T1, and the time when the pixel reaches the maximum aperture ratio is represented by T2. The response time is equal to the difference between T2 and T1. In this paper, an experiment of oil dynamic viscosity from 0.0005 Pa·s to 0.003 Pa·s with an interval of 0.0005 Pa·s was designed, and the experimental results were shown in Figure 4. When the dynamic viscosity was changed from 0.002 Pa·s to 0.003 Pa·s, the response time was shorter and the aperture ratio was larger. Figure 5 showed that when the dynamic viscosity of oil was less than 0.0015 Pa·s, the oil was split into two pieces. Otherwise, the oil was pushed into a corner with a whole block. Therefore, the oil with a dynamic viscosity from 0.002 Pa·s to 0.003 Pa·s should be selected in current conditions. Figure 6 showed the process of a switch-on and a switch-off in a pixel with specific conditions. The conditions were that the voltage was 32 V and the oil dynamic viscosity was 0.002 Pa·s. The results showed that the oil was pushed into a corner with a whole block. So, the dynamic viscosity of oil was set to 0.002 Pa·s.
Influence of Driving Waveforms
In a pixel, the voltage of the driving waveform was converted into electrostatic field force and applied to the water and oil. In the simulation, a gradually increasing voltage was set by the parametric scanning method, and the results are shown in Figure 7. When the voltage was lower than 16 V, the shape of the oil was squeezed down at both sides of the pixel, but the oil was not split. When the voltage was changed from 16 to 36 V, the aperture ratio increased almost linearly. When the voltage was higher than 36 V, the aperture ratio increased slowly and reached its maximum; at the same time, the oil was split into two pieces. The blue curve in Figure 7 represents the maximum aperture ratio of an actual EWD pixel when the voltage was changed from 16 to 36 V. The length and width of an actual pixel were 150 and 150 μm, respectively. The aperture ratio of the actual pixel was measured on the test platform in Figure 3. The results showed that the simulated values were close to the actual values below 20 V. In the range of 20-28 V, the simulated value was 4.3% higher than the actual, and in the range of 28-36 V, it was 10.5% higher. Altogether, the trends of the two were consistent. Therefore, the accuracy of this simulation model was verified.

FIGURE 5 | When the dynamic viscosity of the oil was higher than 0.0015 Pa·s, the oil was pushed as a whole block. Otherwise, the oil was divided into two pieces.
The driving waveform has an influence on the movement of the oil [32][33][34]. To understand the acceleration of the oil under the action of driving waveforms in one cycle, driving waveforms with different rising slopes were designed, as shown in Figure 8. The change of aperture ratio under the action of the four driving waveforms is shown in Figure 9. As the slope increased, the response time became shorter. When the slope was infinite (square waveform), the oil was split into two pieces. The experimental data showed that the pixel reached its maximum aperture ratio 2-5 ms later than the moment the driving waveform reached its maximum voltage. The greater the slope, the stronger the oil acceleration. Therefore, a maintenance period of 2-5 ms should be set to drive a pixel to its maximum aperture ratio. The natural spreading stage took more than 6 ms.
Relationship Between Electrostatic Field Force and Oil Movement
The forces on the fluid can be divided into volume forces and external forces. The volume force can be divided into surface tension and electrostatic field force. The electrostatic field force is a key factor that affects the oil movement in EWDs [35,36]. To obtain the relationship between the movement of the oil as a whole block and the internal electrostatic field force, parameters were determined based on Waveform 2 in Figure 8: the voltage was set to 32 V and the oil dynamic viscosity was 0.002 Pa·s.
The change of the internal electric field inside a pixel is shown in Figure 10. In the vertical direction, the internal space of the pixel was divided into a water channel and an oil-water mixed channel. As shown in Figure 10B, the internal space of the pixel can be divided into areas 1, 2, 3, and 4, where areas 1 and 3 were oil-water mixed channels and areas 2 and 4 were water channels. E1, E2, E3, and E4 represent the internal electric fields formed by areas 1, 2, 3, and 4, respectively. In the analysis, the water, with its high dielectric constant (80), was treated as an electrical conductor and the oil as an insulator. According to Eq. 24, the internal voltage U was 0 V in the water channel, so the internal electric field E was 0 V/m there. In the oil-water mixed channel, if U remained unchanged and d increased, the internal electric field decreased; conversely, the internal electric field increased as d decreased. Therefore, the internal electric field force was larger where the oil was thinner, and the oil split more easily in the thin areas. The red arrows in Figure 10B indicate the direction of the internal electric field. When the water was considered as a conductor of electricity, the voltage at the oil-water interface was the same everywhere. As the height of the oil increased, the electric field at the highest point of the oil was smaller, and the electric field at both sides was greater than at the highest point. Therefore, a non-uniform electric field was formed in the oil, and the oil took on a spherical cap shape under the action of this non-uniform internal electrostatic field.

FIGURE 6 | In COMSOL software, the movement process of oil and water (black represents oil, light yellow represents water) was captured. (A) After applying a 32 V voltage, the oil shrank under the action of the electrostatic field force, which was completed in 4 ms. (B) When the driving voltage was removed, the oil spread naturally, and the process was completed in 6 ms.

FIGURE 7 | The aperture ratio between simulation and an actual pixel. When the voltage was changed from 16 to 36 V, the change curve of aperture ratio between the actual pixel and the simulation was obtained. The length and width of an actual pixel were 150 and 150 μm, respectively. The changing trend of the aperture ratio was consistent between the simulation and the actual EWD pixel.

FIGURE 8 | The four driving waveforms with different rising slopes. The starting voltage was set to 16 V. K in the legend is the slope, and the slopes of the four waveforms are 50, 100, 200, and infinite, respectively. The effective driving cycle was 10 ms, and the latter 10 ms was set to 0 V.
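Following the simplified argument above, in which the water is treated as a conductor so that the applied voltage drops across the oil film, the field and the associated electrostatic (Maxwell) pressure can be estimated as below. The oil permittivity and film thicknesses are illustrative, and the series impedance of the hydrophobic insulator is neglected, as in the text's approximation.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def field_in_oil(voltage, oil_thickness_m):
    """Electric field across the oil film (V/m), treating the water above as a
    conductor so that the whole applied voltage drops over the oil (Eq. 24)."""
    return voltage / oil_thickness_m

def electrostatic_pressure(voltage, oil_thickness_m, eps_oil=2.0):
    """Order-of-magnitude Maxwell pressure (Pa) on the oil-water interface,
    ~ 1/2 * eps0 * eps_r * E^2; eps_oil is an assumed value."""
    e = field_in_oil(voltage, oil_thickness_m)
    return 0.5 * EPS0 * eps_oil * e ** 2

# Thinner oil -> larger field and larger electrostatic pressure, which is why
# rupture tends to start where the oil film is thinnest.
for d in (6e-6, 4e-6, 2e-6):   # illustrative film thicknesses in metres
    print(d, field_in_oil(32.0, d), electrostatic_pressure(32.0, d))
```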
The change of the internal electrostatic field force inside a pixel is shown in Figure 11. The result was obtained by integrating the electrostatic field force over the pixel space (excluding the pixel walls). When the oil was in State 1, a new water channel was formed in the area where the oil had ruptured, and the internal electric field was reduced in this area, so the electrostatic field force was reduced at this stage. Then, as the voltage increased linearly, the electric field in the pixel space increased overall and, as a result, the electrostatic field force also increased. When the oil reached State 2, the oil had been pushed to its highest point. Although the voltage was kept at its maximum, the internal electrostatic field force was reduced, mainly due to the larger water channel area.
DRIVING WAVEFORM DESIGN FOR SUPPRESSING OIL-SPLITTING
In a two-dimensional EWD model simulation, a pixel switch-on process can be divided into three stages. In the first stage, the oil ruptured randomly on one side of the pixel; the other side was squeezed down but did not rupture. This stage lasted about 1 ms. In the second stage, the oil was continuously squeezed and was driven both vertically and horizontally by the non-uniform electrostatic field force. If the voltage was large enough, the oil on the side that had not ruptured previously also split. This stage lasted about 2-5 ms. In the third stage, the oil was driven horizontally and its height was almost unchanged. By analyzing these stages, the following conclusions were obtained. In the first stage, the oil was thin and the electrostatic field force applied to it was large. When a high voltage of more than 36 V was applied, the oil was split into two pieces. Oil-splitting appeared in the first and second stages. Therefore, combining the analysis of oil movement in the first and second stages, a driving waveform was proposed for suppressing oil-splitting. The proposed driving waveform was divided into three stages, as shown in Figure 12. In the first stage, the waveform started at 36 V (high voltage), then dropped rapidly to 30 V, which was maintained for 0.5 ms. The purpose of this stage was mainly to drive the oil to rupture on one side. In stage 2, the voltage was maintained at 30 V for 2.5 ms. At this stage, the oil was mainly driven horizontally in one direction. In stage 3, once the oil had been pushed thick enough, the voltage could be increased without splitting the oil anymore. This stage was intended to improve the response speed of the pixel. When the square waveform was applied to this model, the oil was split into two pieces at 1.5 ms, as shown in Figure 13A, and the oil was then driven to the middle of the pixel at 5 ms. When the proposed driving waveform was applied, the oil was pushed into a corner as a whole block, as shown in Figure 13B. The results showed that the proposed driving waveform can effectively suppress oil-splitting, and the aperture ratio was increased by 2.9% compared with the square waveform.

FIGURE 10 | The change of the internal electric field inside a pixel. (A) The internal electric field of the initial interface. (B) The internal electric field during oil movement. E1, E2, E3, and E4 represent the internal electric fields formed by areas 1, 2, 3, and 4, respectively.

FIGURE 11 | The relationship between fluid status and electrostatic field force. Taking Waveform 2 as the input, the electrostatic field force in the pixel space was integrated, and the results over the driving period were obtained.

FIGURE 12 | The proposed driving waveform and the traditional square waveform. The proposed driving waveform had three stages, maintained for 1 ms, 2.5 ms, and 5.5 ms respectively, and the periods of the square waveform and the proposed driving waveform were both 18 ms. The rest of the blue curve overlapped with the square waveform.
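The proposed three-stage waveform can be written down as a simple piecewise function of time, as sketched below. The stage durations follow the Figure 12 caption (1, 2.5 and 5.5 ms within an 18 ms period); since the text also quotes 0.5 ms for part of the first stage, these timings should be read as indicative rather than exact.

```python
import numpy as np

def proposed_waveform(t_ms, v_high=36.0, v_low=30.0,
                      t_fall=1.0, t_hold=2.5, t_rise=5.5,
                      t_on=10.0, period=18.0):
    """Voltage (V) of the proposed three-stage driving waveform at time t (ms).

    Stage 1: narrow falling ramp from v_high to v_low over t_fall.
    Stage 2: low-voltage maintenance at v_low for t_hold.
    Stage 3: rising ramp back to v_high over t_rise, then held until t_on.
    After t_on the pixel is switched off (0 V) for the rest of the period.
    Timings are indicative only (see the discussion in the text).
    """
    t = t_ms % period
    if t < t_fall:
        return v_high + (v_low - v_high) * t / t_fall
    if t < t_fall + t_hold:
        return v_low
    if t < t_fall + t_hold + t_rise:
        return v_low + (v_high - v_low) * (t - t_fall - t_hold) / t_rise
    if t < t_on:
        return v_high
    return 0.0

# Sample the waveform over one period, e.g. to feed a function generator table
times = np.arange(0.0, 18.0, 0.5)
print([round(proposed_waveform(t), 1) for t in times])
```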
CONCLUSION
In this paper, a two-dimensional EWD model was established. This model was used to simulate the influence of dynamic viscosity, voltage, and waveform on oil-splitting. The oil was easily broken into two pieces with a high voltage. In addition, the internal electrostatic field force and oil movement were affected by each other. On the basis of the traditional square waveform, the proposed narrow falling ramp drove the oil to rupture quickly on one side of a pixel. The proposed low voltage maintenance stage can effectively suppress the oil-splitting. After applying this optimized waveform, the aperture ratio of a pixel was increased. The simulation model can provide a prediction scheme for the selection of oil and the design of the driving system in practical application. By adjusting part of the structure or material parameters of the model, it can be applied to other schemes of EWDs.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
|
2021-07-15T13:37:48.666Z
|
2021-07-15T00:00:00.000
|
{
"year": 2021,
"sha1": "ffd5b5a857e6286650af9ebc86f7419b5c0791da",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fphy.2021.720515/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "ffd5b5a857e6286650af9ebc86f7419b5c0791da",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
}
|
248118865
|
pes2o/s2orc
|
v3-fos-license
|
The relative contributions of TWIP and TRIP to strength in fine grained medium-Mn steels
A medium Mn steel of composition Fe-4.8Mn-2.8Al-1.5Si-0.51C (wt.%) was processed to obtain two different microstructures representing two different approaches in the hot rolling mill, resulting in an equiaxed vs. a mixed equiaxed and lamellar microstructure. Both were found to exhibit a simultaneous TWIP+TRIP plasticity enhancing mechanism where deformation twins and α'-martensite formed independently of twinning with strain. Interrupted tensile tests were conducted in order to investigate the differences in deformation structures between the two microstructures. A constitutive model was used to find that, surprisingly, twinning contributed relatively little to the strength of the alloy, chiefly due to the fine initial slip lengths that then gave rise to relatively little opportunity for work hardening by grain subdivision. Nevertheless, with lower high-cost alloying additions than equivalent Dual Phase steels (2-3 wt% Mn) and greater ductility, medium-Mn TWIP+TRIP steels still represent an attractive area for future development.
Within the past decade, many medium Mn steels have been developed with tensile properties that exceed those of TWIP steels. By reducing the Mn content, the microstructure of medium Mn steels is duplex (γ + α). Depending on the thermomechanical processing history, a myriad of morphologies, phase fractions and distributions of the two phases can be produced. Regarding microstructure and morphology, the two most common microstructures are equiaxed and lamellar [11]. Medium Mn steels with equiaxed microstructures typically have polygonal and evenly distributed austenite and ferrite grains. Lamellar or laminated microstructures have alternating austenite and ferrite lamellae contained within a prior austenite grain. Most medium Mn steels are either equiaxed, lamellar or both [11][12][13]. However, medium Mn steels with lamellar microstructures tend to be stronger, due to Hall-Petch strengthening arising from a smaller lamella thickness (although some exceptions exist [14]), and are less prone to yield point elongation [15,16]. Nevertheless, due to industrial thermomechanical processing limitations, it is difficult to produce a microstructure that is entirely equiaxed or lamellar. The second medium Mn steel to be produced on an industrial scale by voestalpine in Linz had a mixed lamellar and equiaxed microstructure [17,18].
In addition to the many types of microstructures that can be produced in medium Mn steels, many different plasticity enhancing mechanisms, on top of dislocation glide, can be activated in the austenite phase. By reducing the Mn content compared to high Mn TWIP steels, the stability of the austenite phase is lowered greatly such that stress-assisted or strain induced martensitic transformation is possible, i.e. the Transformation Induced Plasticity (TRIP) effect. Additionally, if the Stacking Fault Energy (SFE) can be raised into the twinning regime (15-35 mJ m -2 ) through element partitioning during an Intercritical Annealing (IA) heat treatment, the TWIP effect may also be activated [19,20]. Medium Mn steels can therefore also be grouped according to their plasticity enhancing mechanism, i.e. TRIP-type or TWIP+TRIP-type [12,21].
Many medium Mn steels have exhibited both TWIP and TRIP behaviour but they often occur in different austenite grains, e.g. coarse austenite deforming via TRIP and fine austenite deforming via TWIP [22][23][24]. This means that the austenite grains do not deform homogeneously and such steels often exhibit serrations in the strain hardening curve. Therefore, the combined TWIP+TRIP effect, which occurs in a single austenite grain, is of particular interest as it allows for homogeneous deformation, sustained hardening during deformation and elongations in excess of 40% [19,20,[24][25][26][27]. However, the interplay between TWIP and TRIP can be very different. Lee et al. [19,28] identified the successive TWIP+TRIP effect in a steel with austenite composition Fe-10.3Mn-2.9Al-2.0Si-0.32C. It was found that the austenite phase first formed twins and subsequently strain induced martensite at the twin intersections. This successive TWIP+TRIP effect led to a two-stage hardening behaviour where the first stage was twinning dominated and the second stage was transformation dominated. Sohn et al. [26,27] separately identified a slightly different mechanism where twins and martensite formed concurrently and independently within an austenite grain during deformation. In their steel with austenite composition of Fe-11.5Mn-4.74Al-0.55C, the simultaneous TWIP+TRIP effect led to a multi-stage hardening behaviour and an exceptional 77% total elongation.
Between the successive and simultaneous TWIP+TRIP mechanisms, the successive mechanism is more widely reported [19,25,28,29] and is also the more commonly accepted mechanism of Strain Induced Martensite (SIM) nucleation and growth in Metastable Austenitic Stainless Steels (MASS) such as 304 and 301 [30][31][32]. The steel benefits from a high strain hardening rate (> 1.5 GPa) because of two powerful plasticity enhancing mechanisms operating one after the other. In the simultaneous mechanism, however, the twinning and transformation kinetics appear to be very slow, providing just enough hardening to avoid necking [26]. Nevertheless, it allowed for a very steady engineering stress up to the failure strain of 77%, significantly larger than in medium Mn steels which exhibit the successive TWIP+TRIP mechanism (40-50%).
It is still uncertain what factors determine whether the austenite phase deforms via successive or simultaneous TWIP+TRIP, although factors such as microstructure, SFE and austenite stability are likely to play a large role [20,33,34]. Furthermore, the majority of medium Mn steels shown to exhibit the TWIP+TRIP effect possess equiaxed microstructures, i.e. polygonal austenite grains. Not much is known about how the TWIP+TRIP effect occurs in lamellar or mixed microstructures. This study therefore aims to explore the effect of microstructure on the interplay between TWIP and TRIP mechanisms and therefore their relative contributions. This is achieved by producing two types of microstructure with similar austenite volume fractions and compositions and examining how the various deformation structures evolve with strain.
Experimental
A steel ingot of dimensions 70 × 23 × 23 mm was produced via vacuum arc melting. The bulk composition shown in Table 1 was measured using Inductively Coupled Plasma (ICP) and Inert Gas Fusion (IGF). In order to simulate an industrial reheating cycle, the ingot was homogenised in a vacuum tube furnace at 1250 °C for 2 h and allowed to furnace cool to room temperature. A previous study [35] showed that the abovementioned homogenisation schedule was able to significantly reduce microsegregation in an arc melted ingot. The ingot was then reheated to 1100 °C and rough rolled at the same temperature from 23 to 12 mm thickness in 4 passes and water quenched. The rough rolled ingot was then split along the long axis into two bars via Electric Discharge Machining (EDM). The bars were then reheated to 1000 °C and finish rolled from 12 mm to 1.5 mm thick strip in 5 passes with a finish temperature of 850 °C. The 5 passes were conducted at 50%, 40%, 30%, 30% and 25% thickness reductions with a reheat between each pass. One strip was water quenched immediately after the last pass, while the other strip was cooled to 600 °C and held for 30 min to simulate the coiling process and finally furnace cooled to room temperature. Both strips were then intercritically annealed at 750 °C for 10 min. The final samples produced either by furnace cooling or water quenching will be referred to as FC and WQ respectively. In order to improve strain uniformity along the length of the rolled strip during finish rolling, the rolling speed was gradually increased after each pass to ensure a relatively constant rolling duration and minimise excessive temperature loss from the strip. However, it is acknowledged that under laboratory conditions it is very difficult to guarantee precise temperatures and degrees of deformation across the length of each strip, as well as between strips, since the finish rolling of the FC and WQ conditions was conducted separately.
Tensile samples with gauge dimensions 19 × 3 × 1.5 mm were machined from both strips via EDM, such that the tensile direction was parallel to the rolling direction. Tensile testing was subsequently conducted on an Instron load frame with a 30 kN load cell at a nominal strain rate of 10⁻³ s⁻¹.
Samples for Electron Backscattered Diffraction (EBSD) were mechanically ground and polished with an OPU polishing suspension. Samples for Transmission Electron Microscopy (TEM) and Transmission Kikuchi Diffraction (TKD) were prepared by mechanically thinning a 3 mm disk of material to below 90 µm in thickness and subsequently electrolytically polishing using a twin-jet electropolisher with a solution containing 5% perchloric acid, 35% butyl alcohol and 60% methanol at a temperature of −40 °C.
EBSD was conducted on a Zeiss Sigma FEG-SEM equipped with a Bruker EBSD detector. TEM and Energy Dispersive Spectroscopy (TEM-EDS) were conducted on a JEOL JEM-F200 operated at an accelerating voltage of 200 kV. Transmission Kikuchi Diffraction (TKD) was also conducted on the TEM foils. A constitutive model based on the work of Lee et al. [19,28,36] and Latypov et al. [37] was modified and used to model the plastic behaviour of the FC and WQ conditions. A full description of the model can be found in the Appendix and is available on GitHub [38].
Tensile properties
The tensile behaviour of the FC and WQ conditions is shown in Figure 1 and the tensile properties are summarised in Table 2. Due to the small widths and thicknesses of the tensile samples, associated with the use of 400 g scale laboratory melts, the uncertainty budgets are greater than in full scale standards-certified tests [39], approximately 3% for the yield stress. As a result, the yield and tensile strengths in Table 2 are given to two significant figures. The average yield strength could be determined from the various interrupted tensile samples; for the FC and WQ conditions, the average yield strengths were 810 ± 15 MPa (range 795-829 MPa, n=4) and 910 ± 10 MPa (range 894-918 MPa, n=4) respectively, where n is the number of test samples. While there may be some variation in yield strengths due to the relative errors in the measurement of the tensile gauge cross section, Figure 1a shows that all tensile samples had very similar deformation behaviour, giving confidence in the uniformity of composition, strain and temperature during finish rolling across the length of the rolled strip.
From Figure 1a, it can be seen that both FC and WQ steels have very high strengths (>800 MPa) and exceptional ductility (>70%). The WQ sample had a higher yield and tensile strength, but the FC sample had a slightly larger elongation. The inset in Figure 1a shows that both FC and WQ samples had a short yield point elongation of approximately 2% strain.
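The strain hardening analysis that follows is based on true stress-strain curves derived from the engineering data. As a minimal sketch of how such curves and the Strain Hardening Rate (SHR) are typically obtained (this is not the authors' analysis script; NumPy and the array names are assumptions), consider:

    import numpy as np

    def true_curves_and_shr(eng_strain, eng_stress):
        """Convert engineering strain (as a fraction) and stress arrays to true
        strain/stress (valid up to the onset of necking) and return the strain
        hardening rate d(sigma_true)/d(eps_true) by numerical differentiation."""
        eps_true = np.log(1.0 + eng_strain)
        sig_true = eng_stress * (1.0 + eng_strain)
        shr = np.gradient(sig_true, eps_true)
        return eps_true, sig_true, shr

    # Hypothetical usage with a measured curve (stress in MPa):
    # eps_t, sig_t, shr = true_curves_and_shr(eng_strain, eng_stress)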
When the Strain Hardening Rate (SHR) was obtained by differentiating the true stress-strain curve, it can be seen that both conditions possessed a very similar shape, similar to many medium Mn steels [19,28,40] and MASS [31,41-43] that exhibit the successive TWIP+TRIP effect. Three hardening stages can be identified. At stage I, i.e. the onset of plastic deformation, both SHR curves showed a rapid decrease, reaching a local minimum at approximately 0.02 true strain before increasing rapidly to the first peak between 0.04 and 0.05 true strain. At stage II, the SHR then decreases from the first peak to a saddle point and at stage III it rises again slowly to the second peak. While the shape of the SHR curves may be similar, the strains at which the first peak, saddle point and second peak occur, as well as their values, differ slightly between the FC and the WQ steel.

[Figure 1 caption (recovered fragment): triangles indicate the strain to which each tensile test was interrupted; inset: early yielding behaviour of all tensile samples; (b) true stress-strain curves and strain hardening rate drawn up to the point of uniform elongation for samples tested to failure, with open red circles and black squares indicating the true strains of the interrupted tests; Stage I of the hardening rate is not labelled for clarity but refers to the true strain range between the onset of plastic deformation and the beginning of Stage II; N.B. the extensometer was removed at 10% engineering strain.]

To investigate the microstructural evolution at these three unique points (first peak, saddle point and second peak), interrupted tensile tests were conducted up to the corresponding strains for the FC and WQ steel. The as-annealed and interrupted tensile samples are henceforth named FC0, FC4, FC20, FC60 for the FC steel and WQ0, WQ5, WQ13, WQ50 for the WQ steel, where the digits represent the engineering strain (not true strain) to which they were tested, rounded to the nearest percent.

In the as-annealed condition, FC0 adopted an equiaxed microstructure (Figure 2a). It was likely that the cementite in the pearlite matrix globularised and transformed into austenite during the IA, while WQ0 adopted a mixed equiaxed + lamellar microstructure. The lamellar regions arose when austenite nucleated in between martensite laths, while the equiaxed regions formed due to some degree of recrystallisation of martensite, predominantly around the PAGBs. With increasing strain, it was observed that the austenite fraction began to decrease after the first peak, indicating that the TRIP effect was operative in both FC and WQ steels. However, due to the increasing Non-Indexed (NI) fraction at higher strain levels, there was some uncertainty in the calculated martensite fraction, which indexes as BCC in EBSD. It is acknowledged that the NI regions are typically either martensite or deformed austenite. Therefore, upper and lower limits were established in Figures 2k-l, where the upper martensite limit was obtained if the entire NI fraction was treated as martensite and the lower martensite limit was obtained if the entire NI fraction was treated as austenite, and vice versa for the austenite upper and lower limits. In Figure 2k, it appears that the austenite fraction increased slightly between WQ0 and WQ5. This was likely due to a small variation in microstructure between the tensile specimens, as the austenite fraction cannot increase with strain.
Between the as-annealed condition and the first peak, there was no transformation in either the FC or the WQ steel, suggesting that an incubation strain was needed for the TRIP effect. Subsequently, with increasing strain, the martensite fraction in the WQ steel increased at a higher rate compared to the FC steel, with a larger final fraction at the failure strain. In the FC steel, both ferrite and austenite grains were observed to elongate in the tensile direction, while in the WQ steel the lamellar regions were also observed to elongate but additionally rotated and oriented themselves parallel to the tensile direction at the failure strain.
While EBSD was able to provide macroscopic insights into the microstructural evolution, it was necessary to probe the finer deformation structures using TEM. Figure 3 shows the microstructure of the FC and WQ steel in the as-annealed condition under STEM-BF. The as-annealed FC microstructure showed an equiaxed microstructure with a low dislocation density. On the other hand, the as-annealed WQ microstructure showed a lamellar microstructure with an average lamella width of 290 nm. A relatively larger number of dislocations was observed in the WQ microstructure. Han et al. [14] also found that the ferrite phase in lamellar type microstructures had a higher dislocation density than equiaxed ferrite due to the lack of recrystallisation during IA. While the FC microstructure was not obtained via cold rolling and recrystallising, the multiple phase transformations during coiling, furnace cooling and IA were likely able to eliminate any residual dislocation density after hot rolling.
Stage I: zero strain to first peak
At stage I, there was no transformation, as shown in Figure 2k. From Figure 4, it was observed that dislocation multiplication was occurring in the austenite and ferrite phases in both the FC and WQ conditions. Twinning in both conditions was also very limited. In FC4, multiple Stacking Faults (SFs) were observed growing from austenite and annealing twin boundaries. In WQ5, SFs were also observed, but only in the more globular austenite grains. In the lamellar grains, however, no SFs were observed. Instead, dislocations were emitted from the interphase boundaries and were also seen to be piling up across the width of certain lamellar grains. The higher density of SFs in FC4 may also explain why the SHR at the first peak was higher than in WQ5.
In many fine grained equiaxed medium Mn steels, the first peak is the result of yield point elongation [28,29]. Sun et al. [15] have shown that this is because of rapid dislocation generation from the large number of γ/α interfaces in a relatively dislocation free microstructure. Lamellar microstructures typically exhibit continuous yielding due to a higher dislocation density within the lamellar grains, as seen in Figure 3b. However, because WQ was a mixed microstructure, combining both equiaxed and lamellar regions, a short yield point elongation was still present.

Stage II: first peak to saddle point

Figure 5 shows the microstructures of FC20 and WQ13 at the SHR saddle point. In FC20, some austenite grains were observed to have one set of twins, while others were observed to have two. Figure 5a shows an austenite grain with twins growing out from the grain boundary while stacking faults were growing in the other twinning direction. The SFs appear to have nucleated either from the grain boundary or from the first twins. In another austenite grain in FC20, as shown in Figures 5b-c, two twinning systems were clearly operating, as also shown in the diffraction pattern where two sets of twinning spots were observed. According to the successive TWIP+TRIP mechanism described by Lee et al. [19,44], α′-martensite at twin intersections should be observed at this strain. However, additional diffraction spots associated with martensite transformation were not observed at the twin intersections in FC20. High Resolution TEM (HR-TEM) of the twin intersections also showed no martensite at the twin intersections.

In WQ13, there was very limited twinning compared to FC20. In Figure 5e, only a small twinned region could be found and it was not located in a lamellar region. In Figures 5f-g, several thin laths of martensite were found in an austenite grain adjacent to a lamellar region. The diffraction pattern in Figure 5h showed that the martensite laths had a Kurdjumov-Sachs orientation relationship (KS-OR) with the parent austenite. Thin martensite laths were also observed by Lee et al. [40] in a lamellar medium Mn steel.
Stage III: saddle point to second peak
At the second SHR peak, the TEM micrographs of FC60 and WQ50 are shown in Figure 6. In FC60, many twinned austenite grains could be observed, such as the one shown in Figure 6a. With increased magnification, much finer but shorter twins were observed within the grain interior (Figure 6b), and when the magnification was increased further, several short twins from the second twinning system could be observed (Figure 6c). This suggests that twinning continued to occur even up to 60% strain in the FC steel. In another austenite grain, as shown in Figure 6d, twin thickening was observed, which has often been found in TWIP steels at large strains [45] and was also observed by Sohn et al. [26] in a medium Mn steel.
In WQ50, a large number of twinned lamellar austenite grains were observed. This suggests that in WQ, twinning was more active in stage III compared to stage II. Figure 6e shows a lamellar austenite grain with twins that grew across the grain. Further magnification revealed secondary twins at the tip of the lamellar grain. However, the diffraction pattern from the tip still showed that there was no α′-martensite at the twin intersections. Nevertheless, in another grain (Figures 6g-h), martensite laths were observed growing from the grain boundary across a twinned lamellar austenite grain in the same direction as the twins.
While TEM effectively revealed the twinned structures in both FC and WQ steels, it was difficult to identify martensitic regions. For this reason, TKD was conducted on the TEM foils from FC60 and WQ50. The resulting data are shown in Figure 7. In the FC60 sample (Figures 7a-c), a deformed equiaxed austenite grain with a curved annealing twin (outlined in white) could be observed from the Band Contrast (BC) and FCC IPF-Z maps. From the BCC IPF-Z map (Figure 7c), BCC regions were observed within the outlined austenite grain. These BCC grains were observed to be within 5° of the KS-OR with the parent austenite grain and were therefore different variants of α′-martensite. These blocky α′-martensite grains appear to have first nucleated from the austenite grain boundaries and then from each other, suggesting that α′-martensite growth was limited and nucleation may have occurred continuously with increasing strain. The limited growth may explain why the size of the blocky α′-martensite grains remained very small. However, it cannot be said that the α′-martensite observed in Figure 7 only formed during stage III; it is likely that some α′-martensite also formed during stage II.
In the WQ50 sample (Figures 7d-f), the austenite grains before transformation can be identified as the dark grey regions, i.e. more deformed regions in the BC map ( Figure 7d). This is also confirmed in Figure 7e where the untransformed austenite lie within the dark grey regions. In Figures 7e-f, four areas of interest are highlighted. In area 1, the arrows point to α -martensite grains with a lath morphology which were observed growing across austenite lamellae as observed in WQ13 (Figures 5f-h). However, this morphology was an exception rather than the rule as similar lath martensite morphologies were not commonly observed elsewhere. It is also worth noting that the three austenite lamellae contained in area 1 remained largely untransformed even up to 50% strain. In area 2, the outlined region was likely a globular prior austenite grain but was elongated at 50% strain. The prior austenite grain was mostly transformed with only a few pockets of austenite left. In the BCC IPF-Z map in Figure 7f, several small martensite grains were observed at the prior austenite grain boundary in a similar manner as described in FC60. However, the prior austenite grain was mostly dominated by a single martensite grain with [111] direction out of the page. In area 3, the outlined prior austenite grain was similarly partially transformed to martensite. In the more bulbous region near the bottom left, a single martensite grain with [111] direction out of the page also dominated the region. However, at the bottom tip and along the length of the lamellar grain, the austenite grain transformed into relatively equally sized submicron martensite grains. Finally, in area 4, the untransformed austenite grain contained several extremely fine intragranular martensite grains. These martensite grains were significantly finer and do not appear to have nucleated from a grain boundary in the same manner as in the aforementioned three areas. It is therefore possible that these fine martensite grains are of the twin-twin intersection variety which forms in the successive TWIP+TRIP mechanism.
Finally, beyond stage III, comparing the EBSD phase maps in Figures 2g-l, the austenite phase still continued to transform to martensite. However, the strain region beyond stage III is characterised by a decreasing SHR (Figure 1b), suggesting that the twin and martensite fractions were both approaching saturation until both steels failed by necking, i.e. when σ = dσ/dε.
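The necking condition invoked here, σ = dσ/dε (the Considère criterion), can be located numerically from such true stress and SHR arrays. A minimal, hedged sketch follows (array names are assumptions, continuing the hypothetical arrays from the earlier snippet; the search should start beyond the yield point elongation to avoid a false hit there):

    import numpy as np

    def considere_strain(eps_true, sig_true, shr, start_index=0):
        """Return the first true strain (at or beyond start_index) at which the
        strain hardening rate falls to the level of the true stress."""
        idx = np.where(shr[start_index:] <= sig_true[start_index:])[0]
        return eps_true[start_index + idx[0]] if idx.size else None  # None: no necking found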
Composition
The compositions of austenite and ferrite in both FC and WQ steels were measured using TEM-EDS. The results are shown in Table 3. The C content in austenite was determined by using the lever rule, assuming negligible C solubility in ferrite. It should be noted that the bulk composition as measured by TEM-EDS, i.e.

X_bulk = Σ_i X_i V_f^i

where X_i is the Mn, Al or Si content and V_f^i is the volume fraction of phase i, may not be the same as the bulk composition as measured using ICP. The difference may be attributed to the limitations of quantitative TEM-EDS. However, the resolution of TEM-EDS was needed to probe the compositions of the fine grained microstructures in the FC and WQ samples. The SFE was calculated according to the method proposed by Sun et al. [25]. Md30, defined as the temperature at which half of the austenite transforms to martensite at a strain of 30%, was calculated according to the empirical relation of [46,47], in which compositions are given in mass % and d_γ is the austenite grain size in µm. A higher Md30 indicates lower austenite stability against strain induced martensitic transformation and vice versa. The Ms temperature, defined as the temperature at which athermal martensite begins to form upon rapid cooling from austenite, was calculated according to the empirical relation of [48]. A higher Ms indicates lower austenite stability against athermal martensitic transformation and vice versa. Both Md30 and Ms have been used to qualitatively determine the stability of austenite against strain induced martensitic transformation.
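As a simple numerical illustration of the rule-of-mixtures relation above (the values here are placeholders, not the measured compositions or phase fractions of Table 3):

    def bulk_content(phase_content, phase_fraction):
        """Rule of mixtures for an element: X_bulk = sum_i X_i * Vf_i."""
        return sum(x * f for x, f in zip(phase_content, phase_fraction))

    # Hypothetical two-phase example for Mn (wt%), austenite then ferrite:
    x_mn_bulk = bulk_content([8.0, 3.0], [0.45, 0.55])  # = 5.25 wt% Mn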
Comparing the austenite compositions between the FC and WQ states in Table 3, it can be seen that the FC condition had a slightly higher Mn content while the other elements remained broadly similar. This may be attributed to the additional coiling and furnace cooling steps during the processing of the FC condition, which provided additional time for Mn to partition out of ferrite and into the cementite phase. During the IA step, the cementite then globularised and transformed into austenite with an enriched Mn content compared to the WQ condition. The difference in Mn content resulted in a slightly higher SFE and lower Md30 and Ms for FC.
The SFE of the FC and WQ steels were both within the predicted twinning regime of TWIP steels [1] and also medium Mn steels [20]. On the other hand, because of the large C content in both steels (> 1 wt%), the Md30 and Ms temperatures were very low, indicating a very high austenite stability against martensitic transformation. Lee and De Cooman [19] demonstrated that it is possible to overstabilise the austenite phase such that the TRIP effect no longer becomes operative. While the TRIP effect was clearly observed in both FC and WQ, the high austenite stability would certainly have an effect on the nucleation and growth of α′-martensite grains.
Modified constitutive model
After examining the microstructural evolution with strain using EBSD, TEM and TKD, it is evident that it was the simultaneous, rather than the successive, TWIP+TRIP mechanism that was active in both FC and WQ conditions, as α′-martensite grains were not observed at twin intersections and mostly nucleated at austenite grain boundaries. Microstructural examination also showed that the evolution of deformation structures such as twins and α′-martensite was very different between the FC and WQ conditions. The different twinning kinetics should theoretically have resulted in a clear difference in strain hardening profiles between the FC and WQ conditions. However, from Figure 1b, the strain hardening rate curves of the FC and WQ conditions showed a very similar profile.
In order to reconcile the seemingly conflicting observations between the tensile properties and the microstructure evolution, a constitutive model developed by Lee et al. [19,28,36] was used to determine if the tensile properties in Figure 1 could be reproduced given the microstructural data from Figures 2-7 and vice versa. Since the constitutive model was initially developed for medium Mn steels that exhibit the successive TWIP+TRIP mechanism, two key changes were made in order to accommodate the simultaneous TWIP+TRIP mechanism. Firstly, the equations responsible for the evolution of martensite fraction with strain were replaced with a single Avrami equation [49-51], which also effectively uncouples the dependence of TRIP on TWIP. Secondly, following the findings of Latypov et al. [37], the strength of the α′-martensite phase was approximated to be constant with strain. The results from the modified constitutive model are shown in Figure 8.

[Figure 8 caption (recovered fragment): ... curves were not shown for clarity as they were nearly identical to the TWIP-on curves; TWIP-on and TWIP-off curves are very similar in the FC condition and nearly identical in the WQ condition due to the limited dislocation storage ability at slip lengths on the micron and submicron level.]
From Figure 8a, a good agreement between the modelled and experimental stress-strain and SHR curves was observed for the FC condition. A slight overprediction was observed in the modelled stress-strain curve but can be attributed to the model assuming continuous yielding and not accounting for the slight yield point elongation in the FC condition. From Figure 8c, the predicted austenite and α′-martensite phase fractions were shown to be in reasonable agreement with the experimental ranges as determined using EBSD in Figure 2. This largely confirms the validity of the modified constitutive model for modelling the simultaneous TWIP+TRIP mechanism in equiaxed-type microstructures.
However, the same model did not work as well when applied to the WQ condition. In Figure 8b the model appears to show a good fit with the SHR curve; however, the predicted α′-martensite fraction fell short of the experimentally determined range at a true strain of 0.4 in Figure 8d. This was largely attributed to the mixed equiaxed and lamellar grain morphology in the WQ condition, which might have led to a more complex strain partitioning mechanism that is not best represented by the current equations. Nevertheless, the model was able to reasonably predict the initial and final martensite fractions in the WQ condition.
Since the TWIP and TRIP mechanisms have been uncoupled in the modified constitutive model, it is possible to model the plastic response without the TWIP effect. In Figure 8a, both modelled stress-strain and SHR curves are shown with the TWIP effect turned on or off. Remarkably, there was no significant loss in strain hardening even with the TWIP effect turned off. In the WQ condition (Figure 8b), there was almost no difference whether the TWIP effect was turned on or off. Therefore, the modelled stress-strain and SHR curves with TWIP off were not shown in Figure 8b.
The component stress-strain curves in the FC and WQ conditions are shown in Figures 8e-f respectively. In the FC condition, the TWIP effect was observed to strengthen the austenite phase by 167 MPa at the failure strain. However, this strengthening was reduced to only 44 MPa when the loss of austenite volume fraction to α′-martensite transformation was taken into account, as seen in Figure 8g. From Figure 8f, the austenite phase was not observed to strain harden significantly in the WQ condition, largely due to the close competition between dislocation multiplication and annihilation arising from the short lamella thickness. The lack of any difference in strength between the TWIP on and off conditions in WQ was likely due to the severe reduction in the dislocation storage rate in the austenite grains caused by the already very fine lamella widths [52]. Therefore, further refinement of the grain size through the dynamic Hall-Petch effect [1,5] due to twinning proved to be ineffective in improving the strength of the austenite phase in the WQ condition.
By multiplying the component strength by the respective volume fraction of each phase, the cumulative contribution to the global strength from each phase with strain can be obtained, as shown in Figures 8g-h. It can be seen that the strength contribution from the TWIP effect is dwarfed by that from the TRIP effect. The strength from the α′-martensite phase contributes nearly 50% of the UTS in the FC condition and more than 50% of the UTS in the WQ condition.
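The decomposition described here is simply the product of each phase's modelled flow stress and its current volume fraction at a given strain; a minimal sketch with placeholder numbers (not the values plotted in Figures 8g-h):

    def phase_contributions(flow_stress, phase_fraction,
                            names=("austenite", "ferrite", "martensite")):
        """Contribution of each phase to the global flow stress, sigma_i * f_i,
        under the law-of-mixtures assumption used in the model."""
        return {n: s * f for n, s, f in zip(names, flow_stress, phase_fraction)}

    # Hypothetical values at one strain increment (stresses in MPa, fractions sum to 1):
    contrib = phase_contributions((1200.0, 900.0, 2300.0), (0.25, 0.45, 0.30))
    # e.g. contrib["martensite"] -> 690.0 MPa of the global flow stress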
α′-martensite nucleation and growth
In order to attain a sustained high SHR and large elongations in TRIP-assisted steels, it is necessary to continuously form fine α′-martensite in a steady manner over a wide strain range [53,54]. A common strategy is to create a spread in austenite stability via the grain size distribution [53], an inhomogeneous Mn composition [55] or texture [33]. This leads to a spectral TRIP [53] or discontinuous TRIP effect [33,55,56] where transformation begins with the least stable and ends with the most stable austenite grains.
In both FC and WQ conditions, the austenite phase was chemically very stable against α′-martensitic transformation due to the high Mn and C content (Table 3). Chatterjee et al. [57] showed that with such high C contents, formation of Strain-Induced Martensite (SIM) would be highly improbable. However, an applied stress can provide an additional mechanical driving force, ΔG_mech, for Stress-Assisted Martensite (SAM) transformation [58,59]. In medium Mn steels, it is known that strain, and therefore stress, localises at the interphase boundaries during deformation due to the strength mismatch between austenite, ferrite and α′-martensite [44]. The high stress localisation was likely able to provide a sufficiently high mechanical driving force for SAM to nucleate at austenite grain boundaries, as observed in Figures 5, 6 and 7. Grain boundary SAM nucleation was also observed by Yen et al. [60] in a medium Mn steel with a submicron grain size. The stress-assisted nature of α′-martensite nucleation may explain why an incubation strain was observed (Figure 2l), as it would be necessary to build up a critical local stress at the austenite grain boundaries. However, the stress field at the grain boundary would decay rapidly towards the interior of the parent austenite grain and the driving force for SAM transformation would similarly diminish. The highly local stress concentration may explain why the α′-martensite grains in both FC and WQ were very small, as α′-martensite cannot grow past the stress field. With additional deformation, α′-martensite grains favourably oriented with respect to the stress will grow (areas 2 and 3 in Figure 7), whereas elsewhere repeated nucleation of α′-martensite grains on top of each other will occur (Figures 7b-c). The phenomenon of α′-martensite only being able to nucleate and grow within local stress concentrations keeps the α′-martensite grains small and greatly extends the strain regime over which TRIP occurs.
Effects of microstructure on TWIP and TRIP
The effects of grain size on twinning and martensitic transformation in austenite are well studied. In TWIP steels, grain size reduction is generally understood to increase the twinning stress [1]. However, in many studies, reducing the grain size to 1-5 µm does not appear to negatively affect the tensile properties and elongation, although it is acknowledged that the twinning stress was increased [61,62]. From Table 3, the grain sizes, measured as the equivalent circle diameter, of the FC and WQ conditions were not significantly different. However, from Figures 5 and 6 it was observed that extensive twinning occurred in stages II and III for FC but mostly in stage III for WQ. In WQ, twins were observed to propagate across the width of the lamellar grains (Figures 6e-h), implying that the lamellar width (approximately 300 nm) should be considered rather than the equivalent circle diameter. Since the austenite lamellar width in the WQ condition was significantly finer than the equiaxed grain diameter in FC, the twinning stress in the WQ condition would be much higher, which would explain why twinning in lamellar austenite grains was delayed to a later stage compared to equiaxed austenite grains.
In TRIP-assisted steels, it is well known that a decreasing grain size has a strong mechanical stabilisation effect on austenite, inhibiting the formation of α -martensite [26,48,63]. Additionally, blocky or equiaxed austenite is generally less stable than film or lamellar austenite in medium Mn steels [64,65]. Therefore, it should follow that the WQ samples would form less α -martensite than the FC samples at a similar strain since the lamellar width of the WQ samples was much finer than the grain size in the FC samples. However, from Figure 2l, there was more α -martensite in the WQ sample than in the FC sample at failure.
In this alloy, the austenite phase of both the FC and WQ conditions was chemically very stable, dominating the mechanical stability term due to grain size refinement in the Md30 relation, as seen in Table 3. The effect of the relative grain size difference on austenite stability between the equiaxed grain diameter in the FC samples and the lamella width in the WQ samples was therefore not expected to be significant. However, α′-martensite nucleation was shown to be restricted to the austenite grain boundaries where there is a local concentration of stress. In the WQ samples, the lamellar grain morphology has a larger grain boundary area to volume ratio and is therefore able to provide a larger number of nucleation sites for α′-martensite to form. Therefore, it was likely that α′-martensite was able to nucleate more easily in the WQ samples, resulting in a higher α′-martensite fraction at failure. However, it is acknowledged that such a simple explanation cannot fully capture the complexity of α′-martensitic transformation in WQ. In Figures 7d-f, the fine austenite lamella in Area 1 and a wide lamellar austenite grain in Area 4 had different surface area to volume ratios yet both remained largely untransformed, suggesting that other effects such as texture, Schmid factors and stress shielding were possibly also involved [33,64-66]. Nevertheless, in this alloy with a very high austenite stability, where the formation of α′-martensite was nucleation-limited, the grain boundary area to volume ratio would certainly have played a significant role among the other factors known to contribute to α′-martensite transformation.
Strain hardening behaviour and constitutive modelling
Perhaps the most striking observation from the modified constitutive model in Figure 8 was the lack of strengthening contribution from the TWIP effect, especially in the WQ condition. Through the original constitutive model developed for successive TWIP+TRIP medium Mn steels, Lee and De Cooman [19] showed that the TWIP effect was less effective in small austenite grains and concluded that strengthening from the TRIP effect was more pronounced than that from the TWIP effect. In this study, a similar conclusion was reached for the FC steel with an equiaxed microstructure. However, this study extends the concept to the simultaneous TWIP+TRIP mechanism and to lamellar microstructures, which showed a nearly complete lack of strengthening contribution from the TWIP effect.
Given the lack of strengthening from the TWIP effect, it is therefore unsurprising that the observed differences in twinning kinetics between the FC and WQ conditions did not result in a significant difference in the strain hardening behaviour. Instead, the strain hardening behaviour was dominated by the TRIP effect. Furthermore, since both FC and WQ conditions were found to share a similar α′-martensite nucleation and growth mechanism, it is reasonable to expect the strain hardening profiles to be very similar.
Since the TWIP effect does little in terms of strength for medium Mn steels, it is probably better not to pursue the TWIP+TRIP effect in alloy design. In order to enable the TWIP+TRIP effect, a relatively high SFE and austenite stability are needed [20]. This is enabled by a combination of either low Mn and high C (current alloy), or high Mn and low C. If the TWIP effect can be forgone during alloy design, it would be possible to enable low to medium Mn and low C (3-6 wt% Mn, 0.05-0.2 wt% C) compositions that exhibit the TRIP effect only. Such low Mn, low C compositions have increasingly been termed lean medium Mn steels [67], and are desirable in terms of lower segregation after casting [35], better weldability [68], processability, etc. However, it is worth noting that while the TWIP effect does little for TWIP+TRIP-type medium Mn steels, their tensile properties still tend to be better in terms of elongation than those of pure TRIP-type medium Mn steels [19,26,27,69]. The higher Mn and C contents necessary for the TWIP+TRIP effect also stabilise the austenite phase and prolong the TRIP effect to significantly higher strains. Therefore, the alloy chemistry surrounding TWIP+TRIP-type medium Mn steels is still worth further study, although less focus might be given to the TWIP effect. It may also be of research interest to determine whether the TWIP effect has other benefits in terms of ductility or performance at high strain rates.
Conclusion
The effects of microstructure on the TWIP+TRIP mechanism were examined in a 5Mn-0.5C type medium Mn steel in two conditions representing different processing strategies in the steel mill. In the first condition, furnace cooling after hot rolling was employed (FC), as in a coiler; and in the other, water quenching (WQ) on the run-out bed. Both were then intercritically annealed, as on a continuous annealing line. These processing strategies produced steels with similar initial austenite compositions and phase fractions but with equiaxed vs mixed equiaxed+lamellar microstructures, respectively. The main findings are:

1. Both FC and WQ conditions showed superior mechanical properties compared to TWIP and DP steels, and the simultaneous TWIP+TRIP plasticity enhancing mechanism was identified to be operative in both conditions regardless of microstructural form.

2. A novel α′-martensite nucleation and growth mechanism in high C austenite was proposed. Stress-assisted α′-martensite was able to nucleate at the austenite grain boundaries due to the high stress concentration during deformation. The α′-martensite is unable to grow beyond the stress field into the parent austenite grain due to the high chemical stability and therefore remains small. Therefore, the continuous formation and slow growth of α′-martensite grains greatly extends the strain regime where TRIP is operative and allows for large elongations to failure.

3. The shorter austenite lamella width in the WQ samples compared to the austenite grain diameter in the FC samples resulted in a shorter mean free path for twinning. This raised the critical twinning stress such that extensive twinning was only observed at higher strains in the WQ sample.

4. A modified constitutive model was developed and found to have a good fit with the experimental data. The model also showed that the TWIP effect did not provide significant strengthening in the FC condition and almost no strengthening in the WQ condition. The lack of strengthening from the TWIP effect was attributed to the extremely fine slip length in the austenite phase, especially in the WQ condition where the lamella thickness was in the submicron regime.
Acknowledgements
TWJK would like to thank A*STAR, Singapore for a studentship. PG would like to acknowledge funding from SUSTAIN Future Steel Manufacturing Research Hub (EP/S018107/1). DD's work in the early stages of this project was funded by EPSRC (EP/L025213/1) and he also holds a Royal Society Industry Fellowship.
Appendix
To model the simultaneous TWIP+TRIP effect in the FC and WQ conditions, we follow the constitutive modelling work of Lee et al. [19,28,36] and Latypov et al. [37]. This work similarly applies the iso-work assumption to model the strain partitioning between austenite, ferrite and martensite. The iso-work assumption can be expressed as:

σ_γ dε_γ = σ_α dε_α = σ_α′ dε_α′

where σ_γ, σ_α and σ_α′ are the flow stresses of austenite, ferrite and α′-martensite respectively and dε_γ, dε_α and dε_α′ are the incremental strains of austenite, ferrite and α′-martensite respectively. The global applied strain can therefore be expressed as a rule of mixtures:

dε = f_γ dε_γ + f_α dε_α + f_α′ dε_α′

where f_γ, f_α and f_α′ are the phase fractions of austenite, ferrite and α′-martensite respectively. Stresses were calculated by incrementally increasing the true strain and updating the dislocation densities using the local gradient of dislocation density as a function of strain. To calculate the strain partitioning, the iso-work constraints were solved using a trust-region-dogleg algorithm, a refined Newton's method, in MATLAB. The model was implemented in MATLAB and is available online [38].
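A minimal sketch of how the iso-work constraint can be solved at each strain increment is given below. It is written in Python with SciPy's generic root finder rather than the MATLAB trust-region-dogleg routine used in the actual implementation [38], and the flow stresses are passed in as fixed numbers; it is intended only to illustrate the structure of the partitioning step, not to reproduce the published code.

    import numpy as np
    from scipy.optimize import fsolve

    def partition_increment(d_eps, f, sigma):
        """Split a global strain increment d_eps between austenite, ferrite and
        alpha'-martensite (volume fractions f, current flow stresses sigma) under
        the iso-work assumption sigma_g*d_eps_g = sigma_a*d_eps_a = sigma_m*d_eps_m
        and the rule of mixtures d_eps = f_g*d_eps_g + f_a*d_eps_a + f_m*d_eps_m."""
        def residual(x):
            d_g, d_a, d_m = x
            return [sigma[0] * d_g - sigma[1] * d_a,                # iso-work: austenite vs ferrite
                    sigma[0] * d_g - sigma[2] * d_m,                # iso-work: austenite vs martensite
                    f[0] * d_g + f[1] * d_a + f[2] * d_m - d_eps]   # rule of mixtures
        return fsolve(residual, [d_eps, d_eps, d_eps])

    # Hypothetical increment: the softer phases take a larger share of the strain.
    # d_eps_g, d_eps_a, d_eps_m = partition_increment(1e-3, (0.4, 0.4, 0.2), (900.0, 1100.0, 2300.0))

Because the constraints are linear in the incremental strains for fixed flow stresses, a direct linear solve would also work; the root-finder form is kept here only to mirror the iterative structure of the original implementation.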
To begin, the flow stress, σ, of FC and WQ was assumed to obey the law of mixtures:

σ = f_γ σ_γ + f_α σ_α + f_α′ σ_α′

The flow stress of each phase, σ_i, where i denotes austenite, ferrite or α′-martensite, can be described as:

σ_i = σ_i^YS + A M_i μ_i b_i √ρ_i

where σ_i^YS is the yield strength, A is a constant equal to 0.4, M_i is the Taylor factor, μ_i is the shear modulus, b_i is the magnitude of the Burgers vector and ρ_i is the stored dislocation density of phase i. The yield strength of each phase was determined by summing the solid solution and grain size (Hall-Petch) strengthening contributions:

σ_i^YS = σ_i^s + K_i/√d_i

where σ_i^s is the solid solution strength, K_i is the Hall-Petch parameter and d_i is the grain size of phase i. The austenite Hall-Petch parameter by Rahman et al. [62] was used in place of the original value used by Lee and De Cooman [28] as it was found to give a better fit. Additionally, the austenite lath width in the WQ sample was used for d_γ rather than the ECD grain size in Table 3. The solid solution strength of each phase was calculated according to the following empirical equations [70,71], the α′-martensite strength being based on the carbon content inherited from the parent austenite:

σ_α^s = 5000 X_C^α + 44.7 X_Mn^α + 138.6 X_Si^α + 70 X_Al^α
σ_α′^s = 413 + 1720 X_C^γ

where X_j^i is the concentration of element j, in mass percent, in phase i. The evolution of dislocation density with strain, dρ_i/dε_i, was determined by calculating the rate of dislocation storage and annihilation in each phase using the Kocks-Mecking model¹, in which the storage rate is governed by the dislocation mean free path Λ_i, a grain-size-related coefficient P_i and the storage coefficient k_1^i, and the annihilation rate by the coefficient k_2^i of phase i. Here, we approximate the strength of α′-martensite to remain constant with strain based on the findings of Latypov et al. [37] and set k_1^α′ and k_2^α′ to zero. The term P_i is defined as the probability for a dislocation not to be absorbed into a grain boundary [52] and is a function of the grain size d_i relative to a critical grain size d_c^i, below which the rate of dislocation annihilation at the grain boundaries is larger than the rate of dislocation storage, and vice versa. The dislocation mean free path of ferrite, Λ_α, and α′-martensite, Λ_α′, was assumed to be equal to the average grain size. However, for the austenite phase, the dislocation mean free path Λ_γ also accounts for the spacing of deformation twins:

1/Λ_γ = 1/d_γ + 1/λ_T

where λ_T is the mean twin spacing.

¹ It should be noted that the Kocks-Mecking equation contained a typographical error in the original reference [19]. We have verified this with the authors of that paper and rectified it in the current paper.
The mean twin spacing λ_T is obtained from a stereological relation involving c_T, the twin thickness, and f_T, the volume fraction of twins, such that the spacing decreases as the twin fraction grows. The evolution of the twin volume fraction with strain is described in terms of f_0, the twin saturation volume fraction, α, the coefficient associated with the formation of a twin nucleus when perfect dislocations intersect, and m, the exponent associated with the probability of perfect dislocations intersecting. Finally, since the evolution of α′-martensite with strain no longer relies on the prior formation of twins, a simple Avrami equation [49-51] was used to describe the evolution of the martensite fraction with strain, with Avrami constants a and b. Since precise measurement of the martensite fraction was not possible, the Avrami constants were fitted by minimising the χ² between the modelled and experimental strain hardening data using a gradient search. Values for the parameters used in the model are given in Tables 4 and 5.
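To make the incremental structure of the hardening law concrete, a short sketch of the main update steps is given below. The functional forms are generic stand-ins for the relations described above (a Taylor-type flow stress, a Kocks-Mecking storage/annihilation balance, a stereological twin spacing and an Avrami-type martensite fraction); the exact published equations and the coefficients in Tables 4 and 5 are not reproduced, so every numerical form here is an assumption used only for illustration.

    import numpy as np

    def flow_stress(sig_ys, M, mu, b, rho, A=0.4):
        """Taylor-type flow stress: sigma = sigma_YS + A*M*mu*b*sqrt(rho)."""
        return sig_ys + A * M * mu * b * np.sqrt(rho)

    def update_rho(rho, d_eps, M, b, mfp, k1, k2, P=1.0):
        """One generic Kocks-Mecking-type increment: storage scaling with the mean
        free path and the forest density, minus dynamic recovery (placeholder form)."""
        d_rho = M * (P / (b * mfp) + (k1 / b) * np.sqrt(rho) - k2 * rho) * d_eps
        return rho + d_rho

    def twin_spacing(c_T, f_T):
        """Stereological mean twin spacing from twin thickness and twin fraction."""
        return 2.0 * c_T * (1.0 - f_T) / max(f_T, 1e-9)

    def austenite_mfp(d_gamma, lam_T):
        """Dislocation mean free path limited by both grain size and twin spacing."""
        return 1.0 / (1.0 / d_gamma + 1.0 / lam_T)

    def martensite_fraction(eps, a, n, f_gamma0):
        """Avrami-type transformed fraction, scaled by the starting austenite
        fraction; (a, n) play the role of the Avrami constants a and b above."""
        return f_gamma0 * (1.0 - np.exp(-a * eps ** n))

All quantities must, of course, be supplied in a consistent unit system; the sketch only illustrates how the dislocation density, twin spacing and martensite fraction would be advanced together at each strain increment.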
Reconstructing syntheses in Romano-British cremation
This paper focuses on archaeological reconstructions of Romano-British cremation, re-examining the syntheses of evidence, inference and terminology that inform our current understandings of this form of ritual action. In particular, I will look at two main areas, each with their own specific discourse, namely 'the cremation process' (in relation to 'pyre technology'), and 'bustum burials'.
matter largely dependent, for McKinley, on degrees of fusion of cranial sutures (ibid; or else, significantly, on human intervention in the process, see below).
In fact, automated cremators now constitute a computerised and pre-set response to this problem (with programmes for 'light ', 'standard' and 'heavy' cadavers), provided that the coffin is 'charged into' the cremator in the correct fashion. In three of the cremations that I observed (March 2004) a jet of air, directed at the right sphenoid and temporal areas, meant that the skull was in each case sufficiently agitated for the brain to be exposed (or at least accessed) and combusted. In this way cranial sutures looked to be opened as a result of internal pressure, as much as anything else. Interestingly however, each skull responded differently to this treatment, with one of the crania remaining intact far longer than the others, the brain matter in this case eventually erupting from the disturbed temporal region.
Thus variability between different corpses is also a significant factor that the cremator operator or pyre technician, or automated cremator needs to be able to deal with. Because different bodies tend to vary in terms of the quantity, quality and location of the fat deposits required for cremation, general trends can be postulated: 'females will cremate more easily than males because of their slightly heavier and different fat deposits; the very old and the immature are more difficult to cremate as they usually carry less fat' (McKinley 1994b: 72;see also Wells 1960: 35); techniques to respond to and overcome such variables are therefore required of the cremator, whatever technology is being used.
Fascinatingly, it would seem that no completely predictive model for how particular bodies burn can yet be established. For instance, McKinley recorded one 'unexplained' case, 'charge 5a', 'which was, in size, age and sex, equivalent to charge '5b' but, for some unknown reason, proved very difficult to cremate. Whereas '5b' needed no gas heat [i.e. furnace temperature was sufficient that 'firing' was unnecessary], '5a had continuous heating throughout the process but still proved most difficult' (McKinley 1994b: 74, 72-74). Even in the latter day automated cremators, a manual override is available and still necessary on occasion. The cremation of a particular 'charge' weighing more than '35 stone' required such intervention on the part of the operator interviewed by myself (Darren Caldicott), who extended the duration of the firing in question to nearly three hours, applying a lower and more steady heat; the same informant mentioned that the embalming of corpses often necessitates considerable intervention in the automated process.
It is surely in dealing with such variability that the specialised skill of the operator or the pyre technician is so important. But in what way, specifically, must they become involved in the process (or indeed control it) in order to react to, and therefore overcome, the problems presented by the varied nature of the human body? The answer to this question may lie in a further paradox of the body, arguably not clearly solved by either Wells or McKinley. This important ambiguity is inherent in the fact that while certain parts of the body will have more fat which aid combustion and dehydration, these same parts are also likely to have more soft tissue in general, which will impede combustion of the bone: '(I)f oxygen reaching the bone is impeded by the presence of soft tissue, the bone will not burn' (McKinley 1994b: 75). Moreover, some bones, having a higher organic content, will intrinsically take longer to burn than others (ibid).
The operator or pyre technician (or cremator designer) must know how to strike and maintain a balance between utilising the heat generated by fat ignition in order to remove water and combust non-fatty soft tissue, and concurrent and/or consecutive exposure of the bone to sufficient oxygen as well as heat. Cremator operators and designers, and pyre technicians, need to control conditions through actively modifying temperature and particular application of the heat source and through deliberate manipulation of the human remains.
As has been stated, the role of the operator in largely mechanised (and indeed recently automated) cremation would seem to be as a result relatively reduced, but this role should nonetheless be considered as much more than merely a 'further variable' (McKinley 1994b: 74). In the 1990s, operators not only controlled furnace temperatures, but also airflow around the chamber, to ensure that heat was applied to all parts of the body (especially where it was needed most at any given point in the cremation), and were on hand to 'provide turbulence to aid the breakdown of remains ' (ibid: 72, my italics). This intervention, albeit indirect (i.e. using air jets as a tool), manipulating those parts of the body that require more than just heat, was surely an important part of the work at that time; in fact McKinley stated that the 'skill of the operator, using the various air flows, will ensure complete combustion' (ibid). In the automated cremators now in use, control of airflow has largely passed to the computer settings and built in functions, although, as has been stated, manual override is still an option (Darren Caldicott, pers. comm.).
The operator is responsible for further processing of the remains, chiefly through agitation ('raking down' also results in destruction of the skull vault [McKinley 1994b: 74], see above). It is this agitation that causes the bone to fragment along fissures produced through dehydration: '(T)he bone is rendered brittle, especially whilst hot, when any movement will result in increased fragmentation along the dehydration fissures' (McKinley 1994c: 339). The 'modern' sorting and collection of coffin nails etc from the material causes further fragmentation, and is an interesting inversion of the picking out of bone from a water-quenched pyre (see below). Finally, granulation of remains takes place in the 'Cremulator' (I am informed that before mechanisation this procedure was also manual, using 'a brick'; Darren Caldicott, pers. comm.).
Manipulation and agitation of the human remains, then, formerly the province of the cremator operator throughout, but latterly only in the raking down, sorting and other final stages, is chiefly responsible for fragmentation of the burnt bone in mechanised crematoria.
'Pyre Technology'
A 'common sense' reconstruction of the difficulties attendant on pyre cremation shows (not unexpectedly) the requirement for a high degree of involvement on the part of the pyre technician in the firing process, if the work is to be successful. The open pyre obviously demands a far more manual control of conditions (and, as a result, a more intimate experience of cremation?), requiring manipulation and agitation of both fuel and human remains. The particular difficulties of pyre cremations are thus especially inherent in the need to use solid fuel (wood, in the main), while at the same time maintaining a clear flow of the oxygen required for combustion (McKinley 1989: 67;1994b: 79): this in addition to dealing with the problems inherent in the human body outlined above.
Covering the human remains, either with 'pyre goods' or more fuel during cremation, will decrease airflow and increase the level of difficulty. Moreover, build up of fuel ash, premature collapse of the pyre structure, parts of the body falling to less accessible areas of the pyre and being covered by debris, variability of temperature in different areas of the pyre (with the centre more likely to have higher temperatures than the periphery), and even variation in the weather at the time of cremation (an open firing may take seven or eight hours), affecting degrees of draught available as well as possible inhibitors, such as heavy rain, have all to be taken into account (McKinley 1989: 66-67;1994b: 78-79).
Bearing in mind such a long list of possible variables, the necessary human element of pyre cremation is thus indicated as 'tending' of the pyre; 'tending' or maintenance of the pyre can simply be defined as the pyre technician's specialist response to the inherent difficulties of open pyre cremation. Thus the work will of necessity involve not only correct timing and placement of additional fuel, but also intervening in order to 'stir up the pyre occasionally, to allow oxygenation and to return any rogue bone or wood, which would result in considerable movement of the bone'; in open pyre cremations in the past, then, 'much fragmentation would have taken place on the pyre (McKinley 1989: 72), with bone being broken '… as the pyre collapsed in the later stages of the cremation or if the pyre was tended to any degree, e.g. reinstating bones which had fallen out of the main body of the pyre, or slight stirring late in the process to re-oxygenate the pyre …' (McKinley 1994c: 340). This description however, by using such careful language, once again rather underplays the degree of human activity in the process; even the word most often used for pyre maintenance activity, 'tending', is loaded with technical and cultural overtones, suggesting a largely supervisory role, a 'careful' mode of action. Perhaps as a result of such attitudes there would seem to be some degree of (culture-centred?) hesitation on the part of researchers as to exactly what form such 'tending' might take, or what degree of 'tending' might be considered acceptable in any given cremation context.
For although considerable and vigorous manual agitation of the pyre, in order to maintain the required relationships between fuel, heat, oxygen and human remains, would seem to be an obvious explanation for much of the fragmentation that characterises archaeological cremated bone deposits, experts have historically avoided giving such activity prominence in the cremation 'process'.
It is important to note with McKinley that with archaeological deposits of cremated bone 'fragment sizes presented in the reports should be regarded as post-excavation fragment sizes' (1994c: 339), i.e. that we need to remember the effects not only of the 'pyre technology' (ibid: 340), but also of 'burial, excavation and post-excavation treatment' (ibid: 342; we should also add disturbance of the deposit and any other post-depositional processes to this list). And yet the examples of apparently largely undisturbed cremated bone deposits cited in support of this argument are surely still fragmented to a degree sufficient to pose questions of the original cremation and/or collection process; for example, does not a 'majority' of fragments being over 30mm, and a maximum of 140mm (ibid: 342) still argue for rather profound fragmentation of the skeleton during the original process? (ibid: see figures 3 and 4).
A culture specific approach to the definition of 'tending' may well have informed experimental archaeology in this area. McKinley for example citing her own research firing experimental busta, reports no clear details as to the types and levels of 'tending' deployed, or the degrees of fragmentation of bone recovered (McKinley 1997: 65-67;2000: 40). It is interesting to note that McKinley reports 'large quantities of charred soft tissues -noticeably lung, intestine, bowel and spinal longitudinal ligament -in experimental pyre cremations, remaining on the ash bed of the pyre up to eight to nine hours after cremation had commenced…' and that '(E)ven in next day recovery of material, some charred tissues may remain, particularly ligament ' (2000b: 269): all of which strongly suggests that the body on the experimental pyre in question was not rigorously 'tended' to any significant degree.
Gaitzsch and Werner, even though they express surprise that bones from archaeological busta show such a high degree of fragmentation (1993: 59-60), mention nothing about the degree of fragmentation of pig bones in their own experimental pyre; moreover, no reference is made to 'tending', other than the need to place more fuel around the more fleshy parts of the pig (ibid: 66). Arguably, an easier way of dealing with the problem that such areas of the body pose would have been more vigorous 'tending' or 'stoking' in order to separate the soft tissues from the bone and allow the application and circulation of oxygen and heat.
The expectation of a broadly 'laissez faire' attitude to the pyre seems also to have had implications for the use of ethnographic analogy in cremation studies. Once again McKinley is the authority, concluding that, while 'pyres may have been tended…there is no indication of additional fuel being added once the cremation is underway', and that '(D)eliberate fragmentation of the bone is only documented in some of the Aboriginal cases' (McKinley 1994b: 81).
However, McKinley's assertion is apparently derived purely from an account given by the nineteenth century traveller George Augustus Robinson referring to the practice of first leaving the body to burn on a pyre without tending. Yet Robinson seems simply to state that: 'If a corpse was not destroyed by the initial firing the remains were raked into a heap and refired… or bashed so that they were more easily consumed by the pyre' (quoted McKinley 1994b: 80).
Untended pyre cremations are highly unlikely to produce completely mineralised bone; Robinson does not appear to be describing particular or 'rare' cases per se, but rather a pattern of human intervention in the firing in order to be sure of its 'completion'. Actually, in his own descriptions of Tasmanian cremation, Robinson shows himself to be a far from squeamish observer of important details: ' … (T)hey continued to apply fuel to the pile. The body was now seen on the pile, when one of the men, HEEDEEK, got a long pole and broke the head. The brains was in a perfect state, but the skull and flesh was burnt. Others of the men got long poles and poked the body until the whole was consumed to ashes …' (Robinson, 31 July 1832 [ed. Plomley 1966]).
We should note the way in which the particular difficulty of the cranium (see above) was overcome in this instance. The cranial fragments frequently analysed for possible indicators of sex or age in archaeological cremation deposits might also be diagnostic of such actions in the past. In fact, Wells long ago noted that a particular type of fracturing of '… the medial part of the petrous temporal bones…' in cremation deposits that he had examined '…does not seem to occur under modern conditions of cremation…' (Wells 1960: 33), a remarkable observation in light of anthropological analogies; further research in this area is undoubtedly called for (see Weekes forthcoming).
Significant new ethnographic comparison is afforded by detailed accounts of Hindu pyre cremations from India and Bali. Robinson's account of 'bashing' of the remains now has more weight. Consider, for example, this description of 'tending' in Banaras on the Ganges in Northern India: 'Mid-way through the cremation, the chief mourner performs kapal kriya, 'the rite of the skull', by cracking open the cranium of the deceased with a bamboo pole. Often kapal kriya in fact consists of a general breaking up of the partly incinerated corpse, and a stoking of the fire so that it is more completely consumed' (Parry 1994: 177).
And such evidence can be further corroborated. I am informed for example that a particular group of chandala ('untouchable') pyre technicians, Dalits in the Southern Indian states of Tamil Nadu and bordering areas of Andhrapradesh, are locally called Kattiyakarans, meaning 'men with sticks', because of the way in which they actively stoke the pyres, 'bashing' and maintaining the correct position of corpses within pyre structures, etc. (skull and spine are apparently recognised as areas requiring special attention; R. Peniel Jesudason Rufus, pers. comm.).
A Balinese example of latter day pyre technicians in action is clearly recorded by Jane Downes: '… one or two men assisted the body to burn more quickly by poking it with long sticks and lifting it up to help the air circulate. The manipulation and fragmentation of the body during burning also serves to aid the spirit to escape the body. When the flesh had burnt off and the bones had been reduced through agitation to fairly small fragments, the pyre was quickly quenched with water brought up in large buckets by the women… the bone fragments were rapidly picked out of the ashes by the women…' (Downes 1999: 23).
It would be hard to find an account that more clearly shows how significant the human action of 'tending' can be for the process of cremation (in this case informing ideas about the metaphysical results of the process as well); the diagnostic qualities of archaeological cremated bone deposits, even if the vicissitudes of deposition, post-deposition, excavation and postexcavation are taken into account (McKinley 1994c), frequently seem to indicate that just such actions were carried out by the modern pyre technician's ancient counterparts.
The quenching of the Balinese pyre, and rapidity with which bone fragments were reportedly picked out of the ashes is also worthy of note; in the same way that small 'unwanted' objects such as coffin pins can be manually removed from bone residues in mechanised crematoria using a hand held magnet which causes further fragmentation of the bone, so it would seem that (at least the well burned/oxidised/white?) bone fragments are readily identifiable and retrievable from the quenched pyre residues in this example. Presumably, this might also apply to the selection of recognisable pyre goods.
Before considering evidence of pyre practice in antiquity, a final note should be made of the much higher degree of 'intimacy' inherent in the tactile experience of pyre cremation than we might see in the use of more mechanised and/or automated technologies. In the latter situation, for example, '(D)iscretion requires that modern cremation incinerates efficiently, without the production of smoke' (McKinley 1994b: 72). On the open pyre, smoke, and with it the smell of burning flesh, is an obvious feature of the nature of the technology and its use; thus adding perfumed oils to a pyre in India not only serves 'to aid the initial combustion' (ibid: 78), but also serves to disguise the smell (as do the addition of other spices, the use of sandalwood etc, see Parry 1994).
Further aspects of the experience of pyre cremations would seem to suggest the requirement of a special attitude on the part of pyre technicians to the burning of human remains, perhaps very different from that which a 'modern western' observer might assume. Quite apart from the action of stoking the pyre, the perceived results of the work on the human remains must be a significant factor. Some flexing of the limbs is to be expected early in the firing as dehydration affects tendons and muscles (Mckinley 1994b: 74;Mays 2000: 207). Then, as Mays points out, there will sometimes be a swelling of the abdomen resulting from the expansion of gases (Mays 2000: 207). This seems to be something like the effect reported by Gaitzsch and Werner, who noted that the pig carcass they used on their experimental pyre ruptured after about 15 minutes, and the innards became visible (Gaitzsch and Werner 1993: 64). Mays goes on to point out that the skin and muscles of the corpse split (a contraction of skin and muscles through dehydration, perhaps combining with gaseous expansion?), gradually revealing soft tissue and part of the skeleton (Mays 2000: 207). Arguably, this part of the cremation is where the action of actively stoking the pyre and agitation of the remains is of paramount importance. Mckinley's report of viewing un-burnt internal organs and ligaments in her apparently lightly tended experimental pyre is again of relevance (2000b: 269). Finally, my own observations (March 2004) of intact brains rolling from 'opened' crania, and of brain matter erupting from the side of the head during automated cremation might be invoked, although, as we have heard, pyre technicians might have recourse to more 'involved' methods for 'dealing with' brains. Breaking up of the bone to aid combustion is attested by the ethnographic sources.
Above all, then, pyre cremation should be seen as a human, physical and conceptual effort as well as a technical one; the specialised knowledge, skill and experience of 'pyre technicians' should not be underestimated.
'Busta'
Recent work has developed new terminology for the 'types' of pyre in the Roman period that might be encountered in the archaeological record, in the shape of 'busta', 'one-off' pyre sites and 'ustrina' (Struck 1993;McKinley 2000a;Polfer 2000;Pearce 1999), as well as for the provision of items for consumption with the human remains on the pyre: 'primary gifts' (Pearce 2002: 374) or 'pyre goods' (McKinley 1994a). In all these areas, however, some questions need to be asked of the relationships between evidence and inference commonly used to produce such categories or archaeological classes of feature/find, (and, by implication, 'types' of ritual action). Here I will focus specifically on the 'bustum' concept.
Identification of 'busta' often seems to be based on a frequently invoked passage from the Latin writer Festus (though not always quoted/translated either fully or accurately, see Polfer 2000: 30;McKinley 2000a: 38). It has been argued that Festus (or rather an eighth century excerpt from the work of that writer) seems to draw a significant distinction between two general terms referring to types of pyre facility: Bustum proprie dicitur locus, in quo mortuus est combustus et sepultus, diciturque bustum, quasi bene ustum; ubi vero combustus quis tantummodo, alibi vero est sepultus, is locus ab urendo ustrina vocatur; sed modo busta sepulcra appelamus (De Verborum Significatu: 29), Which can be translated as: (A) Bustum is properly called a place in which a dead person is burned and buried, and it is called bustum, as being 'well burnt'; where however someone is indeed burned, but is in fact buried elsewhere, that place is called the ustrina from the act of burning; but we only call busta sepulcra.
The exact link between this statement and current archaeological theory relating to busta (formulated by Struck 1993), and ustrina (delineated by Polfer 2000; further explored in detail by Pearce 1999: 48-51) however, is actually somewhat unclear.
The prevailing assumption about the 'bustum' is perhaps exemplified by the following explanation from McKinley: 'the inferred technique in this instance being to let the pyre burn down into the pit then bury the remains in situ, i.e. the feature represented both pyre site and the grave. This type appears to be that defined by Festus…' (McKinley 2000a: 39).
But is this really what the Festus excerpt means? Leaving issues of provenance for these ideas to one side, Festus' perhaps rather too 'aetiological' derivation of 'bustum' being from 'bene ustum' may give some cause for concern (Tucker [1931: 38] and the Oxford Latin Dictionary [1968: 245] give different etymologies, neither of which agree with Festus). More significantly, however, what does the writer mean by 'locus'? This word may indeed mean 'exact same spot', but could also, and perhaps more sensibly, refer to a more general 'place' in which burning and burial constitute separate and sequential acts (a 'mortuary area' designated for both the burning of pyres as well as subsequent deposition of cremated bone?). Moreover, a further fragment of Festus seems to link 'bustum' more closely with a place of burial, or sepulchre, with no mention of burning (De Significatione Verborum: 456). To infer the ritual specialism of letting the pyre burn down into a pit and burying the remains in situ from the Festus excerpt is unwarranted.
In the wider literary context, an examination of the sources by Pearce has shown that 'Festus' distinction seems artificial in comparison to attested literary usage…'; Pearce has found that pyres are most often referred to in the literature as a rogus, or pyra, or ignis, and even ara (Pearce 1999: 48; 'ara' is particularly interesting in comparison with some Hindu concepts of the pyre as 'the last sacrifice', see Parry 1999: Chapter 5). Moreover, Pearce could find no reference 'where bustum in a literary source actually refers to in-situ cremation and burial', the word tending to denote 'the tomb or ensemble of tomb and monument' (Pearce 1999: 49;48-49).
Several further observations by Pearce on alternative distinctions of 'busta' and 'ustrina' in the epigraphic record are also worth noting: that '…(B)ustum more often refers to the tomb than the pyre…', for example, that '…(S)ome inscriptions explicitly contrast the rogus as pyre…' and that '…(A)n epitaph from Rome (CIL VI 10237) contrasts ustrina and bustum as pyre and tomb' (ibid). Do these last points perhaps throw new light on the final part of the Festus quote, that modo busta sepulcra appelamus: 'we only call busta sepulcra'?
From another perspective, simply using 'bustum' to mean 'in situ burning down into an under pyre pit' in the archaeological record carries with it exactly the same interpretive dangers as using other Latin words in the same way. Past experience should provide sufficient warning about the evidential weakness of uncritical application of Latin terminology in archaeological contexts (think of villa, for example, see Reece 1988: 80); such words are loaded with complexes of meaning that may well be alien in, and a false projection onto archaeological contexts.
As a consequence, the term 'busta', whether relating to 'Grubenbusta' (Struck 1993: 82-83; McKinley 2000a: 39-40; the main type, broadly defined as a feature resulting from 'allowing' the burning down of the pyre into an under-pyre pit and covering over) or 'Flächenbusta' (Struck 1993: 83-84; McKinley 2000a: 40; another 'type' resulting from the simple heaping of a mound over the remains of the pyre on the ground surface) should be considered an archaeological concept, rather than anything necessarily reflecting terminology, 'typology' or category in the thoughts and actions of original pyre technicians or anyone else in antiquity.
By way of example, might we not consider at first glance the Homeric account of heaping up of a barrow over the pyre of Patroclos to represent some sort of Flächenbustum (Iliad: xxiii, 255-7)? Yet immediately prior to this and apparently as part of the same ritual sequence, attendants have already gathered the 'white bones', for placement in a golden urn (ibid: 252-3). Whether or not we treat the Homeric text as an accurate account (although remarkably careful observance of ritual sequence and detail is iterated by 'Homer' elsewhere), the important point here is perhaps that the actions of barrow building over the pyre site and collection of some of the bone for alternative deposition have been allowed to exist side by side in the text. (Incidentally, is the 'white' of the bones here merely an idiomatic adjective used like an epithet, or is it also a technical term for bone which is more fully oxidised, and therefore recognisable and considered 'suitable' for collection?).
Of course, the raising of a mound over a pyre site (or, for that matter, the covering over of a pit full of pyre debris) is not exclusive of first gathering at least some of the human remains (and any identifiable 'pyre goods'?) for separate deposition. But non-removal of cremated human remains after burning is surely a definitive element of the 'bustum' concept. Archaeological evidence for a 'bustum' of either sort therefore would necessarily require, in situ, the practically complete cremated remains and pyre debris from one cremation event, either in an under-pyre pit or on a buried ground surface; without any real evidence for complete non-removal of human bone from putative 'busta' (i.e. not even a 'token' amount) prior to back filling or mound building, the whole 'bustum' concept is called into question.
A decided lack of sufficient cremated human bone in several 'bustum'-like features from St Stephens, St Albans has suggested that an alternative interpretation of them must be sought, leading both McKinley and Pearce to consider the possibility of these features being 'one-off' pyre sites (see Pearce 1999: 48;McKinley 2000a: 40). And yet it has to be said that 'busta' not infrequently are found to contain far less burnt human bone than we might expect from an adult cremation where all the remains have been 'left' in situ. The weight of cremated human bone that we might expect from an undisturbed 'bustum' burial (i.e. where all the remains as well as pyre debris had simply been covered over in situ) of an adult, according to McKinley's more recent estimate, is between 1000g and 2400g 'with an average of c 1650g' (McKinley 2000b: 269).
But it would seem that convincingly large deposits are not the norm in these contexts. As Pearce points out: 'the expected amount has rarely been recovered in the few busta from which the human bone has been analysed and is often lower than in other types of cremation burial…'(ibid: 43). McKinley successfully questions many of the recorded features designated 'busta' on just these grounds (2000: 40). Of course, it should be noted that factors such as postdepositional processes, excavation technique and methods employed in post-excavation and reporting have all to be taken into account (Pearce 1999: 43; a point comparable with that of McKinley concerning degrees of bone fragmentation [1994c]); given the nature of pyre cremation, it may also be suggested that insufficiently burnt bone in these contexts has decomposed while the mineralised bone has not. Even so, without any firm evidence of a total lack of bone collection from these features prior to filling in or covering over, the question remains: do 'busta' (in the sense commonly meant by archaeologists) actually exist? Or are these features simply various examples of pyre sites, with or without under-pyre pits for ventilation purposes and debris collection, that have been 'closed' by being covered over (along with objects often interpreted as 'grave goods' in such contexts) after the 'right' sort and/or amount of cremated human bone has been collected in each case? It would indeed seem wise to retreat to Pearce's conclusion that '(T)he archaeological remnant of Roman period pyre sites comprises mostly the pits over which the pyre would have been constructed to provide for ventilation and, if the pyre site was used only once, as a repository for pyre debris …' (Pearce 1999: 51).
Conclusion
Relationships between evidence, inference and terminology must be constantly reflected upon in order to delineate and understand our projection of meaning into interpretations of archaeological data. In the case of 'the cremation process' and 'pyre technology', it might be 'modern western', 'detached', or 'clinical' attitudes that have informed the picture. With 'Busta', it is perhaps a more familiar archaeological need for category. Nevertheless, it is important to be conscious of the cultural component of our syntheses of discrete analytical frameworks and archaeological evidence.
|
2019-05-27T13:20:20.286Z
|
2005-03-31T00:00:00.000
|
{
"year": 2005,
"sha1": "7b5c9a1b780c2ff8fbb512b90d62d5bcb426f7f8",
"oa_license": null,
"oa_url": "https://doi.org/10.16995/trac2004_16_26",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "ceeab6688cf246ff454cc1cc070ca4a36cf54393",
"s2fieldsofstudy": [
"History"
],
"extfieldsofstudy": [
"History"
]
}
|
4771505
|
pes2o/s2orc
|
v3-fos-license
|
Thoracic Ectopia Cordis in an Ethiopian Neonate.
BACKGROUND
Ectopia Cordis is defined as complete or partial displacement of the heart outside the thoracic cavity. It is a rare congenital defect in which the sternum fails to fuse and the heart lies in an extrathoracic location. The estimated prevalence of this condition is 5.5 to 7.9 per million live births.
CASE PRESENTATION
We report a case of a 16-hour-old male neonate weighing 2.9 kg with an externally visible, beating heart over the chest wall. Initial treatment included covering the heart with a sterile saline-soaked dressing, starting systemic antibiotics and giving supportive care. A staged surgical approach to this defect, with the initial aim of replacing the heart into the thoracic cavity, was chosen. The neonate died twenty minutes after the surgical intervention due to cardiogenic shock despite adequate resuscitative measures.
CONCLUSION
This case report underscores the missed opportunity of antenatal ultrasonographic diagnosis and the challenge of Ectopia Cordis treatment in Ethiopia.
INTRODUCTION
Ectopia Cordis (EC) is one of the rare congenital anomalies characterized by complete or partial displacement of the heart outside the thoracic cavity. The estimated prevalence of EC is 5.5 to 7.9 per million live births (1). Prenatal diagnosis is established by ultrasonography by visualizing the heart outside the thoracic cavity (2)(3)(4)(5).
EC is classified into cervical, thoracic, thoracoabdominal and abdominal types. Pentalogy of Cantrell is considered when a combination of thoracoabdominal EC, anterior diaphragmatic hernia, lower sternal defect and midline supraumbilical defect occurs. Although surgical techniques have evolved, the prognosis and survival are limited; the thoracic type has the worst prognosis, while thoracoabdominal EC has a better prognosis (6)(7)(8).
A neonate with EC was seen and treated at Hawassa University Referral Hospital, Hawassa, Ethiopia; this case prompted the present report.
CASE PRESENTATION
A 16-hour-old, vaginally delivered, full-term male neonate, born to a 29-year-old para IV mother, was referred from a primary hospital after an exposed, beating heart was found over the chest wall and the patient experienced difficulty breathing. The delivery was attended at home by a traditional birth attendant. All siblings were alive and healthy. There was no history of consanguinity, infection, radiation, drug or any known herbal exposure, and no family history of this or related congenital heart disease. Previous pregnancy courses were uneventful. Health center antenatal care was uneventful, although no ultrasonography study was done.
Physical examination showed a full-term neonate of 40 weeks with a heart rate of 124/min, a respiratory rate of 60/min and SpO2 of 77%. The upper sternum was deficient, with the heart lying outside the thoracic cavity, without pericardial protection, and with cephalic orientation of its apex (Figures 1 and 2). The abdomen was intact. Laboratory tests revealed: hemoglobin 19 g/dl, total leukocyte count 20.5×10³/ml (differential count: L 25.5%, P 64.3%), platelet count 173×10³/ml, and blood group O positive; imaging studies were not done.
The patient was kept in a temperature-regulated room. Initial prompt treatment included covering the heart with a sterile saline-soaked dressing, systemic antibiotic treatment and supportive care. Then, a staged surgical approach to this defect, with the initial aim of replacing the heart into the thoracic cavity, was chosen. The neonate died twenty minutes after the surgical intervention due to cardiogenic shock despite adequate resuscitative measures. Postmortem examination could not be performed for familial reasons.
DISCUSSION
EC is a rare congenital abnormality with a reported point prevalence of 5.5 to 7.9 per million live births (1). The burden of the disease is inadequately known in Africa due to limited reports (2)(3)(4)(5). It is classified into cervical, thoracic, thoracoabdominal and abdominal types (6). Only a few patients with the thoracic type have survived, and thoracoabdominal EC has a better prognosis. Our case was compatible with the thoracic type (6)(7)(8).
Thoracic EC has been explained embryologically by rupture of the chorion at 3 weeks of gestation, with resultant compression of the thoracic cavity and failure of descent of the heart at this stage. Amniotic bands have also been implicated. EC may occur in isolation or in association with other ventral body wall defects (7).
Antenatal ultrasound during the first trimester helps in the diagnosis of EC. In our case, no ultrasonography study was performed during pregnancy, and EC was diagnosed postnatally after the home-delivered neonate was brought to our hospital (2,3,9).
There was no history of consanguinity, similar cases or any congenital heart disease in our patient's family. Although the genetic causes of EC are not exactly known, certain associations with chromosomal abnormalities have been reported (7). Testing facilities for such cases are nonexistent in Ethiopian settings.
Survival in thoracic EC is limited despite advances in care, and its management is challenging. Aggressive surgical procedures are recommended to increase survival (7,10). In our case, death occurred within twenty minutes of the initial surgical intervention. Earlier studies have likewise shown a lethal course for thoracic EC.
In conclusion, this report underscores the missed opportunity of antenatal ultrasound diagnosis and the challenges in the management of EC in Ethiopia.
|
2017-10-27T20:41:55.031Z
|
2017-03-01T00:00:00.000
|
{
"year": 2017,
"sha1": "83808af3aae05b26e72c1d6a0d1789e586fb9fd1",
"oa_license": "CCBYNCND",
"oa_url": "https://www.ajol.info/index.php/ejhs/article/download/153158/142749",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "83808af3aae05b26e72c1d6a0d1789e586fb9fd1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
219748340
|
pes2o/s2orc
|
v3-fos-license
|
Temporally Forward Nonlinear Scale Space for High Frame Rate and Ultra-Low Delay A-KAZE Matching System
SUMMARY High frame rate and ultra-low delay are the most essential requirements for building excellent human-machine-interaction systems. As a state-of-the-art local keypoint detection and feature extraction algorithm, A-KAZE shows high accuracy and robustness. Nonlinear scale space is one of the most important modules in A-KAZE, but it not only incurs at least one frame of delay but also is not hardware friendly. This paper proposes a hardware-oriented nonlinear scale space for a high frame rate and ultra-low delay A-KAZE matching system. In the proposed matching system, one part of the nonlinear scale space is temporally forwarded and calculated in the previous frame (proposal #1), so that the processing delay is reduced to less than 1 ms. To improve the matching accuracy affected by proposal #1, pre-adjustment of the nonlinear scale space (proposal #2) is proposed: the previous two frames are used for motion estimation to predict the motion vector between the previous frame and the current frame. For further improvement of matching accuracy, pixel-level pre-adjustment (proposal #3) is proposed; the pre-adjustment changes from block-level to pixel-level, and each pixel is assigned a unique motion vector. Experimental results show that the proposed matching system achieves an average matching accuracy higher than 95%, which is 5.88% higher than the existing high frame rate and ultra-low delay matching system. As for hardware performance, the proposed matching system processes VGA videos (640 × 480 pixels/frame) at a speed of 784 frames/second (fps) with a delay of 0.978 ms/frame.
Introduction
High frame rate and ultra-low delay are the most essential requirements in building excellent computer-vision-based human-machine interactions, such as projection mapping [1], simultaneous localization and mapping (SLAM) [2], and automatic driving [3]. Currently, on the one hand, a common video runs at about 60 frames/second, which is much lower than the requirement of high frame rate; on the other hand, the processing delay of most vision systems is relatively long, such as 0.1 second/frame or 0.01 second/frame, which also cannot satisfy the requirement of ultra-low delay. An important existing work which also aims at building a high frame rate and ultra-low delay matching system was proposed by Hu and Ikenaga [4]. Although the frame rate of this work [4] is high enough and its processing delay is low, its robustness is limited because it uses several fixed templates to handle changes in object scale. Hu's work [4] first detects keypoints in each video frame, and then matches the keypoints detected in each video frame with the keypoints detected on templates prepared in advance. The keypoint detection module is based on FAST corner detection [5], and the matching algorithm is based on a well-known local feature descriptor named oriented FAST and rotated BRIEF (ORB) [6], because FAST and ORB are efficient and hardware friendly. With an FPGA-based hardware implementation, the image processing core of Hu's work [4] processes VGA resolution (640 × 480) video sequences at more than 1000 fps, and the processing delay is 0.8083 ms/frame. The processing speed and delay satisfy the requirements of high frame rate and ultra-low delay, but the robustness is not high enough because three size-fixed templates are used for matching in order to handle changes in object scale. For example, when the scale of the target object changes greatly, the matching accuracy drops sharply.
In order to improve the matching performance, especially for scale change, this paper focuses on designing a high frame rate and ultra-low delay nonlinear scale space based on the accelerated KAZE (A-KAZE) algorithm [7]. A-KAZE [7] is a state-of-the-art matching algorithm which generates a nonlinear scale space to acquire more scale-invariant features. Compared with other local feature extraction algorithms, such as SIFT [8] and SURF [9], nonlinear scale space is able to keep more boundary information of the original image, leading to higher matching accuracy. Meanwhile, since the descriptor of the A-KAZE algorithm is binary, it is much faster than other algorithms. Because of the above reasons, the A-KAZE algorithm is more suitable for a high accuracy, high frame rate, and ultra-low delay matching system. The process of the A-KAZE algorithm is that, for each input image, it first generates the nonlinear scale space, then detects keypoints and generates a descriptor for each keypoint, and finally uses these descriptors to do matching. Nonlinear scale space generation is the most important module in the whole A-KAZE algorithm, because this part influences the matching accuracy the most. However, to achieve high frame rate and ultra-low delay with a hardware implementation, some problems need to be solved. First, nonlinear scale space generation needs to perform complex iterations many times, as the input of each octave is the output of the previous octave, so each octave needs to wait for the previous octave to finish. The delay of this kind of processing is obviously more than one frame, while a high frame rate and ultra-low delay matching system requires the processing of each frame to finish within one frame of delay. This is a serious problem preventing the realization of a high frame rate and ultra-low delay matching system. Second, for each sublevel, nonlinear diffusion needs to be implemented; it is widely known that the nonlinear diffusion equation contains derivatives and divisions, which are not hardware friendly. Third, the A-KAZE algorithm adopts an unfixed number of iterations to approximate the results of the nonlinear diffusion equation step by step, and an unfixed number of iterations cannot be directly implemented in hardware. What's more, each sublevel uses the previous sublevel's results to do nonlinear diffusion, and data dependency also exists between octaves. In summary, because of the long delay, complex calculations, unfixed number of iterations and data dependency, the original nonlinear scale space of A-KAZE is difficult to implement in a high frame rate and ultra-low delay matching system. This paper proposes a temporally forward nonlinear scale space for a high frame rate and ultra-low delay A-KAZE matching system. In the proposed system, to remove the complex calculations and the unfixed number of iterations, the HFD algorithm [10] is utilized to replace the nonlinear diffusion equation. All the calculations of the HFD algorithm involve only addition, subtraction, multiplication and bit shifting. Furthermore, the HFD algorithm does not require an unknown number of iterations. To solve the problem of data dependency between octaves and to meet the requirement of ultra-low delay for high frame rate video, a structure named temporally forward nonlinear scale space (proposal #1) is proposed. The main idea of this proposal is that one part of the scale space has already been processed in the previous frame.
What's more, to improve the matching accuracy, pre-adjustment of the nonlinear scale space (proposal #2) and pixel-level pre-adjustment (proposal #3) are proposed. The relations among the three proposals are as follows. Proposal #1 is the top-level structure designed for the purpose of high frame rate and ultra-low delay. Although proposal #1 makes the goal of high frame rate and ultra-low delay achievable, the matching accuracy is decreased. Proposal #2 is designed to recover the matching accuracy decreased by proposal #1. Proposal #3 further improves proposal #2 to raise the matching accuracy. The work described in this paper is an extension of our previous conference paper [11], with significant new content including both new proposals and new experimental results.
The rest of this paper is organized as follows. Section 2 presents the three proposals one by one. Section 3 presents the hardware structure of the proposed high frame rate and ultra-low delay system. Experimental results on both of software and hardware are reported and analysed in Sect. 4, followed by conclusion and future works given in Sect. 5.
Proposed A-KAZE Matching System
The framework of the whole proposed matching system is shown in Fig. 1. For each input image to do matching, the proposed system builds nonlinear scale space firstly, and then performs keypoint detection and descriptor generation. The basic matching system which just contains keypoint detection step and descriptor generation step was designed in conventional work [4], [12]. Harris Corner Detector [13] is used to detect the corners of the input image as keypoints. For all these keypoints, binary descriptors are generated by binary robust independent elementary features (BRIEF) [14]. The obtained descriptors are used to do matching. The matching mechanism is brute-force matching measured by Hamming distance, i.e. if the Hamming distance is smaller than threshold, the corresponding descriptors are matched, otherwise they are not matched.
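As a rough sketch of this brute-force Hamming matching step (written in Python with NumPy purely for illustration; the function names, the assumed 256-bit descriptor length and the threshold value are not from the paper, whose actual implementation runs on an FPGA):

```python
import numpy as np

def hamming_distance(d1, d2):
    # Number of differing bits between two binary descriptors stored as
    # uint8 arrays (e.g. 32 bytes for an assumed 256-bit BRIEF descriptor).
    return int(np.unpackbits(np.bitwise_xor(d1, d2)).sum())

def brute_force_match(desc_a, desc_b, threshold=64):
    # For every descriptor in desc_a, find the nearest descriptor in desc_b
    # by Hamming distance and accept the pair only if the distance is below
    # the threshold, as described in the text (threshold value illustrative).
    matches = []
    for i, da in enumerate(desc_a):
        dists = [hamming_distance(da, db) for db in desc_b]
        j = int(np.argmin(dists))
        if dists[j] < threshold:
            matches.append((i, j, dists[j]))
    return matches
```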
Keypoint detection is to identify locations that are invariant to scale change. This can be achieved by searching for stable keypoints across all possible scales, using a continuous function. All the possible scales constitute a scale space. Following the definitions in SIFT [8], a scale space is composed of several octaves, and an octave is composed of several sublevels. Different algorithms use different schemes to construct octaves and sublevels for building scale space. For example, in the well-known SIFT [8], for each octave of scale space, the initial image is repeatedly convolved with Gaussians to produce a set of scale space images. Adjacent Gaussian images are subtracted to produce the difference-of-Gaussian images. After each octave, the Gaussian image is down-sampled by a factor of 2, and the process repeated. In A-KAZE algorithm [7], the octaves and sublevels are obtained through a nonlinear way which does not perform down-sampling at each new octave.
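For contrast with the nonlinear approach, the following sketch builds the linear (Gaussian/difference-of-Gaussians) scale space described above for SIFT; it assumes SciPy's Gaussian filter and a conventional sigma schedule, and is only meant to make the octave/sublevel structure concrete, not to reproduce A-KAZE's nonlinear diffusion:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_dog_pyramid(img, n_octaves=4, n_sublevels=3, sigma0=1.6):
    # Linear scale space: each octave blurs the image with increasing sigma,
    # adjacent levels are subtracted (difference of Gaussians), and the image
    # is downsampled by 2 between octaves.  A-KAZE keeps a similar
    # octave/sublevel indexing but uses nonlinear diffusion and no downsampling.
    img = img.astype(np.float32)
    dogs = []
    for _ in range(n_octaves):
        sigmas = [sigma0 * 2.0 ** (s / n_sublevels) for s in range(n_sublevels + 1)]
        levels = [gaussian_filter(img, sigma) for sigma in sigmas]
        dogs.append([levels[s + 1] - levels[s] for s in range(n_sublevels)])
        img = levels[-1][::2, ::2]   # halve resolution for the next octave
    return dogs
```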
For each frame, the nonlinear scale space involves two frame-level processes. In the original A-KAZE nonlinear scale space structure, there exists a data dependency between the first frame-level process and the second frame-level process; in particular, each frame-level process needs to wait for the previous frame-level process to finish. (Fig. 1 shows the framework of the proposed high frame rate and ultra-low delay A-KAZE matching system, in which P1, P2, and P3 are short for proposal #1, proposal #2, and proposal #3, respectively.) As a result, the delay of the original structure is very long, so it is unable to meet the requirement of ultra-low delay. In the proposed nonlinear scale space structure, however, the second frame-level process of the current frame uses the results of the first process of the previous frame, with only some pre-adjustments to preserve the matching accuracy. It is clear that the nonlinear scale space of the proposed structure is twice as fast as the original algorithm. The processing time for each frame of the proposed structure is less than one frame time; it therefore meets the requirements of a high frame rate and ultra-low delay matching system. It is worth noting that the proposed concept can be applied not only to the A-KAZE nonlinear scale space, to implement it in hardware and achieve ultra-low processing delay, but also to any algorithm that needs several frame-level processes because of data dependency.
In the step of nonlinear scale space generation, HFD [10] algorithm is utilized. In the proposed structure, octave 0 of current frame is generated in the step of nonlinear scale space, and this octave 0's information will be used. Through the process of the pre-adjustment, this octave is utilized as the octave 1 of next frame. In the process of pre-adjustment, pixel-level pre-adjustment is adopted. The structure of finishing a part of current nonlinear scale space in previous frame is proposal #1, which temporally forwards nonlinear scale space. Doing some adjustment on previous octave 0 to obtain current octave 1 is proposal #2, i.e. preadjustment of nonlinear scale space. What's more, to further improve the matching accuracy of pre-adjustment, proposal #3, i.e. pixel-level pre-adjustment is proposed at the same step.
Furthermore, to make clear the parallelism of the whole proposed structure, it is needed to be explained with more details. Octave 1 of current frame can be obtained before finishing the process of the octave 0. And the pre-adjustment for the octave 1 of the next frame is parallel with the step of keypoint detection of current frame as well. As a result, there is no extra delay of pre-adjustment step. The proposed structure is able to save a lot of time compared with the original A-KAZE algorithm's nonlinear scale space structure.
Proposal #1: Temporally Forward Nonlinear Scale Space
Proposal #1 completely changes the original structure of the nonlinear scale space generation to meet the requirements of high frame rate and ultra-low delay. Proposal #1 utilizes the property of high frame rate video that temporal coherence is strong between consecutive frames. Because the frame rate of the input video reaches 784 fps, the differences between adjacent frames are small, and the similarity between adjacent frames is high even under object translation or rotation. Because of this high similarity, the previous frame can be used as an approximation of the next frame without losing much accuracy.
The comparisons of the original structure and the proposed structure is shown in Fig. 2. In the original structure, octave 0 and octave 1 are processed sequentially and both of them need one frame time, while in the proposed structure, octave 0 and octave 1 are processed in parallel. Also, keypoint and descriptor parts are parallel with them, the whole process of nonlinear scale space finishes in one frame time. For the original structure, both octave 0 and octave 1 are generated using the information of current frame k. But in our proposal #1, only octave 0 is generated by frame k. The octave 1 of frame k is generated by the information of frame k − 1. Because there is no data dependency between the proposed nonlinear scale space's octave 0 and octave 1, octave 1 has no necessity to wait for the processing of octave 0. As a result, problem of the conventional work is solved and delay is decreased to be less than one frame time.
The detailed process of proposal #1 is as follows. To achieve high robustness to scale change, the mechanism of nonlinear scale space requires each octave to be smaller than the previous one. To this end, in proposal #1, octave #1 of the current frame is the downsampled octave #0 of the previous frame, and only octave #0 of the current frame needs to be generated. The downsampling is calculated by averaging the pixels within a local region. Formally, for downsampling by a factor of s, the result value is p_k = (1/s^2) Σ_{I_i ∈ win(k)} I_i, where p_k is the downsampled result and I_i denotes the pixel intensity in the local region win(k) of size s^2.
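A minimal sketch of this averaging step (NumPy is an assumption; image dimensions not divisible by s are simply cropped here, whereas the real hardware may handle borders differently):

```python
import numpy as np

def downsample_by_averaging(img, s=2):
    # p_k = (1 / s^2) * sum of the pixel intensities I_i in the local
    # region win(k): average non-overlapping s x s blocks.
    h, w = img.shape
    h2, w2 = h - h % s, w - w % s              # crop so blocks tile exactly
    blocks = img[:h2, :w2].reshape(h2 // s, s, w2 // s, s)
    return blocks.mean(axis=(1, 3))
```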
Proposal #2: Pre-Adjustment of Nonlinear Scale Space
Pre-adjustment of nonlinear scale space is achieved through motion estimation. Motion estimation [15] is a general concept widely used in many computer vision related field, such as video compression and object tracking. The basic process of motion estimation is shown in Fig. 3. Since there exists correlation between the adjacent frames, the best match can be found in the search area in the reference frame. The position differences between the current block and the best match block are the motion vector. The process of obtaining a motion vector is called motion estimation. In this section, the blocks for motion estimation are defined as nonoverlapping grids of 16 × 16 pixels in the image.
Selective Gray-Coded Bit-Plane Based Low-Complexity Motion Estimation
There are many kinds of motion estimation algorithms. Among them, selective gray-coded bit-plane based low-complexity motion estimation [16] stands out because of its high processing speed and good performance. The processing speed of this motion estimation method meets the requirements of a high frame rate and ultra-low delay matching system. This algorithm first generates the gray-coded bit-planes [17] of the input image. The K-bit gray code of a pixel value is computed by [17] g_{K-1} = a_{K-1} and g_k = a_k ⊕ a_{k+1} (0 ≤ k ≤ K-2), where a means the binary code of a pixel value, g means the gray code of this pixel value, and "⊕" denotes the XOR operation.
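A small sketch of the gray-code conversion and bit-plane extraction (Python/NumPy assumed); the closed form g = a XOR (a >> 1) is the standard equivalent of the per-bit definition above:

```python
import numpy as np

def gray_code(img):
    # Gray code of each 8-bit pixel: g = a XOR (a >> 1), i.e. the top bit is
    # copied and every other bit is XORed with its more significant neighbour.
    a = img.astype(np.uint8)
    return np.bitwise_xor(a, a >> 1)

def bit_plane(gray_img, k):
    # Binary bit-plane k (0 = least significant) of the gray-coded image.
    return ((gray_img >> k) & 1).astype(np.uint8)

# Planes used for matching in the text: g6, g5 and g4, e.g.
# g = gray_code(frame); planes = [bit_plane(g, k) for k in (6, 5, 4)]
```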
The meaning of a gray-coded bit-plane is as follows: g_k means that the pixel value of this bit-plane is just the value at the k-th position of the whole gray code of this pixel. The reason for choosing gray-coded bit-planes instead of binary-coded bit-planes is that the gray code is robust, as successive gray code words differ in only one bit position. Secondly, this motion estimation algorithm chooses three bit-planes of highest position, because they are considered to contain the most significant information of the input image. The higher the position is, the more important the bit is, because a bit at a higher position has a stronger influence on the pixel value than a lower one; at the same time, it carries less detail. For example, if a_7 changes from 1 to 0, the pixel value becomes 128 lower, but if a_0 changes from 1 to 0, the pixel value becomes only 1 lower.
The importance of position in the gray code is similar to that in the binary code. The 8-bit gray-coded bit-planes are shown in Fig. 4. Through the analysis of gray-coded bit-planes, it can be found that although g_7 occupies the most important position, it does not provide any motion information, as it is just a large white or black area. What's more, the lowest four bit-planes include too much detailed information (much of it background), so accurate motion information cannot be obtained from them. As a result, the next three gray-coded bit-planes, i.e. g_6, g_5 and g_4, which do yield accurate motion information, are chosen for motion estimation in the proposed system. After obtaining the three gray-coded bit-planes of the input image, XOR calculations are performed to find the most similar block for motion estimation. The matching criterion (MC) is [17] MC(m, n) = Σ_{k=NTB}^{K-1} Σ_{(x,y)} [ g^c_k(x, y) ⊕ g^r_k(x + m, y + n) ], where NTB denotes the number of truncated bits. When NTB = 5, it means that only the three most important bit-planes are utilized in the matching process. m and n denote the displacement of the candidate block, N is the size of the search window, g^c_k is the gray-coded bit-plane in the current frame, and g^r_k is the gray-coded bit-plane in the reference frame. Because all these calculations are very simple, this motion estimation algorithm is very fast and hardware friendly.
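The following sketch performs the full-search block matching on the selected bit-planes; it assumes the matching criterion simply sums the XOR mismatches of the chosen planes over the block (the standard form for this family of methods) and omits bounds checking and the hardware-level optimisations:

```python
import numpy as np

def matching_cost(cur_planes, ref_planes, x0, y0, m, n, block=16):
    # Sum of XOR mismatches over the chosen bit-planes for a candidate
    # displacement (m, n) of the block whose top-left corner is (x0, y0).
    cost = 0
    for gc, gr in zip(cur_planes, ref_planes):
        cur = gc[y0:y0 + block, x0:x0 + block]
        ref = gr[y0 + n:y0 + n + block, x0 + m:x0 + m + block]
        cost += int(np.bitwise_xor(cur, ref).sum())
    return cost

def block_motion_vector(cur_planes, ref_planes, x0, y0, block=16, search=8):
    # Exhaustive search over a (2*search + 1)^2 window; returns the (m, n)
    # minimising the matching criterion.  Assumes the block plus the search
    # range stays inside the image.
    best_cost, best_mv = None, (0, 0)
    for n in range(-search, search + 1):
        for m in range(-search, search + 1):
            c = matching_cost(cur_planes, ref_planes, x0, y0, m, n, block)
            if best_cost is None or c < best_cost:
                best_cost, best_mv = c, (m, n)
    return best_mv
```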
Process of Pre-Adjustment
Because of the differences between the previous frame and the current frame, if only proposal #1 is utilized, the matching performance of the whole proposed matching system decreases, although the aim of proposal #1 is not to achieve high accuracy. To solve this problem, the purpose of proposal #2, i.e. pre-adjustment of nonlinear scale space, is to heighten the similarity between the octave 1 calculated by previous frame and the octave 1 of current frame. And then, the matching accuracy is raised. At the same time, to meet the requirements of high frame rate and ultra-low delay matching system, the process of adjustment should be finished before the current frame comes. Pre-adjustment is therefore proposed. Previous two frame's information are utilized to do motion estimation to predict the motion vector from previous frame to current frame. Adding the motion vector to the calculated octave 1, the predicted octave 1 of current frame is obtained.
Selective gray-coded bit-plane based low-complexity motion estimation is used to do motion estimation and get the motion vector. The comparisons between the original octave 1 of proposed nonlinear scale space and adding preadjustment of nonlinear scale space are shown in Fig. 5. If just proposal #1 is utilized, the information of frame k − 1 is directly used to do downsampling and get the octave 1 of frame k. However, after adding the pre-adjustment, the octave 1 of frame k is obtained by getting the motion vector through the motion estimation calculated by frame k − 1 and frame k − 2. And then using this motion vector to do motion prediction to predict the motion between frame k and frame k − 1. Predicted octave 1 of frame k is obtained by adding the predicted motion to downsampled octave 0 of frame k − 1.
The coordinate of a pixel in the predicted octave 1, (x_prediction, y_prediction), is calculated by (x_prediction, y_prediction) = (x_ori + x_motion, y_ori + y_motion), (5) where x_ori and y_ori denote the original coordinates, and x_motion and y_motion denote the motion vector components. For the border problem, replication is used when a pixel moves elsewhere and no pixel is left at a given position.
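A sketch of applying equation (5) to build the predicted octave 1 (NumPy assumed; the motion field is taken to be an integer vector per pixel, broadcast from the per-block vectors, and uncovered positions simply keep the previous value as a crude stand-in for the replication mentioned above):

```python
import numpy as np

def pre_adjust(prev_octave, motion_field):
    # (x_prediction, y_prediction) = (x_ori + x_motion, y_ori + y_motion):
    # move each pixel of the previous frame's downsampled octave by its
    # motion vector, clamping target coordinates to the image borders.
    h, w = prev_octave.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x_pred = np.clip(xs + motion_field[..., 0], 0, w - 1)
    y_pred = np.clip(ys + motion_field[..., 1], 0, h - 1)
    predicted = prev_octave.copy()        # uncovered pixels keep old values
    predicted[y_pred, x_pred] = prev_octave[ys, xs]
    return predicted
```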
Proposal #3: Pixel-Level Pre-Adjustment
For the original motion estimation method utilized in the pre-adjustment of proposal #2, the whole block is used to calculate the motion vector. As a result, each pixel in the block has the same motion vector, and all pixels are moved in the same direction. But this method has a problem, as shown in Fig. 6: pixels within the same block can have different motions in real-world situations, especially in the case of rotation. This is the weakness of proposal #2: because different pixels are assigned the same motion vector and background pixels move in the same way as object pixels, the accuracy of the original proposal #2 is not high enough. To solve these problems, proposal #3, i.e. pixel-level pre-adjustment, is proposed. In proposal #2, we follow the standard selective gray-coded bit-plane based motion estimation [16], so the blocks are non-overlapping. But in proposal #3, to estimate a motion vector for every pixel, the blocks are overlapped.
The comparison between the original pre-adjustment method and pixel-level pre-adjustment is shown in Fig. 7. In the original method, all the pixels of a block are used to do motion estimation and the obtained motion vector is distributed to every pixel in that block. In proposal #3, the motion vector obtained by calculating over the whole block is assigned only to the pixel at the center of the block, so each pixel has its own independent pre-adjustment result. The pre-adjustment thus becomes more precise, moving from block-level to pixel-level.
Moreover, changing pre-adjustment method from the whole block to only the center pixel does not cause any additional delay to the matching system. Because the whole pre-adjustment part is processed in parallel with previous frame's keypoint detection part. Also, due to the very simple calculations and high parallelism of pre-adjustment, the processing speed of this improved pre-adjustment method meets the requirements of high frame rate and ultra-low delay. Figure 8 shows the detailed process of proposal #3. The step size of original pre-adjustment is a block length, the calculation of motion estimation is block-by-block. However, for proposal #3, the step size of motion estimation is just a pixel, it is calculated pixel-by-pixel. As a result, each pixel has different and independent motion vector. The accuracy of pre-adjustment is therefore improved a lot.
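A sketch of the pixel-level variant, reusing block_motion_vector from the earlier sketch: the matching block is slid one pixel at a time and the resulting vector is assigned only to the block's center pixel (exhaustive and slow in software, shown only to make the per-pixel assignment explicit; image borders are skipped):

```python
import numpy as np

def pixel_level_motion_field(cur_planes, ref_planes, block=16, search=8):
    # One motion vector per pixel: estimate the vector of the block centered
    # on each pixel and store it only at that center position.
    # block_motion_vector is the full-search routine from the earlier sketch.
    h, w = cur_planes[0].shape
    field = np.zeros((h, w, 2), dtype=int)
    half = block // 2
    for y in range(search + half, h - search - half):
        for x in range(search + half, w - search - half):
            m, n = block_motion_vector(cur_planes, ref_planes,
                                       x - half, y - half, block, search)
            field[y, x] = (m, n)
    return field
```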
Hardware Structure of the Matching System
High frame rate and ultra-low delay matching system consists of three parts including PC, high frame rate camera, and FPGA board, as shown in Fig. 10. Each image captured by high frame rate camera is transported to FPGA in form of pixel stream. In the FPGA board, matching algorithm is performed, and the matching results are transported to PC. Figure 9 shows the specific hardware structure of the proposed matching system. The image information captured by high frame rate camera is received and processed by the camera link receiver module. And then the pixel intensity information is transported to image processing core which contains modules of nonlinear scale space, pre-adjustment, keypoint detection, descriptor generation and matching. The five main modules are connected with register access. What's more, the register access is attached with USB 3.0 interface. Through this way, PC is able to communicate with FPGA to adjust relevant parameters and threshold values which are used in image processing core.
In the nonlinear scale space module, octave 0 is calculated. This module accesses the memory: the calculated octave 0 of the current frame is saved in memory to do the pre-adjustment for octave 1 of the next frame (proposal #1). The previous two frames are also saved in memory to do the pixel-level motion estimation for the pre-adjustment (proposal #2 and proposal #3). The nonlinear scale space module and the pre-adjustment module are connected only through register access and memory access, respectively, and there is no data dependency between these two modules. However, saving all the data of the nonlinear scale space module and the pre-adjustment module in the memory of the FPGA board causes a memory-insufficiency problem. Downsampling therefore has to be implemented before saving data to memory; after that, the data saved in memory is only one quarter of the original amount. Via the memory controller, the output matching results are written into double-data-rate three synchronous dynamic random access memory (DDR3-SDRAM), and the output results are read by the PC through the DDR3-SDRAM.
Evaluation Environment
To evaluate the feasibility and matching accuracy of the proposals, experiments on both software and hardware are performed. The matching accuracy is evaluated through software experiments, and the comparison between the proposals' matching performance and that of previous work is evaluated in software as well. The software evaluation environment is Visual Studio 2013 with OpenCV 2.4.11 on a PC running the Windows 10 Professional operating system. The experiments on hardware are used to evaluate the processing speed and delay, to find out whether the designed matching system meets the requirements of high frame rate and ultra-low delay. At the same time, hardware resource utilization is also evaluated to find out whether the proposals are feasible to implement in real applications. The hardware environment consists of three parts, as shown in Fig. 12: a Xilinx Kintex-7 XC7K325T FPGA board, a BASLER acA2000-340 high frame rate camera, and a PC with a Core i7-4790 3.6 GHz CPU. The general specifications of the FPGA board are shown in Table 1. Logic synthesis and implementation for the FPGA board are performed with Vivado 2017.2.
The dataset utilized to evaluate the matching accuracy of the proposed matching system contains four kinds of test sequences. These test sequences are all captured by the high frame rate camera. The frame rate is 784 fps, the resolution of each frame is 640 × 480 pixels, and there are 1200 frames in each sequence. The evaluation dataset contains representative situations, such as translation, rotation, illumination change, and scale change. Typical frames of these sequences are shown in Fig. 11. Translation in Fig. 11(a) means the object moves from top to bottom. In the situation of rotation, the object is in the center of the images and rotates in the plane. Illumination change means that the object does not move but the lighting condition is changed, such as from dark to bright. For scale change, the size of the object is changed by moving it towards the camera from a distance. Before matching, the keypoints detected in the two frames are manually observed to count the number of ground-truth matches. After matching two frames by our proposals and the other comparison algorithms, the keypoints in the two frames are connected; by displaying the connections one by one, all the matches are manually observed to distinguish and count correct ones and false ones. The widely used metric F-score [18] is adopted to evaluate the matching accuracy of the designed high frame rate and ultra-low delay matching system. The definition of F-score is based on two fundamental metrics, Precision and Recall, i.e. Precision = (# correct matches) / (# correct matches + # false matches) (6) and Recall = (# correct matches) / (# total matches in ground truth), (7) where Precision is defined as the percentage of correct matches in all the matches. Higher Precision means that the system makes fewer mistakes in matching corresponding keypoints. Recall is defined as the percentage of correct matches in ground-truth matches. Higher Recall means that the system loses fewer corresponding keypoints. Considering Precision and Recall simultaneously, F-score is defined as F-score = 2 × Precision × Recall / (Precision + Recall). Higher F-score means that the system obtains more true matches and loses fewer true matches. Therefore, higher F-score indicates higher performance.
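A tiny helper implementing equations (6) and (7) together with the usual F-score combination assumed here (Python; the match counts would come from the manual inspection described above):

```python
def precision_recall_fscore(n_correct, n_false, n_ground_truth):
    # Precision = correct / (correct + false)
    # Recall    = correct / total ground-truth matches
    # F-score   = 2 * Precision * Recall / (Precision + Recall)
    precision = n_correct / (n_correct + n_false)
    recall = n_correct / n_ground_truth
    f_score = 2 * precision * recall / (precision + recall)
    return precision, recall, f_score

# Example: 191 correct and 9 false matches out of 200 ground-truth pairs
# gives Precision 0.955, Recall 0.955 and F-score 0.955.
```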
Software Evaluation Results
The matching performances of the proposals, the original A-KAZE [7] and the existing high frame rate and ultra-low delay matching system [4], [12] are evaluated on the four test sequences. The evaluation results are reported in Table 2. Proposal #1 means that only the structure of the nonlinear scale space is changed to make it temporally forward. Proposals #1 & #2 mean that pre-adjustment is added to proposal #1. Proposals #1 & #2 & #3 refer to the whole proposed matching system, with the pre-adjustment changed from block-level to pixel-level. The keypoint detection and descriptor generation algorithms of all the proposed configurations are the same as those of the existing matching system. Firstly, compared with the existing matching system, the original A-KAZE algorithm shows a much better performance in all cases; the average F-score of the A-KAZE algorithm is 7.69% higher than that of the existing matching system. When only proposal #1 is implemented, i.e. only the previous frame's information is used to calculate octave 1 of the current frame, the matching accuracy decreases considerably, especially in the situation of rotation: the average F-score of proposal #1 is 4.88% lower than the original A-KAZE, and in the case of rotation it is 7.90% lower. When proposal #2 is added, the average F-score increases by about 1%, but the matching performance for rotation is still low, at 89.98%. To further improve the matching accuracy, especially for rotation, proposal #3 is added. The evaluation results show that, after adding proposal #3 to the designed nonlinear scale space, the matching performance improves considerably in all cases. Finally, the matching accuracy of the whole designed system is 95.58%, and the matching accuracies for rotation and scale change become 5.76% and 6.59% higher than those of the existing matching system [4].
Meanwhile, compared with the original A-KAZE algorithm, the proposed matching system's accuracy in the situation of rotation is still a little low. There are two main reasons. First, the descriptor generated by the BRIEF algorithm is not highly robust to rotation. Second, the motion vectors between two frames can differ considerably, so using the previous two frames to do motion estimation to predict the motion of the current frame leads to low robustness to rotation. To solve these problems, on the one hand, it is possible to use more rotation-robust descriptors; on the other hand, more accurate motion estimation methods could be adopted.
Hardware Evaluation Results
The proposed high frame rate and ultra-low delay matching system is implemented on the FPGA to evaluate its feasibility in real applications. Table 3 reports the utilization of hardware resources by the proposed matching system. One can conclude from Table 3 that, for most types of hardware resource, less than half of the total amount on the FPGA board is utilized, which means the implemented high frame rate and ultra-low delay A-KAZE matching system is resource-saving. One can also find that the utilization of LUTs is 96%, because generating the sublevels and octaves requires saving a large amount of data in RAM, and because the HFD algorithm needs to do a great quantity of calculations whose input is a 7 × 7 matrix. There are several potential solutions to reduce the utilization of LUTs. Currently, two same-sized RAMs are used to save one image in order to decrease delay; if this structure is changed from parallel to sequential, replacing the dual-port RAM with a single-port RAM, the memory used to save the data can be halved. Of course, the processing delay will then have one more clock cycle of delay because of the time consumed by reading and writing. Also, a lot of intermediate data needs to be saved in the calculation of the HFD algorithm, so finding a simpler nonlinear diffusion filter algorithm could save more LUTs. Table 4 reports the hardware performance of the proposed A-KAZE matching system. The input frequency and the input frame rate are 100.00 MHz and 784 fps, respectively. The processing delay of the proposed A-KAZE matching system is 0.978 ms/frame; since the delay is less than 1 ms/frame, it satisfies the requirement of ultra-low delay. As a summary, the proposed matching system processes video of 784 fps with a delay of 0.978 ms/frame; it successfully achieves the goal of high frame rate and ultra-low delay.
Conclusion and Future Works
A temporally forward nonlinear scale space for a high frame rate and ultra-low delay A-KAZE matching system has been proposed in this paper. In the proposed matching system, one part of the nonlinear scale space is temporally forwarded and calculated in the previous frame, so that the processing delay is reduced to less than 1 ms. To improve the matching accuracy affected by the temporal forwarding, the previous two frames are used for motion estimation to predict the motion vector between the previous frame and the current frame. For further improvement of the matching accuracy, pixel-level pre-adjustment is proposed: the pre-adjustment changes from block-level to pixel-level, and each pixel is assigned a unique motion vector. Experimental results show that the proposed matching system achieves high matching accuracy and processes VGA videos at a speed of 784 fps with a delay of 0.978 ms/frame.
In future work, more rotation-robust descriptors and more precise motion estimation methods will be investigated to further improve the robustness of the proposed high frame rate and ultra-low delay matching system. In particular, proposal #3 straightforwardly extends block-level motion estimation to pixel-level motion estimation, which strikes a balance between speed and robustness. How to further improve the robustness of the motion estimation with simple calculations and hardware-friendly operations is an interesting direction for future work.
Development of the radio frequency quadrupole proton linac for ESS-Bilbao
The Radio Frequency Quadrupole (RFQ) linear accelerator for ESS-Bilbao is described. This device will complete the ESS-Bilbao injection chain after the ion source and the LEBT. The design was finished in 2015 and machining of the accelerator cavity started in 2016. The RFQ is a 4-vane structure, designed to accelerate protons from 45 keV to 3.0 MeV and operating at 352.2 MHz in pulsed mode with a duty cycle of up to 10%. The total length is about 3.1 m, divided into 4 segments. Each segment is itself assembled from four components, named vanes, by using polymeric vacuum gaskets with no brazing among them. Notable aspects of the design are the constant mean aperture R0, vane radius ρ and thus ρ/R0 ratio, as well as the uniform inter-vane voltage. Novel procedures for the design of the modulation, and an integrated beam dynamics and electromagnetic design, have been developed for this task. In this paper, the complete design procedure and its results are presented, including beam dynamics, RF cavity design, field flatness and frequency tuning, cooling and thermo-mechanical design.
Introduction
ESS-Bilbao [1] is a Spanish public Consortium of the Central and Basque Governments. It is the institution selected to supply the Spanish in-kind contribution to the European Spallation Source ERIC, ESS [2]. The contributions are focused on different areas: accelerator (the complete MEBT (Medium Energy Beam Transport) section and the RF systems of the warm section), target, and neutron instruments (MIRACLES). In addition to these contributions, local projects are also under development. The ESS-Bilbao injector is a proton injector that consists of an ECR (Electron Cyclotron Resonance) ion source and a LEBT (Low Energy Beam Transport) system, already in operation [3]. This injector will be completed with the RFQ linac presented in this communication. The resulting 3 MeV machine can then be used as the linac of a compact accelerator-driven neutron source.
The ESS-Bilbao RFQ design was carried out by a local team. The ISIS-FETS [4] and Linac4 [5] (also ESS [6], based on IPHI [7]) RFQs were taken as references and initial state-of-the-art models. However, the frozen design incorporates mixed selection of characteristics. The rounded lobe shape (the so-called Montgolfier shape), external geometric characteristics and, more importantly, the assembly procedure without brazing were adapted from ISIS-FETS. The tuning approach (based on movable tuners) and the cooling strategy of the FETS RFQ were replaced by more standard cooling based on longitudinal drilled channels in the vane, and fine tuning by cooling water temperature. But due to the external characteristics (two minor and two major vanes with vacuum pump grids) there are no longitudinal cooling channels in the body of the cavity, so the approach to tuning is different than in other RFQs. The complete design process has been collected in a Technical Design Report (TDR) [8]. This paper discusses the different approaches and the design route chosen and the corresponding tools that have been used. The conclusions can be useful to other groups attempting to design an RFQ linac for use in a CANS facility.
The non-conventional approach to the design of the modulation and of the resonant cavity required developing home-made computer tools or adapting existing ones. The software used for each design step is described in the corresponding section. The RFQ is currently under fabrication, and first tests are expected to start during 2019.
The RFQ is divided into four segments of around 800 mm in length. Each segment is itself an assembly of four components, two major vanes and two minor vanes, as depicted in Fig. 1. There are no coupling structures between segments. The vanes are assembled by means of polymeric vacuum gaskets (3D O-rings), using no brazing or welding. Finger strips are also added to all copper contact surfaces to improve RF contact. This strategy was adapted from the ISIS-FETS RFQ [4] and has also been proposed for other projects [9]. The location of the vacuum ports (in the upper and lower major vanes) is also taken from the FETS RFQ. The cooling scheme differs from that RFQ, though, as drilled channels, instead of cooling pockets, are used for the vacuum grid and along the vane tips.
Cavity tuning and field stabilization will be provided by static plunger tuners, while the cooling water temperature will be used to fine-tune the RFQ during operation. Movable plunger tuners can, nonetheless, be used if needed. Cooling channels are drilled only along the vane tips (and not in the structure body). This was decided on mechanical and fabrication criteria. It means that frequency control by the cooling water temperature will operate only on the vane-tip channels, contrary to other RFQs where two water circuits (vane tip and body) are available for tuning [5-7, 10, 11]. Extensive thermo-mechanical and electromagnetic simulation studies have been done to prove that the operational control can be performed this way. Tests with the first segment will verify this approach. The RFQ is designed to accelerate protons from 45 keV to 3.0 MeV. It is a pulsed machine, operating at 352.2 MHz with an expected duty cycle in operation of 5% (designed for up to 10%). Notable aspects of the design are the constant mean aperture R0, vane radius ρ and thus ρ/R0 ratio. The inter-vane voltage is also uniform, with a value of 85 kV. The main characteristics of the RFQ can be found in Table 1.
In each segment there are 16 tuner ports (∅37 mm) that can be used for static plunger tuners. The power coupler flange also fits these ports, so it can be inserted at any of these positions. Additionally, 8 ports of ∅16 mm are provided for pick-ups or other sensors that may be needed at any moment. The vacuum grid ports are designed for standard 210 mm flanges. All ports are machined in the major vanes.
Modulation design
The ESS-Bilbao RFQ modulation is the result of an optimization process. The modulation is designed for an inter-vane voltage of 85 kV, uniform throughout the entire length. The vane radius (ρ) is also constant, so to obtain a uniform local frequency and field flatness the mean aperture R0 should also be constant. The modulation follows the shape of a 2-term expansion of the inter-vane voltage [12] and has been designed using a modified version of the RFQSIM code [13].
The aim of the modulation design was to obtain an RFQ shorter than 4λ = 3.4 m, to simplify tuning operations. Also, copper blocks 800 mm long were already available, so an additional target was to fit the total length of the RFQ into four segments, instead of five, reducing the number of inter-segment joints and the associated risks. A final length limit of 3.1 m was selected.
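The 4λ bound quoted above follows directly from the operating frequency; a quick numerical check (a sketch, not project code):

c = 299_792_458.0          # speed of light, m/s
f = 352.2e6                # operating frequency, Hz
lam = c / f                # ~0.851 m
print(lam, 4 * lam)        # 4*lambda ~ 3.40 m, hence the 3.1 m design target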
Another optimization target was the Kilpatrick factor (the ratio of the designed maximum surface electric field to the theoretical maximum for copper at the operating frequency), which was required to stay below 1.85.
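For reference, the limiting field itself can be estimated from the commonly used empirical Kilpatrick criterion f[MHz] = 1.64 E² exp(−8.5/E), with E in MV/m. The paper only quotes the ratio, so the following sketch is illustrative (SciPy's root finder is used for convenience):

import math
from scipy.optimize import brentq

def kilpatrick_field(f_mhz: float) -> float:
    """Solve the empirical Kilpatrick criterion f = 1.64 * E^2 * exp(-8.5/E)
    (f in MHz, E in MV/m) for the limiting surface field E."""
    return brentq(lambda e: 1.64 * e**2 * math.exp(-8.5 / e) - f_mhz, 1.0, 100.0)

e_k = kilpatrick_field(352.2)   # ~18.4 MV/m at 352.2 MHz
print(e_k, 1.85 * e_k)          # design keeps peak surface fields below 1.85 * e_k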
Modulation optimization process
The optimization procedure starts by obtaining a modulation geometry, shorter than the maximum length allowed, that fulfils the beam dynamics criteria (transmission and beam losses). The main tool for this task is a modified version of the RFQSIM code that uses a 2-term voltage expansion of the vane geometry. Electromagnetic calculations of the surface electric field and of the RF electromagnetic field for particle tracking were then performed. These computations used the actual geometry of the vane tips in FEM codes. The initial parameters were then modified according to the FEM results, and a new optimization run started. This procedure continues until an optimal solution is obtained.
The input parameters that define a particular modulation are (using the standard parameter naming convention [14]): the synchronous phase and particle energy at the end of each section (Shaper (φsh, Wsh), Gentle Buncher (φgb, Wgb) and Accelerator (φf, Wf)), the clear aperture (agb) and modulation (mgb) parameters, and the input and output matcher section radii.
In order to obtain a short modulation, the range of parameters explored during optimization was oriented towards this target, with preference over other goals such as maximizing the transmission or maintaining a conservative Kilpatrick factor. In this sense, we concentrated the parameter space search on areas with lower clear aperture agb and energy Wgb, and less negative synchronous phase φgb (all referred to the end of the Gentle Buncher). Particle tracking results obtained with RFQSIM presented transmissions below 90% for modulations found in this region of the parameter space; however, later simulations with other codes presented much better results. This strategy also produced higher surface fields, with a peak at the end of the Shaper typically above 1.8 times the Kilpatrick limit. Also, a progressive reduction of the aperture in the Acceleration section was implemented in order to shorten the modulation. RFQSIM originally built the 2-term based Acceleration section using a constant aperture and modulation factor, following the rules proposed by Kapchinskiy-Tepliakov [12]. This produces slowly decaying fields along z, due to the increasing cell length. We modified the part of RFQSIM that creates the 2-term modulation to incorporate custom aperture reduction strategies in the Acceleration section, as well as the ability to set a target total length for the modulation.
The evolution of the main parameters as a function of cell number is shown in Fig. 2, while the final modulation shape and the corresponding accelerating field are shown in Fig. 3.
Concerning the high energy end of the modulation, it has been designed with the aim of optimizing the transmission through the designed MEBT and DTL. After the last regular cell of the accelerator section, a transition cell is included so both X and Y vanes end with the same aperture. A circular output matcher ends the modulation. The transition cell has a length of 17 mm and the output matcher has a radius of 14 mm. The last cells are shown in Fig. 4.
Surface field calculations
Although RFQ design software packages provide a calculation of the Kilpatrick factor (surface electric field), we preferred to perform these calculations in an external finite element package, in order to gain control and accuracy over the process. The modulation description provided by RFQSIM consists of the modulation amplitude at the beginning and end of each cell (of length βλ/2). The actual shape of the modulation spine is built cell by cell by performing the 2-term interpolation in a Matlab script. The output of this script is a pair of curves Vx(z) and Vy(z) for the horizontal and vertical vanes, respectively. The vane region of the cell 3D geometry is then built as a parametric surface for each vane in the COMSOL Multiphysics FEM software. For example, the profile of a horizontal vane cell between z coordinates z0 and z1 is defined as a parametric surface dependent on the parameters u ∈ [0, 1] and v ∈ [−π/2, π/2] as defined in Eq. 1. Vertical vanes are defined in a similar way.
The surface field is computed by running an electrostatics simulation. To avoid border effects the model is built for three consecutive cells, but only the results for the central one are considered. The process is fully automated using Matlab/COMSOL scripts. The result for a particular modulation is the curve Es,max(z), i.e. the maximum value of the surface field in the cell starting at coordinate z. The overall maximum of this curve is taken as the figure of merit characterizing the particular modulation. Using this approach we managed to scan a large and fine set of parameter ranges in an automatic, brute-force approach. An example of such an electrostatics model is shown in Fig. 5.
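Eq. 1 itself is not reproduced here; the following is a hypothetical sketch of how a parametric vane-tip surface of the kind described above could be generated for one cell, assuming a circular transverse tip of radius ρ whose apex follows the interpolated profile Vx(z). The exact parameterization, the function names, and the mesh sizes are assumptions for illustration only:

import numpy as np

def horizontal_vane_cell(Vx, z0, z1, rho, nu=50, nv=40):
    """Hypothetical parametric tip surface for one horizontal vane cell.

    Vx  : vectorized callable giving the tip-to-axis distance at coordinate z
    rho : transverse tip radius; the apex sits at x = Vx(z), the tip circle
          is centred at x = Vx(z) + rho.
    Returns X, Y, Z arrays of shape (nu, nv).
    """
    u = np.linspace(0.0, 1.0, nu)                 # along the cell
    v = np.linspace(-np.pi / 2, np.pi / 2, nv)    # around the tip
    U, V = np.meshgrid(u, v, indexing="ij")
    Z = z0 + U * (z1 - z0)
    X = Vx(Z) + rho * (1.0 - np.cos(V))           # circular transverse profile
    Y = rho * np.sin(V)
    return X, Y, Z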
The results obtained for the final modulation are shown in Fig. 6. The peak that can be seen at around z = 0.5 m is caused by the transition between Shaper and Gentle Buncher sections, while the gradual increase from z = 1.3 m is a consequence of designing for higher accelerating fields in order to reduce modulation length.
Surface field calculations have also been used to determine the shape and position of the gaps between segments. The modulation is described as a continuous function from the beginning of the input matcher to the end of the output matcher, but the segmentation of the RFQ structure forces cuts in the modulation at the z coordinates corresponding to the end of each segment. In order to reduce the perturbation that these cuts cause on the electric field, extensive simulation studies were made [8]. In these studies the position of the end and beginning of the segments was considered between the cell-end position and the Lloyd position [15]. In the first case the longitudinal accelerating field is zero for all particles in the gap, but not the transverse components; in the second case, the field vanishes when the bunch center crosses the gap. The rounding of the vane tips at the cut was also studied, finally selecting an elliptical shape instead of a circular one. These studies included surface field calculations as well as beam dynamics simulations. Considering the figures of merit and the segment mechanical length, the conclusion was to use gaps 200 µm long placed at the end-cell position; the ends of the vanes are rounded with an elliptical shape with long semi-axes of 2 mm and short semi-axes of 0.75 mm (see Fig. 7).
Beam dynamics results
From the modulation description obtained with the optimization procedure, the full-vane 3D geometry of the RFQ was built in the COMSOL Multiphysics software. Electrostatic simulations were run and the electric field was exported to an external file. Particle tracking analysis using the GPT [16] code was then run with the precise field map as input. Several optimization runs were made, exploring different regions of the parameter space until the final modulation was selected. Results were then cross-checked using different codes (Toutatis [17], PARMTEQ / RFQGen). Additional characteristics of the modulation can also be found in Table 1.
Beam dynamics simulations have been performed using different codes for comparison. All the simulations were performed using the same input beam characteristics: 45 keV input beam energy, 60 mA beam current, and 0.25 π mm mrad transverse emittance. The Courant-Snyder (C-S [14]) parameters of the input beam varied slightly from code to code, depending on the optimal match calculated in each case, with values of alpha typically a little above 1 and beta about 0.03 m/rad.
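As a minimal sketch of how a transverse input distribution matched to these Courant-Snyder parameters can be generated, the fragment below samples a Gaussian phase-space distribution from the Twiss covariance matrix. Gaussian sampling, the rms interpretation of the 0.25 π mm mrad emittance, and all names are assumptions for illustration; the actual codes use their own distribution generators:

import numpy as np

def matched_gaussian_beam(alpha, beta, eps_rms, n=10_000, rng=None):
    """Sample (x, x') pairs from a Gaussian matched to Twiss alpha, beta
    and rms emittance eps_rms (beta in m/rad, eps_rms in m*rad)."""
    rng = np.random.default_rng() if rng is None else rng
    gamma = (1.0 + alpha**2) / beta
    cov = eps_rms * np.array([[beta, -alpha],
                              [-alpha, gamma]])
    return rng.multivariate_normal([0.0, 0.0], cov, size=n)

# Roughly the values quoted above: alpha ~ 1, beta ~ 0.03 m/rad,
# emittance taken here as 0.25e-6 m rad rms (an assumption).
beam = matched_gaussian_beam(1.0, 0.03, 0.25e-6)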
The codes used for the comparison of results were: GPT + COMSOL, RFQSIM, TOUTATIS, and RFQGen (an improved fork of PARMTEQ). The implementation of our design in RFQGen is not completely accurate, since at the time this code did not include an option to import a cell-by-cell description of the modulation; in consequence, the current RFQGen results must be considered approximate. Table 2 presents the main results of the particle tracking simulations. The results obtained with RFQSIM show a poorer transmission than the other three codes, mainly due to a predicted transverse loss of almost 10% of the beam. This is due to the method of field calculation in each code [18]. The other three codes represent the actual transmission of the RFQ more accurately, given that TOUTATIS and RFQGen (PARMTEQ) are the most widely used codes in the field of RFQ beam dynamics, and that the field maps provided by COMSOL come from a finite element simulation with an accurate representation of the 3D vane geometry, with the tracking performed by GPT, a code extensively used for beam dynamics simulations of other accelerator elements. Fig. 8 shows the transverse losses (due to particles impacting the vanes) along the RFQ, depicting the beam power lost per centimeter of length. The curves are very similar in the Shaper and Gentle Buncher regions (first 1.2 m), with slightly higher power loss predicted by GPT + COMSOL in the Acceleration section. The total power lost (the integral of the curves) remains within reasonable values in both cases, since most of the lost particles impact the vanes with energies in the hundreds of keV.
In summary, the results presented above prove the validity of the proposed vane modulation in terms of beam dynamics, as tested with different particle tracking and field map calculating methods. Although some of the new design constraints (especially the ∼20 % length reduction) were initially expected to reflect negatively on the beam transport performance, this is only apparent in the RFQSIM results, with the other three codes presenting transmissions of ∼94 %.
Local frequency profile
The vane modulation changes the local capacitance per unit length of the ideal quadrupole resonator. As a consequence there is a perturbation of the voltage profile V(z), which is no longer constant. This perturbation can be modeled by considering a local resonant frequency computed in a cell-by-cell approach. For a modulation cell located between zi and zi+1, the local frequency fq(zi) is the frequency of the resonator built from the slice of RFQ volume between zi and zi+1, with perpendicular-magnetic-field boundary conditions at both ends and perfect electric boundary conditions on the rest of the surfaces. This frequency profile should be as flat as possible, because the voltage profile in the real cavity will depend on this perturbation [14]. For the first (2013) design of the ESS-Bilbao RFQ a sinusoidal (instead of 2-term) vane profile was used, which resulted in a very perturbed frequency profile [8]. For the final design a 2-term modulation shape was used (meaning that the geometry mimics the profile of an expansion of the voltage truncated to two terms), which results in the rather flat profile shown in Fig. 6. The deviation of the local frequency around the 2D ideal design frequency is of the order of 0.1 MHz, whereas with the sinusoidal shape it was of several MHz (see the TDR [8] for details).
Cavity cross-section design
The cross section of the cavity is based on the circular lobe approach used by ISIS-FETS [4], but using straight (vertical or horizontal) segments in the vanes, to aid the alignment verification operations later on. The machining of the rounded shape of the lobes was tested by fabricating different models in copper and aluminum [19]. The cross section is uniform for all the length of the RFQ. Although a variable lobe diameter along length is common in the current state-of-the-art, the uniform approach was selected as a conservative decision concerning mechanical engineering. For future developments, a variable approach would be probably preferred.
The optimization of the geometry was made by parametric simulations. The optimum values were selected by minimization of the power losses (or maximization of the quality factor, Q 0 ). The range of parameters available during the optimization procedure was restricted by mechanical constraints, like the total width of copper blocks, cooling channels diameter and corresponding wall thickness. In the optimization process, one of the parameters was always free, so it could be adjusted later to obtain the right frequency. The design frequency for the 2D models was chosen to be 348.6 MHz (several MHz below operating frequency) to avoid problems due to machining. The frequency will be raised to the operational one by means of plunger tuners. The average modulation aperture (3.438 mm) was used as vane tip aperture in the 2D models. A sketch of the cross-section is shown in Fig. 9, while in Fig. 10 electric and magnetic field maps for whole cross section in the quadrupolar mode are shown.
3D cavity design
The 3D body of the cavity is constructed by extruding the cross-section, adding the vane modulation solids, the input and output regions and other details like tuner and vacuum ports. Again, the shape has been optimized by finite element simulations aiming at reducing power deposition. An overall longitudinal cross section is shown in Fig. 11. The whole structure is an assembly of the four segments. Each segment has its own set of channels for cooling of the vanes and for cooling of the vacuum grid region.
The input and output undercut sections were designed so that they have the same frequency as the bulk of the RFQ, contributing in this way to maintaining the field flatness. The optimization looked for the set of parameters that reduced power losses and kept the electric voltage flat. A schematic of the input section with the optimization parameters is shown in Fig. 12. Parameter values can be found in Table 3.
The field flatness was verified by 3D simulations of the whole length of the RFQ (not only the input or output undercut regions), and by mathematical transmission line models of the structure. One of the parameters is the distance between the vane tips and input (or output) lid (see Fig. 12). This distance can be easily modified after fabrication and testing of the RFQ, as it only involves changes in the lid, and not in the main body of the cavity. The vacuum port grid also changes slightly the local frequency of the cavity. In order to compensate this detuning, the solid part of the grid penetrates the cavity as ridges, that have been also designed to provide the right frequency. These ridges are cooled by the vacuum port cooling channels. The ridges have been rounded to facilitate the machining process.
3D ELECTROMAGNETIC SIMULATIONS AND TUNING
Electromagnetic FEM simulations of the whole length of the RFQ, including the details of the modulation and undercuts, have been performed to verify the design. These simulations require considerable resources, both in the preparation of the models and in computation time. For this reason, models with simplified geometries were used whenever possible.
The simulations have been solved efficiently using the COMSOL Multiphysics eigenvalue solver. A more detailed description can be found elsewhere [20]. The vane tip region must be meshed very finely in order to capture all the details of the modulation. The geometric construction of the vanes themselves is challenging, and we could not do it reliably with the CAD packages available to us, so a home-made software tool based on the OpenCascade 3D technology [21] was coded and used for this. The tool allows importing the file describing the modulation for the whole length with a resolution of 50 points per cell. The vane modulation CAD files produced this way were also used for the final solids supplied to the manufacturer.
The electrostatics (ES) simulations with constant voltage boundary conditions were used during the design of the RFQ modulation. ES simulations were also used to scale the electromagnetic fields obtained in an eigenvalue simulation (which has no power input reference). This procedure is detailed in [20]. An example of the results of the 3D simulations is shown in Fig. 13, where the power losses computed from the fields on the surface, adequately scaled to an inter-vane voltage of 85 kV, are depicted. A mathematical model based on a transmission line description of the RFQ cavity [14] has also been developed. The model has been adjusted using the FEM calculation results and implemented in a computer tool to assist in the tuning operations. A comparison between the inter-vane voltage computed with the transmission line model and extracted from the FEM models is shown in Fig. 14.

Table 4. Frequency spectrum of the RFQ (in MHz), computed from FEM simulations of the whole structure.

Mode   Q          D1         D2
0      351.9651   339.9509   339.9814
1      355.2243   346.2343   346.309
2      364.8562   356.3018   356.423
Mode spectrum
From the 3D computer simulations of the whole length, the frequency spectrum of the RFQ cavity can be obtained. Previously, this was estimated by extrapolation from simulations of the first segment only. The computed mode frequencies are grouped in Table 4. The modes near the first quadrupolar one are well separated in frequency, which will contribute to an easy tuning and stabilization of the RFQ.
Static tuning
Static tuning of the RFQ will be provided by a set of plunger tuners. For each 800 mm segment there are 4 sets of 4 tuner ports, so a maximum of 64 tuners could be installed along the whole length (two ports will be used by the power couplers). The static tuners will increase the cavity frequency from the 348.6 MHz of the design to a value close to the operational frequency of 352.21 MHz. The voltage profile (field flatness) should be kept as uniform as possible, and this will probably require that the tuners penetrate the cavity in a non-uniform way. The combination of the FEM and mathematical models will be used for this once the actual voltage profile is measured in the final cavity. An automatic procedure will calculate the tuner penetrations that correct the measured profile and result in a flat one. This has been implemented in a software tool and tested with numerical examples, as shown in Fig. 15.
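A hedged sketch of the kind of correction procedure just described: assuming a linear response matrix relating each tuner's penetration to the local voltage (or frequency) perturbation, obtained from simulation or bead-pull measurements, the penetrations that flatten the profile can be found by least squares. The response matrix, names, and limits below are placeholders, not the tool's actual interface:

import numpy as np

def flattening_penetrations(response, profile_error, max_pen_mm=15.0):
    """Least-squares tuner settings that cancel a measured profile error.

    response      : (n_samples, n_tuners) sensitivity matrix, e.g. relative
                    voltage change per mm of penetration at each sample point
    profile_error : (n_samples,) measured deviation from a flat profile
    """
    pen, *_ = np.linalg.lstsq(response, -profile_error, rcond=None)
    return np.clip(pen, 0.0, max_pen_mm)   # plungers can only move inward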
COOLING DESIGN AND DYNAMIC TUNING
The RFQ is water cooled. The cooling removes the excess heat and also is used to fine tune the RFQ cavity during operation by controlling the thermal expansion driven frequency changes. For each segment, there are cooling channels near the vanes and also in the vacuum grid area.
Heat load
The heat load, coming from the RF standing wave excited in the RFQ cavity and deposited on the cavity walls, is computed from the electromagnetic fields at the surface [14,20], using a copper surface resistance of Rs = 0.0052 Ω. The eigenfrequency problem is solved and the fields are scaled as previously described. A power loss of about 17 kW is estimated at 5% duty cycle.
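The quoted surface resistance is consistent with the standard skin-effect estimate Rs = sqrt(π f μ0 / σ) for copper; with ideal room-temperature conductivity the value comes out slightly below 0.0052 Ω, so the design figure includes some margin. A quick sketch:

import math

mu0 = 4e-7 * math.pi    # vacuum permeability, H/m
sigma_cu = 5.8e7        # copper conductivity, S/m (ideal, room temperature)
f = 352.2e6             # operating frequency, Hz

Rs = math.sqrt(math.pi * f * mu0 / sigma_cu)
print(Rs)               # ~0.0049 Ohm; the design assumes 0.0052 Ohm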
Cooling channels and thermo-mechanical results
There are two types of cooling channels in the structure. Longitudinal channels run along the vane tips (with inlets and outlets on the external surface), while transversal channels cool the vacuum pump grid region. These channels can be seen in Fig. 16. Thermo-mechanical simulations have been done using a simplified geometry, in order to increase the speed, particularly for transient simulations. The simulations are also done with COMSOL Multiphysics, taking the power loss density maps as input heat flux on the internal surfaces. Coupled heat transfer, CFD, and thermo-mechanical models of the cavity are then solved. The deformation of the vacuum region of the cavity allows the resonant frequency to be recomputed. In this way, the detuning caused by the thermo-mechanical deformations can be studied under any condition: steady state or transient simulations, with different RF duty cycles as power input and different cooling water input temperatures and flows.
An example of the temperature distribution in the solid and the mechanical deformation can be seen in Fig. 16. For a duty cycle of 5% and the input water temperature fixed at 25 °C, the temperature of the copper stays below 31 °C. In Fig. 17 the dependence of the cavity frequency (first segment) on the cooling water temperature is shown for steady state simulations. The detuning is very small (about 19 kHz/°C), due to the thickness of the walls. A similar detuning of 9.3 kHz per percentage point of duty cycle is obtained by changing the duty cycle. These results point out the stability of the operation when an adequate water flow is used. Transient conditions have also been studied in this way, in order to study the fine control of the frequency during operation of the cavity. As an example of these calculations, Fig. 18 shows the dynamic change of frequency during a power-on step (and the opposite power-off step). These results are used to tune a control model of the cavity frequency, in which a mixture of cold and warm water can easily stabilize the frequency of the RFQ. These results are beyond the scope of this paper and will be duly presented elsewhere. Additionally, movable plunger tuners can be used at certain locations if needed. One plunger tuner per quadrant would provide a tuning range of 20 kHz/mm, enough for tuning the cavity in an LLRF loop. This solution is a back-up plan.
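As an illustration of how these sensitivities combine in operation, the sketch below estimates the water temperature change that would compensate a duty-cycle step in a purely linear model. The coefficients are the ones quoted above, but their signs (heating lowers the frequency) are an assumption here, not stated explicitly in the text:

k_water = -19.0   # kHz per degC of cooling water (sign assumed)
k_duty = -9.3     # kHz per percentage point of duty cycle (sign assumed)

def compensate_duty_step(delta_duty_percent: float) -> float:
    """Water temperature change (degC) that cancels the detuning caused
    by a duty-cycle step, assuming linear behaviour around the set point."""
    delta_f = k_duty * delta_duty_percent
    return -delta_f / k_water

print(compensate_duty_step(5.0))   # switching on 5% duty: lower the water by ~2.4 degC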
FABRICATION
The fabrication of the first segment of the ESS-Bilbao RFQ started in 2016. As mentioned before, the RFQ is a 4-vane structure. It has a total length of about 3.1 meters, divided into 4 segments of about 800 mm length each. The segment length is determined by the machining equipment available for the fabrication of the vanes. Each segment is an assembly of four elements, 2 major and 2 minor vanes, joined together using polymeric vacuum gaskets instead of brazing or any other welding system. The material is oxygen-free copper, grade C10100.
Mechanical model
Starting from the electromagnetic design of the RFQ cavity, a CAD model of the structure was built. The ports for tuners, the vacuum grid, and all other mechanical features were implemented in the model. RF and thermo-mechanical simulations were then run again to validate the design. As described in a previous section, the vane modulation solids were built using a home-made CAD tool to avoid certain issues with the interpolation of the modulation curve [22].
Raw material
The material for the fabrication of the RFQ was supplied in blocks of two different sizes: 270 × 140 mm for the major vanes and 115 × 140 mm for the minor ones. The length of all blocks is 830 mm. The copper grade selected is Cu-OFE C10100.
Squaring and deep drilling
The first step in the fabrication process is to evenly square the copper blocks, to assure that each face is parallel to the opposing one and perpendicular to the others. Marks to fix the position of the cooling channels are machined in the corresponding faces.
A first rough machining is done, leaving an excess of about 2 mm, followed by a stress-relief annealing. The deep drilling of the longitudinal cooling channels is then done. This is performed during the first steps of the fabrication process, and the channels then serve as a reference for the rest of the geometry. In this way, the effect of possible deformation due to the machining is minimized, as the major machining is done afterwards. A model showing the channels is shown in Fig. 19, and Fig. 20 shows a detail of the first fabricated vane after this stage.
At this stage, the two channels are connected and open in the back side (higher energy) of the RFQ segment. They will be sealed in a later step.
EBW of the cooling channels plugs
Cooling channels are then sealed by inserting copper plugs and welding them. This is carried out by our staff at ESS-Bilbao's Advanced Welding Facility by means of electron beam welding (EBW). Three plugs are inserted and welded, one to separate the two channels and the other two to seal them from the external face (Fig. 21). In the final steps of the process this face will be machined and no external evidence of the welding of the plugs will be visible.
Fine machining
The last fabrication step for a vane is the fine machining, where all the final details are included. Particularly, the vane modulation needs a careful process in a temperature controlled machine to avoid over-heating that could give rise to deformations. Fig. 22 shows a major vane in the milling machine.
The milling of the modulation is done using a CAM-controlled, 3-axis HERMLE C800V machining center. Many passes of the tool over the copper surface are used, removing a very thin layer at each step. This increases the milling time but also provides excellent surface quality (roughness 0.8 Ra) and mechanical tolerances of around 0.005 mm.
Metrology
The final step is the metrology of the fabricated piece (Fig. 23). Special care is taken in the measurement of the vane modulation profile. After the inspection of the first major vane, a deviation of the measured modulation with respect to the designed one was detected, despite the attention paid to the machining process. To correct this, a second machining of the modulation profile was performed, lowering the height from the bottom face by 100 µm. The contact faces between the major vanes and the two minor vanes were also machined, removing the same amount of material, so that the four vane tips have the correct relative positions after assembly.
Apart from this issue, the metrology results of the first vane of the first segment are very satisfactory (Fig. 24).
Assembly and vacuum strategy
The vacuum strategy for the cavity is based on the use of polymeric vacuum gaskets at the unions between major and minor vanes, and also on the contact faces between the assembly of four vanes and the cover or inter-segment ring. RF contact seals (so-called finger strips) are used for all contacts between surfaces. Once the four vanes are assembled (as in Fig. 1) and the alignment verified, the groove for the O-ring on the front side will be machined. This approach allows assembling or disassembling the RFQ in case of misalignment or other problems. The strategy will be thoroughly tested with the first segment, in terms of vacuum levels and other issues. If the results are satisfactory, the same procedure will be used for the rest of the segments; otherwise, the strategy will be revisited and brazing of the vanes will be considered. The results of these tests (expected for the last quarter of 2019) will be presented in future UCANS meetings.
CONCLUSIONS
ESS-Bilbao RFQ design has been summarized, including details on the beam dynamics, cavity electromagnetic design, cooling and thermo-mechanical studies. Fabrication of the first segment of the RFQ is ongoing. The whole segment will be received, and tests performed on it, before the end of 2019.
Interleukin-32α inactivates JAK2/STAT3 signaling and reverses interleukin-6-induced epithelial–mesenchymal transition, invasion, and metastasis in pancreatic cancer cells
Interleukin (IL)-32 is a newly discovered cytokine that has multifaceted roles in inflammatory bowel disease, cancer, and autoimmune diseases and participates in cell apoptosis, cancer cell growth inhibition, accentuation of inflammation, and angiogenesis. Here, we investigated the potential effects of IL-32α on epithelial–mesenchymal transition, metastasis, and invasion, and the JAK2/STAT3 signaling pathway in pancreatic cancer cells. The human pancreatic cancer cell lines PANC-1 and SW1990 were used. Epithelial–mesenchymal transition-related markers, including E-cadherin, N-cadherin, Vimentin, Snail, and Zeb1, as well as extracellular matrix metalloproteinases (MMPs), including MMP2, MMP7, and MMP9, were detected by immunofluorescence, Western blotting, and real-time polymerase chain reaction. The activation of JAK2/STAT3 signaling proteins was detected by Western blotting. Wound healing assays, real-time polymerase chain reaction, and Western blotting were performed to assess cell migration and invasion. The effects of IL-32α on the IL-6-induced activation of JAK2/STAT3 were also evaluated. In vitro, we found that IL-32α inhibits the expressions of the related markers N-cadherin, Vimentin, Snail, and Zeb1, as well as JAK2/STAT3 proteins, in a dose-dependent manner in pancreatic cancer cell lines. Furthermore, E-cadherin expression was increased significantly after IL-32α treatment. IL-32α downregulated the expression of MMPs, including MMP2, MMP7, and MMP9, and decreased wound healing in pancreatic cancer cells. These consistent changes were also found in IL-6-induced pancreatic cancer cells following IL-32α treatment. This study showed that reversion of epithelial–mesenchymal transition, inhibition of invasiveness and metastasis, and activation of the JAK2/STAT3 signaling pathway could be achieved through the application of exogenous IL-32α.
Introduction
Pancreatic cancer, a common malignant neoplasm of the digestive system, is an extremely aggressive tumor that is characterized by locally advanced or metastatic disease at diagnosis. Its 5-year survival rate is 5%.1 Only ~10% to 20% of patients undergo resection, and even among them, the majority (~80%) have a median survival of about 2 years after surgery.2 At present, researchers are devoting great attention to identifying biologically targeted therapies and chemotherapeutic drugs to develop a comprehensive treatment for pancreatic tumors.
Interleukin (IL)-32 was previously described as natural killer transcript 4.3,4 This cytokine is primarily elevated in activated T lymphocytes, natural killer cells, and epithelial cells. The gene encoding IL-32 is located on human chromosome 16p13.3 and comprises a 705 bp coding sequence that is organized into eight exons. There are six major splice variants (IL-32α, IL-32β, IL-32γ, IL-32δ, IL-32ε, and IL-32ζ), and a further particular subtype has been reported; IL-32α is the most common transcript.5-9 This protein has various roles in inflammation, cancer, and autoimmune diseases and participates in cell apoptosis, cancer cell growth inhibition, accentuation of inflammation, and angiogenesis. Several studies have indicated that IL-32 is a typical pro-inflammatory cytokine that enhances the secretion of IL-1β, tumor necrosis factor-α, IL-6, and IL-8 through the p38 mitogen-activated protein kinase, nuclear factor-κB, and JNK signal transduction pathways.10,11 However, the role of IL-32α in pancreatic cancer invasion and metastasis has not yet been elucidated.
Metastasis of pancreatic tumor cells is associated with epithelial-mesenchymal transition (EMT), through which epithelial cells obtain new mesenchymal features. EMT is a key biological process in cancer progression by which incipient tumor cells lose their apical-basal polarity, dissolve cell-cell junctions, gain invasive and migratory properties, and increase in drug resistance, resulting in escape from the preinvasive neoplasm, invasion to the edge of the normal tissue, and migration to distant areas.12,13 Previous studies have shown that EMT is an essential process in invasion and metastasis in various human epithelial carcinomas, including pancreatic cancer. The loss of E-cadherin expression and the overexpression of many mesenchymal markers, such as Vimentin, N-cadherin, Snail, Slug, Twist, Zeb1, and Zeb2, are generally regarded as markers of the EMT process.14,15 It is well known that tumor invasion and metastasis occur through an intricate and multistep process. Dissolving the extracellular matrix is also a key sign of invasion and metastasis in pancreatic cancer. Extracellular matrix metalloproteinases (MMPs) can degrade the cell basal lamina and extracellular matrix, and these enzymes maintain a balance with tissue inhibitors of metalloproteinases, which play vital roles in the invasion and metastasis of malignant tumors.16,17 Signal transducer and activator of transcription 3 (STAT3) plays an important role in EMT. STAT3, a potential therapeutic target in pancreatic cancer, is activated by the phosphorylation of a conserved tyrosine residue. Two STAT3 monomers form a homodimer (p-STAT3) through reciprocal phosphotyrosine-SH2 domain interactions. The dimer translocates into the nucleus and binds to response elements, thus regulating the transcription of target genes and modulating fundamental cellular processes, such as apoptosis, invasion, and metastasis.18,19 Several extracellular signals, including some cytokines and growth factors, can trigger the JAK/STAT3 signaling pathway and induce a cascade of biological processes.20,21 IL-6 secretion increases remarkably in the pancreatic tumor microenvironment, and this increase is significantly related to the invasion and metastasis of pancreatic cancer.22 Moreover, JAK/STAT3 signaling is constitutively activated by IL-6 and frequently observed in various human cancers, including pancreatic cancer.23-25 In this context, we attempted to assess the function of IL-32α in pancreatic cancer. Our results show that IL-32α can inactivate JAK2/STAT3 signaling, with the additional effect of reversing IL-6-induced EMT, invasion, and metastasis in pancreatic cancer cells.
Cell lines and culture conditions
The human pancreatic cancer cell lines PANC-1 and SW1990 were obtained from the Cell Bank of the Chinese Academy of Sciences (Shanghai, People's Republic of China). The PANC-1 cell line was cultured in DMEM, and the SW1990 cell line was maintained in RPMI-1640 supplemented with 10% fetal calf serum and containing 100 U/mL penicillin and 100 µg/mL streptomycin. Cancer cells were cultured in a humidified incubator with 5% CO2 at 37°C. No ethical approval was required for this set of experiments because they were performed on commercially available cell lines, and the ethical committee of the First Affiliated Hospital of Wenzhou Medical University deems approval unnecessary for such studies.
Quantification by real-time PCR
Total RNA was isolated from pancreatic cancer cells according to the manufacturer's instructions for the TRIzol Reagent. Subsequently, a spectrophotometer was used to determine the purity and concentration of the RNA. Single-stranded cDNA was then synthesized using a One-Step RT-PCR (reverse transcription-polymerase chain reaction) kit (Thermo Fisher Scientific, Waltham, MA, USA). The amplified cDNAs were obtained by PCR using an ABI Prism 7500 real-time system (Applied Biosystems). The RT-PCR thermal cycler protocol consisted of initial denaturation at 95°C for 10 minutes, denaturation at 95°C for 15 seconds, and annealing and extension at 62°C for 60 seconds. The primer sequences are listed in Table 1.
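The relative-quantification formula is not stated in the text; assuming the commonly used 2^-ΔΔCt method with a housekeeping gene such as GAPDH as reference (an assumption here, consistent with the Western blotting loading control), a minimal sketch of the calculation is:

def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative mRNA expression by the 2^-ddCt method (assumed, not stated
    in the paper). ct_ref_* are the reference-gene Ct values, e.g. GAPDH."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Illustrative Ct values only, not measured data.
print(fold_change_ddct(24.1, 17.0, 22.6, 17.1))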
Western blotting analysis
Pancreatic cancer cells were collected and lysed in RIPA Lysis Buffer (Beyotime, Shanghai, People's Republic of China) after treatment. Subsequently, protein concentrations were determined using a protein assay kit (Beyotime). Lysates containing 50 µg of protein were dissolved in loading buffer with SDS and heated for 5 minutes at 100°C. Next, the sample was separated by 8%-12% SDS-PAGE at 55 V for 30 minutes and 110 V for 90 minutes. Then, the protein was transferred onto PVDF membranes by wet blotting. The PVDF membranes were incubated in Tris-Buffered Saline with Tween 20 (TBS-T) buffer containing 5% defatted milk and TBS-T at room temperature for 1.5 hours, after which the membranes were immersed in solutions containing the primary antibodies, including anti-p-JAK2 and anti-JAK2 (1:1,000 dilution), anti-p-STAT3, and anti-STAT3 (1:2,000 dilution), anti-MMP2 and anti-MMP9 (1:1,000 dilution), anti-E-cadherin, anti-N-cadherin, anti-Vimentin, anti-Snail, and anti-ZEB1 (1:1,000 dilution), and anti-GAPDH (1:1,000 dilution) at 4°C overnight.
After incubation with primary antibodies, the membrane was washed three times with TBS-T for 5 minutes each time. The membranes were probed with a horseradish peroxidase-conjugated rabbit IgG or mouse IgG secondary antibody (1:5,000 dilution) for 1.5 hours at room temperature. After washing with TBS-T, the signal was detected using Amersham TM ECL TM Prime, and the expression levels of the specific proteins were quantified and captured using Image Quant TM 400.
Immunofluorescence staining
The pancreatic cancer cell lines PANC-1 and SW1990 were maintained on cover glasses in an incubator for 24 hours according to a previously described method. Then, the cells were washed with phosphate-buffered saline (PBS) and fixed in 4% paraformaldehyde for 15 minutes at 37°C. Cell membranes were perforated using PBS containing 0.3% Triton X-100 for 10 minutes. After the cells were saturated with PBS containing 10% goat serum (Beyotime) for 1.5 hours, they were incubated with anti-N-cadherin (1:200 dilution) and anti-Vimentin (1:200 dilution) in PBS overnight. After that, the cells were washed three times with PBS at room temperature, followed by incubation with secondary antibody conjugated with AlexaFluor 488 to detect Vimentin and N-cadherin. The samples were washed with PBS three times, and the cells nuclei were stained with 4′,6-diamidino-2-phenylindole (Beyotime). Finally, the glass slides were photographed using an automated upright microscope system (Leica, DM4000B Leica Microsystems, Wetzlar, Germany).
Wound healing assay
To assess cell motility, confluent cells were seeded in 6 cm culture dishes, and after the cells had grown to 80%-90% confluence, a linear scratch wound of ~500 µm width was created using a sterile p200 pipette tip. The cells were washed twice with PBS and incubated for 24 hours in serum-free DMEM (for PANC-1) or serum-free RPMI-1640 (for SW1990) containing 25 µg/mL IL-32α and/or 100 ng/mL IL-6. Images of cell migration were captured microscopically at 0 and 24 hours after wounding, and the wound area was measured using Image Pro Plus. The migration rate was calculated as (wound area/wound height at 0 hours - wound area/wound height at 24 hours)/24.
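A direct transcription of the migration-rate formula above into a small helper; units follow whatever the image analysis reports, and the 24-hour interval is the one used in the assay:

def migration_rate(area_0h, height_0h, area_24h, height_24h, hours=24.0):
    """Migration rate = (wound width at 0 h - wound width at 24 h) / time,
    where width is approximated as wound area / wound height."""
    width_0 = area_0h / height_0h
    width_24 = area_24h / height_24h
    return (width_0 - width_24) / hours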
Transwell invasion assay
The invasive abilities of PANC-1 and SW1990 were investigated using an invasion chamber (BD Biosciences, San Jose, CA, USA) fitted with an insert consisting of a polyethylene terephthalate membrane of 8 µm pore size coated with basement membrane matrix (Corning Incorporated, Corning, NY, USA). Culture medium containing 10% fetal bovine serum (FBS) was placed in the lower compartment as a chemoattractant. After treatment with IL-32α (25 µg/mL) and/or IL-6 (100 ng/mL), 10^5 pancreatic cancer cells (PANC-1 or SW1990), suspended in 0.2 mL of RPMI-1640 (for SW1990) or DMEM (for PANC-1) with 0.2% bovine serum albumin, were placed in the upper compartment and incubated at 37°C for 24 hours. Then, the cells on the upper surface of the filter were removed by scraping. Subsequently, the filters were washed with PBS three times and fixed for 20 minutes with 4% paraformaldehyde. Afterward, the cells in the lower compartment were stained with crystal violet. Finally, the cells that had invaded across the basement membrane to the lower compartment of the filter were counted under an automated upright microscope system (Leica, DM4000B).
Statistical analysis
All experiments were repeated three times, and data are expressed as the mean ± standard deviation. Student's t-test was used to assess the statistical significance of differences between groups. Differences with P<0.05 were considered statistically significant. All analyses were performed using SPSS 19.0 software (IBM Corporation, Armonk, NY, USA).
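A minimal sketch of the comparison described above, two-sided Student's t-test on triplicate measurements; SciPy is used here for illustration rather than the SPSS workflow, and the numbers are invented:

import numpy as np
from scipy import stats

control = np.array([1.00, 0.95, 1.05])     # illustrative triplicates
treated = np.array([0.52, 0.61, 0.58])

t, p = stats.ttest_ind(control, treated)   # Student's t-test, equal variances
print(f"mean+/-SD control: {control.mean():.2f}+/-{control.std(ddof=1):.2f}, "
      f"treated: {treated.mean():.2f}+/-{treated.std(ddof=1):.2f}, P={p:.4f}")
print("significant" if p < 0.05 else "not significant")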
Results
Inhibition of EMT by IL-32α in a dose-dependent manner in pancreatic cancer cell lines

In our experiment, we used serum-free medium to avoid the growth factors contained in serum. We treated PANC-1 and SW1990 with 0, 10, 25, and 50 µg/mL IL-32α. The EMT status of the pancreatic cancer cells was then assessed after treatment with IL-32α for 24 hours. First, we investigated EMT-related genes in the pancreatic cancer cell lines. The mRNA levels of the E-cadherin gene (CDH1), the N-cadherin gene (CDH2), Vimentin, Snail, and Zeb1 were quantified by real-time PCR as shown in Figure 1A and B, and the results suggested that IL-32α reduced the mRNA expression of the N-cadherin gene (CDH2), Vimentin, Snail, and Zeb1 and increased the mRNA expression of the E-cadherin gene (CDH1) in a dose-dependent manner. However, Snail mRNA expression was increased by IL-32α treatment in the SW1990 cell line.
We also detected several protein levels by Western blotting, including the epithelial-like marker E-cadherin and the mesenchymal-like markers N-cadherin, Vimentin, Snail, and Zeb1. As shown in Figure 1C-F, N-cadherin, Vimentin, Snail, and Zeb1 expression levels are reduced, while the protein levels of E-cadherin are increased after treatment with IL-32α in a dose-dependent manner.
To verify that IL-32α suppressed the EMT process, the cellular protein levels of N-cadherin and Vimentin were examined by immunofluorescence; the results showed that these two proteins were decreased after IL-32α treatment in a dose-dependent manner, consistent with inhibition of EMT (Figure 1G and H). N-cadherin and Vimentin are well-defined mesenchymal-like markers of EMT, and the progression of EMT is closely associated with their expression levels. Taken together, our findings revealed that IL-32α was able to reverse the EMT process in the PANC-1 and SW1990 cell lines.
Effect of IL-32α on the expression of genes associated with invasion and metastasis in pancreatic cancer cell lines

Cadherin switching (from E-cadherin to N-cadherin) and increased expression of MMPs are related to the acquisition of invasive and metastatic properties (Figures 1A and B and 2A and B). Decreased mRNA levels of MMP2, MMP7, and CDH2, in combination with elevated mRNA levels of CDH1 and MMP9, after IL-32α treatment suggested that IL-32α can reverse EMT in pancreatic cancer cell lines. IL-32α also suppressed the protein levels of MMP2 and MMP9 in a dose-dependent manner (Figure 2C-F). Furthermore, IL-32α upregulated the protein expression of E-cadherin, whereas the N-cadherin protein level was decreased (Figure 1C-F). It is well known that E-cadherin increases the ability to form cell-cell junctions and, thus, adhesion. As shown earlier, our results indicated that IL-32α partially inhibited metastasis and invasion by affecting proteolytic activation and adhesive activity.

Suppression of activation of JAK2/STAT3 signaling proteins following exogenous IL-32α treatment in pancreatic cancer cell lines PANC-1 and SW1990

The JAK/STAT3 pathway is known to be involved in the EMT process and MMP expression in pancreatic cancer. Our Western blotting results showed that exogenous IL-32α inhibited JAK2, p-JAK2, and p-STAT3 in a dose-dependent manner but had little effect on total STAT3 expression (Figure 3A-D).
Exogenous IL-32α inhibits IL-6-induced EMT of pancreatic cancer cell lines PANC-1 and SW1990

Many studies have shown that IL-6 can induce EMT in various cancers. Our results also revealed that human recombinant IL-6 (100 ng/mL) induced EMT in PANC-1 and SW1990 (Figure 4A-F). We observed changes in cell morphology; in particular, a transition to a spindle-shaped morphology was detectable in the PANC-1 and SW1990 cell lines after treatment with IL-6 for 24 hours (Figure 4G and H). Our studies indicated that EMT characteristics and JAK2/STAT3 activity were both inhibited by IL-32α (Figures 1 and 3). Growing evidence has suggested that stimulation of the JAK/STAT3 signaling pathway might enhance the process of EMT in cancer cells.21,25 IL-6 has been used to activate the JAK2/STAT3 signaling pathway in vitro.24 Hence, we attempted to suppress JAK2/STAT3 signaling by treatment with IL-32α and to activate it by treatment with IL-6 in pancreatic cancer cells, and then observed the effects on EMT-related molecular markers. We found that IL-32α stimulation decreased the levels of p-JAK2, JAK2, and p-STAT3 in cells induced with IL-6, although total STAT3 levels did not change in the treated cells. Using real-time PCR, we also determined the mRNA levels of E-cadherin, N-cadherin, Vimentin, Snail, and Zeb1 in IL-6-induced PANC-1 and SW1990 cells after treatment with IL-32α for 24 hours (Figure 4A and B). The real-time PCR results were in line with the Western blotting results, indicating that the mesenchymal markers N-cadherin, Snail, Vimentin, and Zeb1 followed the same trend as p-STAT3, whereas E-cadherin changed in the opposite direction. Taken together, these data showed that exogenous IL-32α inhibited IL-6-induced EMT as well as inactivating the JAK2/STAT3 signaling pathway in the pancreatic cancer cell lines PANC-1 and SW1990.
Exogenous IL-32α inhibits IL-6-induced migration and invasion of pancreatic cancer cell lines PANC-1 and SW1990
Our results also demonstrated that IL-6 can enhance the expression of genes that facilitate metastasis and invasion (Figure 6A-F). Additionally, to evaluate the effects on the metastasis and invasiveness of pancreatic cancer cells treated with IL-32α and IL-6, we performed wound healing assays and invasion assays in vitro. Our experimental results indicated that IL-6 not only enhanced the migration rate but also promoted the invasiveness of the pancreatic cancer cell lines PANC-1 and SW1990 (P<0.05) (Figure 7A-D). Subsequently, we investigated whether IL-32α could weaken the IL-6-induced invasiveness and metastasis in PANC-1 and SW1990 cells. After treatment with IL-32α, the mRNA levels of MMP2, MMP7, and MMP9 and the protein expression of MMP2 and MMP9 were significantly decreased in IL-6-induced PANC-1 and SW1990 cells (Figure 6A-F). According to the migration rate determined for each image and the number of invasive cells counted on each filter, IL-32α markedly reduced the metastasis and invasiveness of PANC-1 and SW1990 cells (P<0.05) compared with IL-6-treated cells (Figure 7A-D). Taken together, our study results indicated that IL-32α was able to weaken IL-6-induced migration and invasion in the pancreatic cancer cell lines PANC-1 and SW1990.
Discussion
Previous studies indicated that IL-32α was involved in many tumor biological processes, including promoting inflammation, angiogenesis, and cell apoptosis. However, its effects on pancreatic cancer metastasis and EMT-relevant signaling pathways had not yet been revealed. In this study, we found that exogenous IL-32α could deactivate JAK2/STAT3 signaling and suppress EMT and MMP secretion in pancreatic cancer cells in a dose-dependent manner. IL-6 was used to induce EMT and facilitate invasiveness and metastasis in pancreatic tumor cells in our study, and we found that IL-32α could reverse IL-6-induced EMT, invasiveness, and metastasis in pancreatic cancer cells in vitro. Pancreatic carcinoma is one of the most malignant tumor diseases. It is believed that the high invasiveness of pancreatic cancer cells plays a critical role in the disastrous prognoses associated with this disease. EMT is a pivotal biological process in cancer progression that enables the initial tumor cells to obtain invasive and metastatic properties.12 It has been reported that EMT has close relationships with lymph node metastasis, portal vein invasion, and long-term survival in pancreatic carcinoma.26,27 Therefore, EMT is becoming an increasingly important potential clinical target of pancreatic cancer therapy. In our research, IL-32α was shown to reverse the EMT phenotype of pancreatic cancer (Figure 1). We believe that this finding indicates that IL-32α has the potential to influence the biological properties of pancreatic cancer. Another significant factor was MMPs, which have been regarded as effective regulators of invasion and metastasis in pancreatic cancer.28 MMPs are a family of endopeptidases with proteolytic activity to degrade the basement membrane in the process of EMT. MMP secretion has been verified to increase invasiveness, induce chemoresistance, and promote angiogenesis in pancreatic cancer.29-31 In our research, we confirmed that the pancreatic cancer cell lines PANC-1 and SW1990 secrete high levels of MMP2, MMP7, and MMP9 and that these MMPs were downregulated by IL-32α. Regarding the influence of IL-32 on EMT, migration, and invasion, researchers hold different or even opposite views. Jeong et al32 reported that dysregulation of IL-32β stimulates migration through the VEGF-STAT3 signaling pathway. IL-32 has also been reported to facilitate MMP2 and MMP9 expression in primary lung adenocarcinoma via nuclear factor-κB activation.33 First, we thought that different types and amounts of IL-32 receptors are expressed in different types of cancer, which could explain the different effects caused by IL-32 isoforms. Moreover, we accidentally found that high concentrations of IL-32α can increase the expression of proteins associated with apoptosis in vitro (data not shown). Thus, different concentrations of IL-32α were considered to be another factor influencing EMT, migration, and invasion.
It has been reported that the secretion of IL-6 by pancreatic cancer tissues is markedly higher than that of tissue adjacent to the carcinoma. 34 Therefore, our research employed IL-6 to induce EMT via activation of the JAK2/STAT3 signaling pathway in pancreatic cancer cells, simulating the tumor microenvironment in vivo. Numerous studies have indicated that elevated levels of IL-6 protein and mRNA in serum and tumor samples from patients with pancreatic cancer are related to increased tumor size, lymphatic metastasis, distant metastasis, and tumor progression. 35,36 In our study, IL-6 also induced EMT and enhanced the expression of MMPs, including MMP2, MMP7, and MMP9, in vitro. IL-6, which is secreted by cancer cells and macrophages in the tumor microenvironment, directly binds to its receptor to activate JAKs through certain downstream signaling pathways; in turn, this process increases the activation of STAT3, which is involved in pancreatic cancer initiation and metastasis. Our results showed that IL-6 activation of the JAK2/STAT3 signaling pathway was suppressed by IL-32α, which was consistent with its effects on the expression of genes associated with metastasis and invasion. Moreover, it has been confirmed that the IL-6/JAK2/STAT3 pathway plays an important role in pancreatitis-induced, Kras-dependent pancreatic carcinogenesis. 37,38 As a potent inhibitor of JAK2/STAT3 signaling, IL-32α also shows the potential ability to block pancreatic cancer initiation.
Moreover, during our study, we assessed the autonomous phosphorylation of STAT3 (p-STAT3) in four pancreatic cancer cell lines: SW1990, PANC-1, AsPC-1, and BxPC-3. All four cell lines showed high levels of autonomous p-STAT3. Although STAT3 is not considered a classic EMT-interacting pathway, a recent study confirmed that it also contributes to EMT through comprehensive alterations of transcription factors, such as Zeb1. 39,40 Our study likewise found that Zeb1 expression levels were increased by elevated p-STAT3 signaling in PANC-1 and SW1990 cells, in accordance with these previous studies. We propose that the significant suppression of high levels of autonomous p-STAT3 by IL-32α could explain why it inhibits EMT and invasiveness in pancreatic cancer while promoting these properties in some other types of cancer. High levels of p-STAT3 may play an important role in sustaining the EMT state and invasiveness in pancreatic cancer cells. Finally, we cannot rule out the possibility of other latent mechanisms contributing to the suppression of IL-6-induced EMT, migration, and invasion by IL-32α. Based on the above evidence, the relationship between the effect of IL-32α on EMT and its effect on tumor migration and invasion still requires further investigation.
Our study suggests that IL-32α may have potential for clinical application as an adjuvant treatment for pancreatic cancer. Clinical pancreatic cancer is often highly invasive, with a very short course before death. The inhibition of EMT and MMPs by IL-32α might postpone the progression of pancreatic cancer. As IL-32α is an endogenous cytokine, we consider that its upregulation could be a potential strategy to suppress invasion, metastasis, and chemoresistance. This approach would have some distinct advantages compared with traditional chemotherapeutic drugs. For example, IL-32α lacks immunogenicity and has low cytotoxicity, so patients might be spared many of the side effects and much of the myelosuppression caused by traditional chemotherapeutic agents. Of course, all of these biological functions need further study in vivo.
In conclusion, we found that exogenous IL-32α deactivates JAK2/STAT3 signaling, reverses the process of EMT, and decreases MMP secretion in pancreatic cancer cells. Additionally, our investigation suggests that IL-6-induced EMT, migration, and invasion can be inhibited by IL-32α.
Thus, IL-32α may be a potential therapeutic agent, and the JAK2/STAT3 signaling pathway may represent a novel target for pharmacological intervention in the management of pancreatic cancer EMT, metastasis, and invasiveness in the future.
|
2018-04-03T00:26:30.370Z
|
2016-07-11T00:00:00.000
|
{
"year": 2016,
"sha1": "c0e2a335e1e35f88b2f2cc835bc2aa23df5feaec",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=31292",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b7b085f325db0fecc9d773755599ce20d7e01028",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
42086319
|
pes2o/s2orc
|
v3-fos-license
|
Structural heterogeneity and diffuse scattering in morphotropic lead zirconate-titanate single crystals
Complementary diffuse and inelastic synchrotron X-ray scattering measurements of lead zirconate-titanate single crystals with composition near the morphotropic phase boundary (x=0.475) are reported. In the temperature range 293 K < T < 400 K a highly anisotropic quasielastic diffuse scattering is observed. Above 400 K this scattering disappears. Its main features can be reproduced by a model of inhomogeneous lattice deformations caused by inclusions of a tetragonal phase in a rhombohedral or monoclinic phase. This observation supports the idea that PZT at its morphotropic phase boundary is essentially structurally inhomogeneous.
Lead zirconate-titanate (PbZr1−xTixO3, PZT) is one of the most technologically important ferroelectrics [1,2]. Being widely employed in practice, PZT is also a model system representing ferroelectric solid solutions with a morphotropic phase boundary (MPB). Understanding the mechanisms leading to the very high dielectric and piezoelectric responses near the MPB is essential for the strategic design of new and improved materials, particularly the ecologically friendly lead-free PZT counterparts [3]. Numerous theoretical and experimental studies performed on different lead-containing MPB ferroelectrics produce a highly complex and often controversial picture. Many of these aspects are covered in a 2006 review by Noheda and Cox [4] and references therein. More recent trends are briefly highlighted in a 2009 editorial review by Kreisel et al. [5]. Recently, morphotropic PZT single crystals of various compositions have become available [6], which has opened up new experimental possibilities. The first single-crystal diffraction experiments [7,8] did not provide a final conclusion regarding the true microscopic structure of PZT, but allowed several important conclusions to be drawn. On the basis of a neutron diffraction study of morphotropic PZT with x=0.46, it was shown [7] that the low-temperature monoclinic Cc phase should be ruled out as a ground state and a coexistence of rhombohedral and monoclinic Cm domains should be considered instead. A high-resolution X-ray diffraction study [8] also supports a phase coexistence model for that composition. By using 2-dimensional single-crystal scattering maps instead of 1-dimensional powder spectra, the authors succeeded in resolving otherwise overlapping Bragg reflections and demonstrated the presence of more than one phase. The idea of phase coexistence was also supported by recent studies by anelastic and dielectric spectroscopy [9] and neutron powder diffraction [10].
It has also been pointed out a number of times that morphotropic PZT can be inhomogeneous on the nanoscale. This point of view is supported by the observation of nanometric contrast fluctuations within micrometer-scale domains in PZT by transmission electron microscopy [11,12]. Twinned nanodomains were also considered as a cause of the unusual optical isotropy [6] revealed in the tetragonal phase of PZT with x=0.46. From another point of view [13], the nanoscale heterogeneity in PZT can be connected with regions of short-range correlated monoclinic ionic displacements which on average produce diffraction patterns compatible with rhombohedral and tetragonal symmetry at different sides of the phase diagram. It was also suggested, on the basis of single-crystal inelastic X-ray scattering [14], that morphotropic PZT has relaxor-like, relaxational-type zone-boundary lattice dynamics and shares with relaxors some intrinsic nanoscale inhomogeneity.
A powerful technique for studying structural and other types of inhomogeneities in crystals is diffuse scattering (DS). In particular, it has proved useful in studies of structural instabilities in ferroelectrics and related systems [15]. To the best of our knowledge, no publications have been available in which diffuse scattering was observed or interpreted in PZT, except the electron diffraction study by Glazer et al. [13]. The authors report the observation of DS in both Zr-rich and Ti-rich compositions, but not in the ones close to the morphotropic phase boundary. By using single-crystal synchrotron X-ray scattering we show that diffuse scattering indeed exists in the morphotropic composition of PZT, evidencing its structural heterogeneity.
Morphotropic PbZr1−xTixO3 single crystals with PbTiO3 content x=0.475 were grown by a top-seeded solution method. For the X-ray measurements a stick-shaped sample with about 100 × 100 micrometer cross-section was prepared by slicing, polishing and subsequent etching in HCl. Diffraction and diffuse scattering measurements were carried out at the Swiss-Norwegian beamlines at the ESRF using a KUMA (Oxford Diffraction) diffractometer with a CCD detector. A locally constructed heat blower was used for heating up to 773 K.
At room temperature we obtained the diffuse scattering distributions in the (0 0 4) and (-3 0 1) zones (Figs. 1a and 1c). They are highly anisotropic and appear to resemble the DS shapes in relaxors [16,17]. On heating, this strong DS disappears at temperatures between 373 K and 423 K, and only a weak, but also anisotropic, diffuse halo remains up to 773 K. A characteristic distribution of this high-temperature DS is shown in Fig. 1b. On cooling, the strong DS reappears in the same temperature region, but according to our measurements it is systematically less intense than before heating. The temperature dependence of the DS intensity is presented in Fig. 1d. The intensity points on that plot correspond to the values of the parameter I_0 obtained by the data fits described below.
A log-log plot of the diffuse scattering profile along the high-intensity direction [0 1 -1] in the (0 0 4) zone is presented in Fig. 2. The data for q < 0.02 r.l.u. are spoiled. To distinguish the nature of this DS we additionally performed an inelastic X-ray scattering (IXS) experiment at the ID28 ESRF beamline. The resolution was about 3 meV. The IXS maps along the diagonal (2-h,h,0), longitudinal (2+h,0,0) and transverse (2,k,0) directions are presented in Fig. 3. These maps demonstrate that the maximum signal corresponds to elastic scattering (E=0) for the diagonal and longitudinal directions, but no elastic line is observed for the transverse direction. This contrasts with the DS in relaxors, where temperature-dependent DS exists in the transverse direction but is almost absent in the longitudinal direction [18]. Surprisingly, we do not see any increase of the X-ray DS intensity near T = 663 K, where the cubic-to-tetragonal transition takes place. In fact this transition is accompanied by a strong central peak in Brillouin scattering [19], most probably associated with ferroelectric fluctuations. We do not see such fluctuations by X-rays near the high-temperature transition and thus do not expect to observe them near the low-temperature transition. In this way we interpret the strong increase of DS intensity below 423 K as a sign of developing heterogeneity. A starting point for interpreting this heterogeneity can be set up on the basis of previous results that indicate the simultaneous presence of multiple phases. We start from the assumption that a host phase of specific symmetry contains clusters of a different symmetry. In this case the DS can be described by terms corresponding to the form factor of the clusters and to the impact of these clusters on the matrix [20]:

I(q) ∝ N_d e^{-2W} Σ_α |Δf s_α(q) + f_Q A_{qα}|²

The intensity is proportional to the number of defect centers N_d, the Debye-Waller factor e^{-2W} and a sum of λ additives. Each additive describes the scattering due to a particle of orientation α. The shape of the particle is represented by the Fourier transform of the corresponding shape function s_α(q). Δf represents the difference between the structure factors of the host phase and the clusters. The term f_Q A_{qα} describes the scattering due to the elastic deformations caused in the matrix by the particles. This latter term is assumed to be dominant in comparison with Δf, since the structures of the matrix and the particles in PZT are expected to be very close. When the particles are sufficiently small we may also neglect the form factor and focus our consideration on the elastic deformations. Characteristic 2-D distributions of DS are tabulated in Ref. 20 for many simple defect symmetries, and one may easily find that tetragonal defects in a cubic matrix will cause DS very similar to the maps in Fig. 1. We find that the best agreement is achieved when the symmetry of the defects is described by a characteristic tensor L with only non-zero elements L_xx = L_yy = −2L_zz. Defects of this type tend to compress the surrounding lattice in the x and y directions while elongating it in the remaining z direction, or vice versa. The volume of the unit cell tends to be preserved. The elastic constants of morphotropic PZT single crystals, needed for the calculations, have not yet been determined experimentally, but ceramics data (see Ref. 21 and references therein) and theoretical estimations [22] are available. Pseudocubic elastic constants extracted from ceramics data [21] allow us to obtain a satisfactory qualitative description of the DS distributions.
However, the best agreement is found with slightly changed constants c11 = 135, c12 = 75 and c44 = 70 GPa. The corresponding comparison is presented in Fig. 4.
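As an illustration of the kind of calculation behind such maps, the following minimal Python sketch evaluates a Huang-type diffuse intensity in the (0 K L) plane for tetragonal defects with characteristic tensor L_xx = L_yy = −2L_zz embedded in a cubic matrix with the elastic constants quoted above. The dipole-force formalism, normalization, and q-range are simplifying assumptions for illustration, not the authors' actual calculation.

```python
import numpy as np

# Pseudocubic elastic constants quoted in the text (GPa)
c11, c12, c44 = 135.0, 75.0, 70.0

def cubic_stiffness(c11, c12, c44):
    """Rank-4 stiffness tensor C_ijkl for cubic symmetry."""
    C = np.zeros((3, 3, 3, 3))
    for i in range(3):
        for j in range(3):
            C[i, i, j, j] += c12
            C[i, j, i, j] += c44
            C[i, j, j, i] += c44
    for i in range(3):
        C[i, i, i, i] += c11 - c12 - 2.0 * c44
    return C

def huang_intensity(Q, q, dipoles, C):
    """Huang diffuse intensity ~ sum over defect orientations of
    |Q_i G_ij(q) P_jk q_k|^2, where G(q) is the inverse of C_ikjl q_k q_l.
    Because G ~ 1/q^2, the intensity follows a q^-2 law away from the peak."""
    Phi = np.einsum('ikjl,k,l->ij', C, q, q)
    G = np.linalg.inv(Phi)
    return sum(np.einsum('i,ij,jk,k->', Q, G, P, q) ** 2 for P in dipoles)

C = cubic_stiffness(c11, c12, c44)
# Characteristic tensor with L_xx = L_yy = -2 L_zz, in its three cubic
# orientations (arbitrary strength, tetragonal axis along x, y or z).
dipoles = [np.diag(d) for d in ([1.0, -2.0, -2.0],
                                [-2.0, 1.0, -2.0],
                                [-2.0, -2.0, 1.0])]

# Map of the diffuse intensity in the (0 K L) plane around the (0 0 4) peak.
G004 = np.array([0.0, 0.0, 4.0])
dq = np.linspace(-0.15, 0.15, 61)
I_map = np.zeros((dq.size, dq.size))
for a, k in enumerate(dq):
    for b, l in enumerate(dq):
        q = np.array([0.0, k, l])
        if np.allclose(q, 0.0):
            I_map[a, b] = np.nan        # Bragg position, expression diverges
            continue
        I_map[a, b] = huang_intensity(G004 + q, q, dipoles, C)
```

Plotting I_map on a logarithmic scale reproduces the qualitative features discussed in the text: a zero-intensity transverse plane in the (0 0 L) zone, non-zero longitudinal scattering, and the q^-2 fall-off.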
Despite the apparent simplicity of our model, we find it very reasonable. While we do not know the exact symmetry of the host phase, we may assume it to be close to cubic. First, because the deviations from the cubic structure, at least for compositions close to the MPB, are rather small. Secondly, after averaging over all possible orientations of the monoclinic/rhombohedral domains, the matrix will appear effectively cubic. The assumption of tetragonal symmetry of the clusters is also well founded, since previous studies indicate signs of a tetragonal phase in the MPB region. Finally, this model reproduces all the main peculiarities of the DS revealed by our experiments. It gives a zero-intensity transverse plane in high-symmetry (0 0 L) zones and non-zero longitudinal scattering. It also reproduces the observed q^-2 scattering law. This combination could not be reproduced by purely form-factor-based models such as the ones proposed in Ref. 23. In summary, in this Letter we reported complementary diffuse and inelastic X-ray scattering measurements on a morphotropic PZT single crystal that in tandem allow us to point precisely to the most probable microscopic organization of the material below the morphotropic phase transition temperature T_MPB. When the temperature falls below T_MPB, the tetragonal phase is not completely destroyed but remains in the form of local inclusions within the host phase of a different symmetry. We do not see any decline in the corresponding DS down to room temperature, which indicates a high stability of the proposed structurally heterogeneous state.
It is a pleasure to acknowledge P. M. Gehring and A. K. Tagantsev for many useful discussions and various suggestions. The work at SPbSPU was supported by Federal Program "Research and development on high-priority directions of improvement of Russia's scientific and technological complex" for 2007-2013 years and by grant of the St.-Petersburg government. The work at Ioffe institute was supported by RFBR grant 11-02-00687-a. The work at Institute of Physics was supported by the Czech Science Foundation (GACR P204/10/0616). The work at SFU was supported by the U.S. Office of Naval Research (Grants No N00014-06-1-0166 and N00014-11-1-0552) and the Natural Sciences and Engineering Research Council of Canada.
|
2012-04-26T10:35:20.000Z
|
2012-04-26T00:00:00.000
|
{
"year": 2012,
"sha1": "7d6d9fc9103591f61f87ccf3ec7a30466b229471",
"oa_license": "publisher-specific, author manuscript",
"oa_url": "https://link.aps.org/accepted/10.1103/PhysRevLett.109.097603",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "7d6d9fc9103591f61f87ccf3ec7a30466b229471",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science",
"Physics"
]
}
|
269655650
|
pes2o/s2orc
|
v3-fos-license
|
Impact of Profitability of Ukrainian Enterprises on Their Bankruptcy
The paper applies correlation and regression analysis to assess the impact of the profitability of operating activities on the number of bankruptcy cases completed with the approval of the liquidator's report. Based on statistical data from the State Statistics Service of Ukraine (SSSU) for the period from 2014 to 2021, the analysis shows a negative relationship between the profitability of operating activities of Ukrainian enterprises and the number of completed bankruptcy cases approved by the liquidator's report (r = -0.86; D = 0.74). It has been determined that the operating activity of Ukrainian enterprises accounts for 74% of all factors affecting the number of bankruptcy cases completed with the approval of the liquidator's report. The validation of the constructed regression equation and the estimation of its parameters confirm its statistical reliability and alignment with real economic processes. Specifically, the Fisher criterion (F = 4.11) exceeds the tabulated value (Ft = 2.45), i.e., F > Ft; Se = 0.45; C_95% = 1.96. Based on the constructed equation, the number of bankruptcy cases completed with the approval of the liquidator's report was forecasted, which is an important task in preventing the financial difficulties faced by companies.
INTRODUCTION
In a market economy, bankruptcy is an integral part of companies' functioning and a mechanism that allows unprofitable and insolvent enterprises, including those with negative profitability, to be removed from the market. On the other hand, this mechanism, when accompanied by rehabilitation measures and amicable agreements, allows them to resume operations and ensures their continued development (Prusak, 2018; Civelek et al., 2022). Between 2021 and 2022, the business activity situation for the vast majority of EU enterprises deteriorated, as indicated by the Business Registration and Bankruptcy Index (BRBI). With an average BRBI value of 121.2% across the EU27 in 2022, the business activity situation significantly worsened in some countries. For instance, in France, this value is 1.55 times higher than the average, and in Romania, it is 1.51 times higher (Eurostat, 2023). The average value of the business demography indicator known as the Death Rate of All Enterprises (DRE), as of January 1, 2021, for 14 countries that are members of the Organization for Economic Cooperation and Development (OECD), was 6.99% (OECD, 2020). This indicates that, on average, every fourteenth enterprise is liquidated throughout the year, particularly through bankruptcy proceedings. In Ukraine, the situation is more severe; according to the DRE, on average, one in ten enterprises is liquidated (SSSU, 2019).
To prevent company bankruptcies and their subsequent liquidation, it is essential to explore additional tools that can effectively manage relevant financial indicators (Pardal et al., 2021; Kislovska & Tamosiuniene, 2022; Roshchyk et al., 2022). These indicators include solvency; liquidity; the coverage ratio; expenses and financial results from core, operating, financial, and investment activities; net profit; profitability, etc. Indicators such as capital turnover and profitability, especially from operating activities, depend on how efficiently a company utilizes its assets. The increase in the profitability of operating activities fosters equity growth, enabling companies to attract additional credit resources while minimizing the risk of insolvency and, consequently, reducing the likelihood of bankruptcy. Furthermore, the level of an enterprise's liquidity is so critical in determining the likelihood of bankruptcy that in Germany, proof of a liquidity (solvency) deficit of a legal entity serves as the basis for initiating insolvency proceedings under § 17 of the Insolvenzordnung (1994), signifying the existence of the debtor's inability to pay. As evident from court practice, a debtor's liquidity deficit of up to 10% is allowed, as stated by the German Supreme Court in its judgment of May 24, 2005 in case IX ZR 123/04 (URTEIL, 2005): "If the deficit is less than 10%, it is insufficient to prove insolvency." Additionally, it is noteworthy to mention the recent stance of the German Supreme Court, outlined in its judgment of June 28, 2022, in case II ZR 112/21 (URTEIL, 2022): "It is therefore considered acceptable to demonstrate insolvency by means of the liquidity status as of the key date, combined with a financial plan for three weeks after the key date, where daily deposits and withdrawals are compared...". Thus, the control over positive liquidity by German companies should now be strengthened.
Simultaneously, liquidity, solvency, and coverage ratio indicators play a pivotal role not only in Germany but also in other countries. For example, in England, by virtue of the provisions of Article 123(1)(e) of the Insolvency Act 1986 (Insolvency Act, 1986), the court can initiate winding-up proceedings if it is proven that the debtor will be unable to meet its obligations as they fall due. Moreover, stagnation of the mentioned indicators can result in payment suspension, and this circumstance, in line with the provisions of Art. L631-1 and Art. L640-1 of the French Commercial Code (French Commercial Code, 2022), serves as the basis for initiating regular rehabilitation or liquidation procedures, respectively. Certainly, in Ukraine, inability to pay also serves as a ground for initiating bankruptcy proceedings, and given an enterprise's low liquidity, the enterprise could easily become insolvent. Hence, the paper proposes conducting a study on the impact of the profitability of operating activities on the number of bankruptcy cases completed with the approval of the liquidator's report. The aim is to identify potential areas for decreasing the probability of companies' bankruptcy and liquidation.
LITERATURE REVIEW
Financial difficulties of companies profoundly influence the risk of their bankruptcy. It has been found that profitability has a significant negative impact on financial distress (Dankiewicz, 2020; Oktari et al., 2023). Profitability refers to a company's operational efficiency, which is determined by its ability to generate profits (Susanto et al., 2022). Alongside other metrics like liquidity and solvency, profitability serves as a vital measure to assess a company's efficiency. This indicator is used to evaluate the likelihood of financial difficulties, including bankruptcy (Poliakov et al., 2023). The research conducted on the use of the profitability indicator (Albulescu, 2015) has confirmed its significant negative impact on financial difficulties, a conclusion supported by other scholars (Wibowo & Susetyo, 2020; Vu & Nwachukwu, 2021). Return on assets also has a negative impact on financial distress and stands as a crucial bankruptcy indicator (Putri & Sutrisno, 2023).
The causes of bankruptcy can stem from economic and financial factors, or a combination of both. The developed indicator, known as the bankruptcy index, which combines the profitability and leverage of bankrupt firms, led to the conclusion that profitability influences the likelihood of bankruptcy. Consequently, this insight enables more effective management strategies. Viable firms can be reorganized to sustain profitability, while unviable ones can be liquidated (Aguiar-Díaz & Ruiz-Mallorquí, 2015). Revenue management is a crucial strategy for minimizing the risk of company bankruptcy (Biddle et al., 2020). The findings of the study have demonstrated that the likelihood of a company experiencing a financial crisis is contingent on various factors, including liquidity, profitability, asset productivity, market capitalization, and leverage. It is stressed that companies should carefully monitor their financial indicators, particularly operational profitability and market metrics, to mitigate the risk of bankruptcy (Rachman, 2022). Based on the use of logistic regression, it has been shown that financial indicators affect the prediction of financial difficulties, particularly the bankruptcy of enterprises. Thus, return on assets was found to have a positive impact on the financial distress of companies (Paramartha & Wiagustini, 2021; Kudej et al., 2021).
A study of the impact of profitability management on bankruptcy risk has shown that there is no connection, but if companies implement several business leadership strategies in their activities, this significantly reduces the risk of bankruptcy (Agustia et al., 2020). In addition, the risk of company bankruptcy significantly affects the decisions of all stakeholders (Lukason & Mifiano, 2019; Lesníková et al., 2022), especially through the use of models and financial ratios that allow it to be assessed. Thus, based on regression analysis, it was found that the profitability ratio has a negative impact on the financial difficulties faced by companies (Kalbuana et al., 2022). A study of the Indonesian Stock Exchange (assessing the relevant statistical base for the period 2015-2017) based on correlation and regression analysis showed that return on assets has a significant negative effect on the financial difficulties of the analyzed companies (Moch et al., 2019). Furthermore, a study employing correlation analysis, using the banking sector of Iran as an illustrative case, emphasizes the correlation between profitability, competition, and instances of bank failures (Badirkhani, 2019).
Using profitability as an intermediate variable, the impact of liquidity, operating capacity and leverage on financial distress, particularly bankruptcy, in manufacturing firms has been determined (Kozlovskyi et al., 2020). Leverage and profitability have a significant impact on the financial difficulties of these companies. Profitability proved to be a partial mediator of the relationship between liquidity, leverage, and operational capacity to overcome financial difficulties. The study concluded that indicators such as return on equity, return on investment, and the debt-to-equity ratio of companies significantly affect financial challenges, especially bankruptcy. Promising recommendations were made for predicting bankruptcy, emphasizing that if operating costs are efficient, the profitability of operating activities will be higher and the risk of bankruptcy will be lower (Kadarningsih et al., 2021).
A correlation analysis of Malaysian companies in 2012-2014 shows that large companies with efficiently managed assets improve operating income and, therefore, ultimately improve operating profitability. It is concluded that there is no significant relationship between liquidity (the current ratio) and profitability, and a negative relationship between asset turnover and profitability (Alarussi & Alhaderi, 2018). Applying logistic regression to a study of companies listed on the stock exchange in Indonesia, it was concluded that non-financial variables (corporate governance, market information, macro factors) do not have a direct impact on bankruptcy. However, they have a significant impact on return on equity (Kozlovskyi et al., 2023), which in turn has an impact on company bankruptcy (Nuraini et al., 2021). On reviewing the above literature on the problem of financial difficulties, in particular bankruptcy, we deduced that the profitability of companies is an important factor (Kozlovskyi et al., 2021), which significantly affects the future prospects of their development and profitability. The analysis of the studies done by the above-mentioned scholars allowed us to formulate the hypothesis that the profitability of companies significantly affects financial difficulties and the risk of bankruptcy. While reviewing the literature on the problem under study, we did not find scientific works analyzing the impact of operating profitability on the probability of bankruptcy. Therefore, the paper proposes to study the impact of operating profitability (as the ratio of the enterprises' operating income to their operating expenses) on the number of bankruptcy cases of Ukrainian enterprises closed with the approval of the liquidator's report.
METHODOLOGY
The study includes the following steps: (1) analyze the value of the Business Registration and Bankruptcy Index for the period 2016-2022 for individual EU-27 countries (according to the data available in the Eurostat database); (2) analyze the Death Rate of All Enterprises in 2013-2020 for the countries that are members of the Organization for Economic Cooperation and Development (OECD) (according to the OECD statistical database and the data available); (3) analyze the value of the Death Rate of All Enterprises in 2013-2019 in Ukraine (according to the State Statistics Service of Ukraine (SSSU) database); (4) analyze the judicial statistics of the results of bankruptcy cases in Ukraine for the period 2014-2022 (according to the Judicial Statistics database of the Supreme Court of Ukraine); (5) analyze the dynamics of the level of profitability of the general and operating activities of Ukrainian enterprises, in particular by their size; (6) investigate the impact of operating profitability on the number of bankruptcy cases completed with the approval of the liquidator's report, based on the statistical database of the State Statistics Service of Ukraine and the Judicial Statistics of the Supreme Court of Ukraine; (7) build a correlation and regression equation of the impact of operating profitability on the number of bankruptcy cases completed with the approval of the liquidator's report, with justification of its statistical reliability (Ilyash et al., 2020; Shevchuk et al., 2023); and (8) forecast the number of bankruptcy cases completed with the approval of the liquidator's report, taking into account the constructed correlation and regression equation.
According to the methodological guidelines for using enterprise financial statements for statistical purposes (State Statistics Service of Ukraine, 2014), the sources enabling statistical analysis of operational profitability include the following financial statement forms: "Balance Sheet" (Form Number One), "Income Statement" (Form Number Two), and "Notes to the Annual Financial Statements" (Form Number Five).
The study uses the profitability (loss) indicator of the operating activities of enterprises (excluding those primarily engaged in "Wholesale and retail trade; repair of motor vehicles and motorcycles"). This indicator is calculated according to the formula (State Statistics Service of Ukraine, 2014):

Rod = FRod / Cod × 100%, (1)

where Rod is the profitability (loss) of the operating activities of enterprises; FRod is the financial result from the operating activities of enterprises; Cod is the expenses of the operating activities of enterprises.
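A trivial illustration of this indicator in Python (the numbers are made up, purely to show the calculation):

```python
def operating_profitability(financial_result, operating_expenses):
    """Rod: financial result from operating activities divided by
    operating expenses, expressed as a percentage."""
    return financial_result / operating_expenses * 100.0

# Hypothetical example: a financial result of 126 against expenses of 1000
# gives an operating profitability of 12.6%.
print(operating_profitability(126.0, 1000.0))
```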
The impact of operating profitability (x) on the number of bankruptcy cases completed with the approval of the liquidator's report (Y) is determined by applying correlation and regression analysis. The correlation and regression analysis in assessing the impact of operating profitability on the number of bankruptcy cases (Halkiv et al., 2020) completed with the approval of the liquidator's report involves the construction of a correlation equation (formula 2) (Chatterjee et al., 2013):

Yx = a0 + a1x, (2)

where Yx is the linear equation; a0, a1 are the parameters (coefficients) of the equation; x is the influence factor.
The unknown parameters of the regression equation (a0, a1) are determined through the least squares method. To achieve this, a system of normal equations is established. The strength of the relationship is assessed using the linear correlation coefficient. The portion of variance in the analyzed performance attribute (Y) attributable to the factors (x) included in regression equation (2) is ascertained using the coefficient of determination (D). It is suggested to assess the reliability of the correlation coefficient (as well as the correlation equation as a whole) by calculating the F-criterion (F). In addition to the closeness of the relationship, the following indicators are used to assess the adequacy of the regression equation to real processes: the sample correlation coefficient (z), the standard error (Se), the lower limit of the confidence interval of the correlation coefficient (rL), and the upper limit of the confidence interval of the correlation coefficient (rU). Fig. 1 shows the algorithm for identifying the impact of operating profitability on the number of bankruptcy cases closed with the approval of the liquidator's report. The adequacy and reliability of the constructed correlation-regression equation in determining the impact of the profitability of operating activities (x) on the number of bankruptcy cases closed with the approval of the liquidator's report were assessed using MS Excel. The functions of the MS Excel statistical package were used to calculate the F-criterion and determine its tabular value.
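As an illustration of the procedure described above, the following minimal Python sketch fits the linear equation by least squares and computes r, D and an F-criterion. The data points are purely hypothetical (they are not the study's observations), and the F-statistic shown is the standard one for a one-factor regression, which may differ from the authors' exact computation.

```python
import numpy as np
from scipy import stats

# Illustrative data only: x = operating profitability (%), y = number of
# bankruptcy cases completed with the approval of the liquidator's report.
x = np.array([-4.1, 3.9, 6.0, 7.4, 8.1, 10.2, 11.5, 12.6])
y = np.array([2900, 1800, 1500, 1350, 1200, 1000, 900, 650])
n = len(x)

# Least squares estimates of the slope a1 and intercept a0
a1, a0 = np.polyfit(x, y, 1)

# Linear correlation coefficient r and coefficient of determination D
r = np.corrcoef(x, y)[0, 1]
D = r ** 2

# F-criterion for a simple one-factor regression and its tabulated value
F = (D / (1 - D)) * (n - 2)
F_table = stats.f.ppf(0.95, dfn=1, dfd=n - 2)

print(f"Y = {a0:.2f} + {a1:.2f}x, r = {r:.2f}, D = {D:.2f}, "
      f"F = {F:.2f} (Ft = {F_table:.2f})")
```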
RESULT
This section analyses the value of the Business Registration and Bankruptcy Index for the period 2016-2022 for the EU-27 countries as a whole, including countries such as Belgium, Bulgaria, Denmark, Germany, Estonia, Ireland, France, Italy, Spain, Latvia, Luxembourg, Malta, the Netherlands, Poland, Portugal, Romania, Slovenia, Slovakia, Iceland and Norway. It analyses the death rate of all enterprises in 2013-2020 in the countries that are members of the Organization for Economic Cooperation and Development (OECD) and in Ukraine for the period 2013-2019. The article examines the court statistics regarding the outcomes of bankruptcy case reviews in Ukraine from 2014 to 2022. Specifically, it focuses on various aspects, including the total number of cases completed; cases completed with the approval of the rehabilitation (restructuring) manager's report; cases completed with the approval of the composition agreement; cases completed with the approval of the liquidator's report; and cases closed due to the fulfillment of all obligations to creditors. The dynamics of the level of profitability of the general and operational activities of Ukrainian enterprises, in particular by their size (large, medium, small, micro), in 2010-2021 are considered. The influence of operating profitability on the number of bankruptcy cases closed with the approval of the liquidator's report is studied. A correlation-regression equation of the influence of the profitability of operating activity on the number of bankruptcy cases closed with the approval of the liquidator's report is constructed with justification of its statistical reliability, and a forecast of the number of bankruptcy cases closed with the approval of the liquidator's report is made taking into account the constructed correlation-regression equation.
Analysis of business demography and bankruptcy statistics
The statistical basis for the analysis of the Business Registration and Bankruptcy Index is taken from the official Eurostat website (Eurostat, 2023). The base year is 2015. A summary of the Business Registration and Bankruptcy Index in the EU countries is presented in Table 1. According to the BRBI, more cases of negative values were recorded in countries such as France, Romania, Slovakia, Latvia and Estonia. For example, in France the BRBI shows a steady upward trend, from 106.9% in 2016 to 188.1% in 2022. In other words, in 2022 the BRBI value increased by 88.1% compared to 2015, which is about four times the corresponding increase for the 27 EU countries as a whole. In Romania, the BRBI value was 95.7% in 2016 and 182.5% in 2022. Estonia, an EU country, has similar dynamics, with a BRBI of 107.1% in 2016 and 148.7% in 2021. In 2022, however, the BRBI dropped significantly to 123%, which is as close as possible to the European average. The Netherlands also shows a similar trend, with a value of 103.8% in 2016 rising to 142.2% in 2022. Other countries show unstable dynamics in the development of the BRBI. The BRBI value deteriorated significantly in 2021-2022, mainly due to the impact of the COVID-19 pandemic in the EU and globally. The statistical data on business demography are derived from various sources, including the Organisation for Economic Co-operation and Development (OECD) and the State Statistics Service of Ukraine (SSSU). The analysis utilizes the business demography indicator for enterprises, specifically the "death rate of all enterprises" (DRE); the corresponding values can be found in Table 2. The analysis of the DRE values shows that the highest death rate in the total number of enterprises was recorded in Lithuania. In 2019, the DRE was 18%, which means that almost every fifth enterprise was closed down. In OECD countries such as Denmark, Estonia, Finland, Germany, Hungary, Latvia, Poland, Slovakia, Slovenia, Spain, and the United Kingdom, the DRE is close to 10%, that is, about 10% of enterprises are liquidated annually. Turkey has a much higher value, with a DRE of about 12%. The lowest DRE values are found in the following countries: Austria, Belgium, France, Greece, and Norway. For example, in 2020, Norway had the lowest number of liquidations, with only one in 40 companies being liquidated, while in Belgium and France, approximately one in 28 companies was closed down.
Comparing the dynamics of the DRE in Ukraine, it exhibited an unstable trend from 2013 to 2020. In 2020, the DRE in Ukraine stood at 10.2%, meaning that approximately every tenth enterprise was liquidated, whereas in 2013 only about every fifteenth enterprise was liquidated. In 2019, the DRE in Ukraine was considerably higher compared to the rates in other countries listed in Table 2. For instance, it is 1.96 times higher than in Austria, 3.5 times higher than in Belgium, and 1.24 times higher than in the Czech Republic. However, it is lower compared to countries such as Estonia (by 1.0098 times), Germany (by 1.14 times), Iceland (by 1.13 times), Lithuania (by 1.76 times), Portugal (by 1.32 times), and Slovakia (by 1.019 times).
By the number of employees, companies employing up to 9 people make up the largest share of the liquidated Ukrainian enterprises in their group (12.2% in 2012) (SSSU, 2019). It is noted that the larger the number of employees at an enterprise, the lower the DRE. Thus, according to the 2019 data, for enterprises employing from 10 to 49 people the DRE was 1.8%; similarly, for those with a staff headcount from 50 to 249 it was 1%; and for those with 250 or more employees it was 0.6%. In other words, enterprises with more than 250 employees have a DRE approximately 20.33 times lower compared to enterprises with fewer than 9 employees. Court statistics on the outcomes of bankruptcy (insolvency) cases in Ukraine are presented in Table 3 (source: Judicial statistics of the Supreme Court of Ukraine, 2023).
The number of completed bankruptcy cases in Ukraine is declining every year. Thus, in 2022, in comparison to 2014, it decreased by 43%, including an 84.5% decrease in cases completed with the approval of the liquidator's report. Despite this trend, the number of cases completed with the approval of the restructuring report has been unstable, going from 4 cases in 2014 to 5 in 2020. In 2022, the number of cases completed with restructuring was 92.4 times lower than the number completed with liquidation and 77 times lower than the number approved with a settlement agreement. This trend can be attributed to the elimination of the institution (M. Draskovic et al., 2016) of a special amicable agreement as a judicial procedure with the entry into force of the Code of Ukraine on Bankruptcy Procedures in 2019. Therefore, it is likely that this statistical indicator will continue to stagnate in the future.
Analysis of profitability of the total and operating activities
Table 4 provides figures on the profitability (loss) of the operating and total activities of Ukrainian enterprises during the period 2010-2021, in particular by their size (large, medium, small and micro businesses). Statistics for 2022 were not available from the SSSU as of 10.10.2023. The information presented in Table 4 shows that from 2010 to 2021 the profitability of operating activities exhibited unstable dynamics. The highest level of return on operating activities (ROA) was recorded in 2021 at 12.6%, and the lowest in 2013 at 3.9%; in 2014, the ROA was -4.1%. Analyzing the level of ROA by enterprise size, the highest level is observed in large enterprises, with a rate of 17.1% in 2021. Micro-enterprises had an ROA 5.1 percentage points lower than large enterprises in 2021 and 0.6 percentage points lower than the average. The ROA level is higher than the level of total profitability of enterprises (TPE). In general, the ROA level was 2.5 p.p. higher than the TPE level in 2021. Comparing the ROA in 2021 with the TPE by size, the following features can be observed: the ROA levels by size (large, medium, small, and micro enterprises) are higher than the TPE levels by 4.3 p.p., 0.3 p.p., 3.9 p.p., and 4.6 p.p., respectively. It is important to note that during the period 2010-2020, small enterprises and micro-enterprises had a predominantly negative TPE, except for 2019.
The impact of operating profitability on the number of bankruptcy cases completed with the approval of the liquidator's report

To examine the impact of operating profitability on the number of bankruptcy cases completed with the approval of the liquidator's report, correlation and regression analysis was employed to establish the regression equation (formula 2), with the results summarized in Table 5. Based on the conducted correlation-regression analysis (using the data from Table 5), the following correlation-regression equation was constructed:

Y = 2308.54 - 132.9x (3)

The constructed correlation-regression equation is described by the following parameters: r = -0.86; D = 0.74; the Fisher ratio (F = 4.11) exceeds the normative (tabulated) value (Ft = 2.45), i.e., F > Ft; z = -1.29; Se = 0.45; C_95% = 1.96; rL = -0.97; rU = -0.39. The coefficient of determination indicates that operating profitability accounts for 74% of the factors affecting the number of bankruptcy cases completed with the approval of the liquidator's report, with other factors constituting the remaining 26%. This equation demonstrates a negative and inversely proportional relationship. In other words, a 1% increase in operating profitability results in a decrease of 132.9 cases of bankruptcy finalized with the approval of the liquidator's report. The constructed correlation and regression equation allows us to predict the decrease in the number of bankruptcy cases completed with the approval of the liquidator's report. Figure 2 illustrates the percentage by which the number of bankruptcy cases completed with the approval of the liquidator's report decreased or increased compared to the previous year (data from Table 5).
Using the computed data on fluctuations (both increases and decreases) in the number of bankruptcy cases completed with the approval of the liquidator's report compared to the previous year, as depicted in Figure 2, we determined an average value of -18.5% using MS Excel. Furthermore, we similarly established the average percentage growth in operating profitability for the period 2014-2021, which equates to 6.03%. In a hypothetical scenario, we aim to project the extent to which the number of bankruptcy cases completed with the approval of the liquidator's report will decrease between 2023 and 2028. For 2023, we anticipate operating profitability at a level mirroring the calculated average of 6.03%, taking into account the potential impacts of the COVID-19 pandemic and ongoing conflicts. Subsequently, our forecasts assume an 18.5% growth in operating profitability each year compared to the previous level. The forecasted data, based on the correlation and regression equation (formula 3), are visualized in Figure 3. The data show that at an operating profitability level of 6.03%, the number of bankruptcy cases completed with the approval of the liquidator's report will be 1512, and if the operating profitability of Ukrainian enterprises grows to 14.09%, the number of bankruptcy cases completed with the approval of the liquidator's report will be 447.
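A minimal Python sketch of this forecasting step, reusing the fitted equation and the growth assumptions stated above (the exact rounding applied by the authors may differ slightly from these figures):

```python
# Forecast sketch: operating profitability is assumed to start at 6.03% in 2023
# and to grow by 18.5% per year; bankruptcy cases completed with the approval
# of the liquidator's report follow the fitted equation Y = 2308.54 - 132.9x.
profitability = 6.03
for year in range(2023, 2029):
    cases = 2308.54 - 132.9 * profitability
    print(f"{year}: profitability = {profitability:.2f}%, forecast cases ≈ {cases:.0f}")
    profitability *= 1.185  # assumed annual growth of 18.5%
```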
DISCUSSION
The study showed that the business activity situation for companies in the vast majority of EU countries over the past few years has been unstable and worrying. This is evidenced by an increase in the Business Registration and Bankruptcy Index (BRBI) in the vast majority of EU-27 countries compared to the corresponding value in 2015. The death rate of all enterprises (DRE) in the EU27 and Ukraine also shows an unstable development trend. In Ukraine, almost every tenth enterprise is liquidated. This value is about twice as high as in countries such as Austria, Belgium, France, and Norway. This circumstance demonstrates the necessity of returning the institution of a special amicable agreement to the bankruptcy procedure in Ukraine. It is also worth highlighting the low efficiency of the rehabilitation procedures available in Ukraine, both the classic rehabilitation procedure in a bankruptcy case and pre-bankruptcy rehabilitation. It is also worth noting the practical effectiveness of the triad of procedures in the French insolvency framework, since providing the debtor with access to the judicial rehabilitation procedure before the moment of its insolvency, namely in the presence of low liquidity, as can be seen from the statistical data, allows the "life" of the company to be saved.
When assessing the likelihood of bankruptcy, various profitability indicators are considered, including return on assets (Paramartha & Wiagustini, 2021; Moch et al., 2019). However, the impact of operating profitability on the likelihood of bankruptcy has not been thoroughly investigated. Optimizing operating costs and increasing financial results from operating activities enable companies to generate sufficient net profit to maintain regular operations (Kozlovskyi et al., 2019). This, in turn, reduces their reliance on borrowed capital, thus enhancing solvency and liquidity. Therefore, if operating costs are managed efficiently, operating profitability will be higher and the risk of bankruptcy lower (Kadarningsih et al., 2021). This paper aims to examine the influence of operating profitability on the number of bankruptcy cases concluded with the approval of the liquidator's report.
Through calculations based on the statistical database (SSSU, 2022), a correlation and regression equation was formulated, which leads to the conclusion that the profitability of the operating activity of Ukrainian enterprises accounts for 74% of all factors affecting the number of bankruptcy cases completed with the approval of the liquidator's report. The equation indicates that this influence is negative and inversely proportional. In other words, with a 1% increase in the profitability of operating activities, the number of bankruptcy cases completed with the approval of the liquidator's report is expected to decrease by 132.9 units. The constructed correlation and regression equation enables us to predict the decrease in the number of bankruptcy cases completed with the approval of the liquidator's report.
CONCLUSION
In line with the study's objective, the paper examines how the profitability of the operating activities of Ukrainian enterprises affects the number of bankruptcy cases that are finalized with the approval of the liquidator's report. A negative correlation was found between the profitability of operating activities and the number of bankruptcy cases completed with the approval of the liquidator's report (r = -0.86; D = 0.74). This implies that in 74% of cases, the operating activity of Ukrainian enterprises affects the number of bankruptcy cases completed with the approval of the liquidator's report; in the remaining 26% of cases, other factors dominate. The devised correlation and regression equation exhibits statistical reliability and adequacy to real economic processes: the Fisher criterion (F = 4.11) exceeds the normative (tabulated) value (Ft = 2.45), i.e., F > Ft; z = -1.29; Se = 0.45; C_95% = 1.96; rL = -0.97; rU = -0.39.
The findings of the study will help Ukrainian enterprises to improve their management practices and, in particular, to reduce the likelihood of their bankruptcy and liquidation. The devised correlation-regression equation makes it possible to predict how the number of bankruptcy cases completed with the approval of the liquidator's report will change with fluctuations in the operating profitability of Ukrainian enterprises. If the level of operating profitability of Ukrainian enterprises remains at 6.03%, the number of bankruptcy cases completed with the approval of the liquidator's report is estimated to be 1512. With the growth of the operating profitability of Ukrainian enterprises to 14.09%, the number of bankruptcy cases completed with the approval of the liquidator's report will be 447.
Thus, the level of profitability of the operating activities of Ukrainian enterprises is an objective factor that affects the completion of bankruptcy cases with the approval of the liquidator's report. The management's primary objective is to prioritize the optimization of operating expenses (including material costs, labor costs, social contributions, depreciation of non-current assets, and other expenses) and financial and investment costs, and to increase revenues from the company's core, financial, and investment activities. This strategy will lead to an increase in net profit and profitability, particularly in operating activities.
A sufficient amount of net profit will reduce Ukrainian enterprises' dependence on borrowed capital, thereby improving their solvency and liquidity. Effective management of operating profitability will lower the risk of bankruptcy and liquidation of enterprises, specifically reducing the number of bankruptcy cases completed with the approval of the liquidator's report. Furthermore, it will facilitate an increase in the number of cases resolved through the rehabilitation procedure and settlement agreements.
Figure 1. Algorithm for detecting the impact of operating profitability on the number of bankruptcy cases closed with the approval of the liquidator's report.
Figure 2. Decrease (increase) in the number of bankruptcy cases finalized with the liquidator's report (compared to the previous year), %.
Figure 3. Forecast of the number of bankruptcy cases concluded with the approval of the liquidator's report for the period 2023-2028 (based on formula 3).
Table 3. Court statistics on the outcomes of bankruptcy cases from 2014 to 2022.
Table 4. Profitability of the total and operating activities of Ukrainian enterprises by size from 2010 to 2021.
Table 5. Data on the impact of operating profitability (x) on the number of bankruptcy cases completed with the approval of the liquidator's report. Source: compiled on the basis of Tables 3 and 4; SSSU, 2021; Judicial statistics of the Supreme Court of Ukraine, 2023.
|
2024-05-11T15:44:21.480Z
|
2024-04-15T00:00:00.000
|
{
"year": 2024,
"sha1": "d8ff42253bac5f22859ff586a4e801b028ea690f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.14254/1800-5845/2024.20-2.18",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "af33620bb260d5d49e031434bea87bc20d28dc53",
"s2fieldsofstudy": [
"Economics",
"Business"
],
"extfieldsofstudy": []
}
|
111385899
|
pes2o/s2orc
|
v3-fos-license
|
Income , Economic Structure and Trade : Impacts on Recent Water Use Trends in the European Union
From the mid-1990s to the recent international economic crisis, the European Union (EU27) experienced significant economic growth and a flat population increase. During these years, the water resources directly used by the EU countries displayed a growing but smooth trend. However, European activities intensively demanded water resources throughout the whole global supply chain. The growth rate of embodied water use was three times higher than the growth in water directly used by these economies. This was mainly due to the large upsurge of virtual water imports in the EU (e.g., about 25% of the change in water imports in the world was directly linked to the increasing imports of the EU27 countries). In this context, we analyze water use changes in the EU27 from 1995 to 2009, combining the production and consumption perspectives. To that aim, we use the environmentally extended input-output approach to obtain the volume of water embodied in domestic production and in trade flows at the sector and country levels. In the empirical analysis, we utilize multi-regional input-output data from the World Input-Output Database. In addition, by means of a structural decomposition analysis we identify and quantify the factors explaining changes in these trends. We focus on the roles of both domestic production and trade and estimate the associated intensity, technology and scale effects. This analysis is done for different clusters, identifying singular patterns depending on income criteria. Our results confirm the boost of demand growth in that period, the positive but negligible effect of structural change, and the decline in water intensity, which, however, was not enough to compensate for the water effects associated with the economic expansion in the period. These findings also point to a gradual substitution of virtual water imports for domestic water use. More concretely, in most countries the food industry tended to reduce its backward linkages with the domestic agricultural sector, increasing the embodied water in agricultural imports from non-European regions.
Introduction
The intense economic growth and globalization experienced in the world during the last decades have entailed large pressures on natural resources. Today, climate change has entered the political and institutional international agenda as one of the most daunting problems to be faced by humankind, with economic globalization acting as an accelerator of this process. The impact of climate change on water availability has been widely documented in the literature [1][2][3], and the intensification of water scarcity in many regions is a global concern, given the confluence of growing water demands for economic and social uses in a context of uncertain supplies [4,5]. This leads to increasing competition among users, irregularities in the availability of water resources and lower quality of water flows.
In a context of globalized economies, resources, inputs, production processes and final products are internationally interconnected through global supply chains. As Yu et al. [6] recognize, globally distributed production activities are key drivers of environmental change, posing stress on local ecosystems. This entails a growing separation between producer and consumer responsibilities as a result of the large integration of supply chains at the global level [7][8][9][10], in addition to severe regional and local water pressures related to the final consumption of goods long distances away. Thus, highlighting the links between consumers' behavior and environmental impacts through international trade is a necessary step towards more sustainable, responsible and probably fairer societies.
In this line, a remarkable case study regarding water is the European Union (EU27), which, despite representing approximately 8% of world population, is responsible for over 30% of the imports of water embodied in products exchanged through international trade, of which a notable share comes from non-European areas (non-EU). In fact, from 1995 to 2009 the EU27 countries accounted for approximately 2% of the increase in population at the global level, but for 25% of the increase in imports of virtual water in the world. This means that the strong economic expansion that took place in Europe during this period exerted a significant pressure on water resources worldwide, embodied in the goods finally demanded in the EU27. These increasing pressures on foreign water resources were not homogeneous across the EU27, but were mostly driven by the demands of high income countries. Moreover, domestic EU27 water resources were also affected. According to the European Environment Agency [11], Germany, the UK, Italy, Malta, Belgium and Spain were the countries most severely affected by water stress in Europe (the EEA measures water stress using the Water Exploitation Index plus (WEI+), an indicator that evaluates total water use as a percentage of renewable freshwater resources). Thus, the growing dependence on foreign water resources, together with the increasing water scarcity in the EU27 countries, made sustainable and effective water management practices more pressing across Europe. In this context, the analysis of global supply chains and the implications of these international structures for natural resources is important in order to evaluate global economic and environmental dependencies, to identify potential bottlenecks and to achieve a better understanding of the links between economic activity and environmental impacts in an increasingly complex and interrelated world.
Our paper aims to examine water use linked to EU27 activity during the period 1995-2009, delving into the main driving forces of this process. These years are of particular interest as representative of the acceleration of the second globalization wave since the early nineties. We propose a multi-regional input-output (MRIO) model to estimate the water footprint of the EU27 countries (and the EU region as a whole), that is, the water embodied in the goods finally consumed within the EU27 countries. Environmentally extended MRIO models acknowledge the direct and indirect links between sectors and countries along the full supply chains and allow connecting the production and consumption perspectives, also making it possible to identify the contribution of the domestic demand and production of each country and the role of imported goods and inputs. Extending these relationships to water, we relate the final consumption of goods and services with the direct and indirect water incorporated in the different stages of the production chain, identifying the sector and location where this use takes place. We distinguish between the internal water footprint, namely, the consumptive use of domestic water resources to produce goods and services that are consumed domestically, and the external water footprint or virtual water imports (VWM), in other words, the consumption of foreign water resources embodied in goods and services imported from other nations [12], which is also informative on the external dependence on foreign water resources. There is a broad literature studying the water footprint of nations [13,14] or regions [15][16][17][18], and also decomposing water intensity in an input-output framework [19], but to our knowledge none of these works have addressed the water use trends in the EU27 in relation to the distribution of income in Europe by means of a structural decomposition analysis (SDA). As stated previously, we use a multi-regional input-output model extended to water resources that allows tracking international supply chains and representing differences in production technologies [20], an accurate tool to address the consumption-based accounting of natural resources. In that way, we study the main factors potentially driving the changes in the EU27 water footprint from 1995 to 2009, evaluating the influence of water productivity, technological change and demand. We also study the contribution of these factors to the changes in the different components obtained by considering the location of water use for production and final consumption (domestic and external use of water resources, including imports and exports).
Our results highlight the role of demand growth in the EU27 as the main factor boosting water use within the EU and abroad, a quite negligible effect of technological and structural change, and a decline in water intensity, representative of an increasing productivity of water use. Additionally, our findings suggest that the increasing integration of agriculture and food production in global supply chains considerably impacted the use of resources worldwide, with a significant growth of the water embodied in agricultural imports from non-EU countries.
The rest of the article is organized as follows. Section 2 presents the methodological framework and the main data sources used for this study. Section 3 presents the main findings of our analysis and is divided into three subsections: Section 3.1 focuses on the trends of water consumption at the country level, Section 3.2 depicts the sectoral features and Section 3.3 shows the results of the structural decomposition analysis. Finally, Section 4 closes the paper with the main conclusions.
Materials and Methods
Methodologically, our starting point is an MRIO model for the world economy [21][22][23], environmentally extended for water resources. The total water flows among countries and sectors, as well as the internal and external components of the water footprints, can be obtained as follows:

Ω = Ŵ L Y    (1)

where Ω is a matrix with information on the water directly and indirectly used in the production and trade flows between the n countries and m sectors in the world, Ŵ is a diagonalized matrix of coefficients of the water used per unit of output for each country r and sector i, L represents the multi-regional Leontief inverse and Y is a block-diagonal matrix with information on the domestic and foreign demand for final consumption of households, non-profit organizations serving households and government, gross fixed capital formation and changes in inventories and valuables. More specifically, Ω can be partitioned into country blocks:

Ω = Ŵ L Y = [Ω^11 Ω^12 … Ω^1n; Ω^21 Ω^22 … Ω^2n; …; Ω^n1 Ω^n2 … Ω^nn]    (2)

where each Ω^rr is a matrix of the water used in production in region (country) r to meet its own final demand for each sector i, ∑_{r≠s} Ω^rs is the water used in other regions' production to support the final demand of region s (the VWM of region s) and ∑_{s≠r} Ω^rs is the water used in r to support the final demands of other regions, that is, the virtual water exports (VWX) of region r. Therefore, matrix Ω informs on the water embodied in trade flows and allows estimating the internal, external and total water footprints of countries. All these calculations are obtained in a matrix disaggregated by country and sector. If we focus on the parts corresponding to the EU countries, we obtain the footprints of these areas within a global MRIO framework. As shown in Equation (2), water flows can be explained on the basis of three different components: water intensity, which proxies water productivity; the technological production structure (captured by the Leontief inverse); and final demand. Variations in water productivity, structural and technological change and demand growth condition the water use patterns in the world and in the different regions of our MRIO model.
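To make the accounting concrete, the following minimal NumPy sketch computes Ω = ŴLY for a hypothetical two-country, two-sector world and extracts the internal water footprint, virtual water imports and exports of each country. The numbers are illustrative only, and final demand is collapsed into one column per destination country rather than the full block-diagonal Y described above.

```python
import numpy as np

# Toy multi-regional system: 2 countries x 2 sectors (4 producing units).
A = np.array([            # technical coefficients (inputs per unit of output)
    [0.10, 0.05, 0.02, 0.01],
    [0.04, 0.12, 0.03, 0.02],
    [0.02, 0.01, 0.11, 0.06],
    [0.01, 0.03, 0.05, 0.10],
])
Y = np.array([            # final demand, one column per demanding country
    [100.0,  10.0],
    [ 80.0,   5.0],
    [ 12.0,  90.0],
    [  8.0,  70.0],
])
w = np.array([0.9, 0.2, 1.1, 0.3])   # water use per unit of output (m3/$)

L = np.linalg.inv(np.eye(4) - A)     # multi-regional Leontief inverse
Omega = np.diag(w) @ L @ Y           # water embodied in each country's final demand

# Aggregate producing rows by country to get country-by-country water flows.
agg = np.array([[1, 1, 0, 0],        # rows of country 1
                [0, 0, 1, 1]])       # rows of country 2
flows = agg @ Omega                  # flows[r, s]: water used in r for s's demand

internal_wf = np.diag(flows)                  # domestic water for domestic demand
vwm = flows.sum(axis=0) - internal_wf         # virtual water imports of each country
vwx = flows.sum(axis=1) - internal_wf         # virtual water exports of each country
total_wf = internal_wf + vwm                  # consumption-based water footprint

print("internal WF:", internal_wf)
print("VWM:", vwm, "VWX:", vwx, "total WF:", total_wf)
```

In the paper, the same operations are carried out on the full 41-region, 35-sector WIOD system, and the EU27 rows and columns of the resulting flow matrix give the internal and external footprints discussed below.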
We compute the change in Ω between t₀ (1995) and t₁ (2009):

ΔΩ = Ω(t₁) − Ω(t₀) = Ŵ₁L₁Y₁ − Ŵ₀L₀Y₀    (3)

Then, we apply SDA to quantify the factors that explain the changes in the water footprint or embodied water of countries from 1995 to 2009. The SDA separates the time trend of a variable into drivers that can act as accelerators or retardants [24][25][26]. Following Dietzenbacher and Los [24], who prove that the simple average of the two polar decompositions is a good approximation of the full set of exact decompositions, we take the average of the two polar forms of (3), which yields:

ΔΩ = ½ ΔŴ (L₀Y₀ + L₁Y₁) + ½ (Ŵ₁ ΔL Y₀ + Ŵ₀ ΔL Y₁) + ½ (Ŵ₀L₀ + Ŵ₁L₁) ΔY    (4)

The first term is the intensity effect (IE), which measures the contribution of changes in water intensities (m³ of water per $) to water use trends. The second term is the technology effect (TE), which links changes in water use with changes in the technology of production. The third term is the scale effect (SE), which quantifies how much of the change in water use is due to changes in final demand.
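As a sketch of how Equation (4) can be computed in practice, the snippet below applies the average-of-polar-forms decomposition to a hypothetical two-sector economy with made-up data for the two benchmark years, returning the intensity, technology and scale effects and checking that they add up exactly to the total change.

```python
import numpy as np

def sda_polar_average(w0, A0, Y0, w1, A1, Y1):
    """Average-of-two-polar-forms SDA of the change W1 L1 Y1 - W0 L0 Y0."""
    n = len(w0)
    W0, W1 = np.diag(w0), np.diag(w1)
    L0 = np.linalg.inv(np.eye(n) - A0)
    L1 = np.linalg.inv(np.eye(n) - A1)
    dW, dL, dY = W1 - W0, L1 - L0, Y1 - Y0

    IE = 0.5 * dW @ (L0 @ Y0 + L1 @ Y1)            # intensity effect
    TE = 0.5 * (W1 @ dL @ Y0 + W0 @ dL @ Y1)       # technology effect
    SE = 0.5 * (W0 @ L0 + W1 @ L1) @ dY            # scale (demand) effect
    total = W1 @ L1 @ Y1 - W0 @ L0 @ Y0
    assert np.allclose(IE + TE + SE, total)        # exact decomposition
    return IE, TE, SE

# Hypothetical two-sector example (values are illustrative only).
A0 = np.array([[0.10, 0.05], [0.04, 0.12]])
A1 = np.array([[0.09, 0.06], [0.05, 0.11]])
Y0 = np.array([[120.0], [90.0]])
Y1 = np.array([[160.0], [110.0]])
w0 = np.array([0.9, 0.3])                          # m3 per $ of output in year 0
w1 = np.array([0.7, 0.25])                         # lower intensities in year 1

IE, TE, SE = sda_polar_average(w0, A0, Y0, w1, A1, Y1)
print("intensity effect:\n", IE)
print("technology effect:\n", TE)
print("scale effect:\n", SE)
```

In the analysis below, each of these effects is further split by the country block (domestic, other EU27 groups, non-EU) in which the corresponding water use takes place.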
As is well known, production can be described as a chain of processes that, departing from primary inputs, generates intermediate inputs used in subsequent processes until final demand is met. This is the basis of vertically integrated production. When this production chain is represented in an MRIO model, different countries and technologies contribute to the generation of the final demand of a country. Therefore, intensity, technological and scale changes along the entire production chain condition the volume of water embodied in a specific final demand. Each of the three previous effects can be decomposed into variations of internal and external determinants. Thus, we explain changes in the European water footprint according to domestic and backward linkages, considering the level of development (high, medium and low income) of the European countries and their commercial partners (EU27 regions and non-EU areas).
We use the MRIO tables provided by the World Input-Output Database (WIOD) [27,28] as the main database. They summarize the economic information on production and commercial exchanges for 35 sectors in 40 countries plus a region called Rest of the World (ROW). Twenty-seven of these countries belong to the EU, whereas the rest are non-European areas (non-EU). As we compare and explain water use in 1995 and 2009, the MRIO table for 2009 is deflated and expressed in 1995 constant dollars. Following the approach developed by Junius and Oosterhaven [29] and improved by Lenzen et al. and Temurshoev et al. [30,31], we apply the Generalized RAS (GRAS) adjustment to obtain a balanced MRIO table. Data on the water used per country and sector have been taken from the Environmental Accounts of the WIOD [32] for the period 1995-2009. We utilize the information on green and blue water, which is aggregated for the presentation of the results [12]. Thus, the WIOD database provides homogeneous economic and environmental data that allow calculating the impact on water resources generated through global supply chains for different periods. Despite these advantages, the water use data present some uncertainties that must be acknowledged. As an example, Genty et al. [32] indicate that industrial water use was distributed using information from the EXIOPOL database. Besides, as Timmer and Genty et al. recognize [28,32], agricultural water use was estimated using crop and livestock water intensities from Mekonnen and Hoekstra [33,34] and data on crop production and livestock from the statistics of the Food and Agriculture Organization of the United Nations (FAOSTAT). Hoekstra and Mekonnen [10] acknowledge the uncertainties in their data stemming from the source statistics on production and trade, from precipitation, crop and irrigation maps, and from assumptions, for example, on planting and harvesting dates. However, keeping these drawbacks in mind and being cautious when looking at specific areas and products, these data offer a good approximation for analyzing general trends and making overall comparisons of water impacts at the macro level.
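The GRAS balancing step mentioned above can be sketched as follows. This is a compact illustration of the basic GRAS iteration in the spirit of the Junius-Oosterhaven formulation, using made-up numbers and assuming every row and column contains at least one positive entry; production code would need the edge cases and convergence safeguards discussed in the cited references.

```python
import numpy as np

def gras(A, u, v, tol=1e-9, max_iter=1000):
    """Balance matrix A (may contain negatives) to row totals u and column totals v."""
    P = np.where(A > 0, A, 0.0)        # positive part of A
    N = np.where(A < 0, -A, 0.0)       # absolute value of the negative part
    s = np.ones(A.shape[1])
    for _ in range(max_iter):
        # Row multipliers: solve r_i * (P s)_i - (N (1/s))_i / r_i = u_i
        p_i = P @ s
        n_i = N @ (1.0 / s)
        r = (u + np.sqrt(u**2 + 4.0 * p_i * n_i)) / (2.0 * p_i)
        # Column multipliers: solve s_j * (P' r)_j - (N' (1/r))_j / s_j = v_j
        p_j = P.T @ r
        n_j = N.T @ (1.0 / r)
        s = (v + np.sqrt(v**2 + 4.0 * p_j * n_j)) / (2.0 * p_j)
        X = np.outer(r, s) * P - N / np.outer(r, s)
        if np.allclose(X.sum(axis=1), u, atol=tol) and np.allclose(X.sum(axis=0), v, atol=tol):
            return X
    return X

# Illustrative prior matrix (one negative entry, e.g., changes in inventories).
A = np.array([[10.0,  4.0, -1.0],
              [ 3.0,  8.0,  2.0],
              [ 2.0,  1.0,  6.0]])
u = np.array([14.0, 14.0, 10.0])       # target row sums
v = np.array([16.0, 14.0,  8.0])       # target column sums
print(np.round(gras(A, u, v), 3))
```

Unlike plain RAS, this scheme keeps the sign of each cell, which matters for MRIO cells such as changes in inventories that can legitimately be negative.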
As explained before, in order to have a better idea of the relationship between economic structure, income patterns and water footprints, the EU27 countries are classified into three groups depending on their level of per capita income: low- (EU-L), middle- (EU-M) and high- (EU-H) income countries (see the list of countries in the Supplementary Information). This classification has been made on the basis of the real gross domestic product for 2009 taken from Eurostat. We consider as high income countries France, Germany, Belgium, Finland, the United Kingdom, Austria, Sweden, the Netherlands, Ireland, Denmark and Luxembourg. Middle income areas are mostly Mediterranean countries in the south of Europe: Malta, Portugal, Slovenia, Greece, Cyprus, Spain and Italy. Finally, the low income countries are Eastern European countries and the Baltic republics, namely Bulgaria, Romania, Latvia, Lithuania, Poland, Estonia, Slovakia, Hungary and the Czech Republic. This breakdown is used to analyze the contribution of the different income groups to the size and composition of the water footprint and its evolution, as well as to study the role of intensity, technology, structural change and demand composition in the evolution of European water footprints.
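For reference, this clustering can be expressed as a simple country-to-group mapping and used to aggregate country-level footprints. The lists below follow the text, while the example footprint values are placeholders rather than the paper's results.

```python
EU_GROUPS = {
    "EU-H": ["France", "Germany", "Belgium", "Finland", "United Kingdom", "Austria",
             "Sweden", "Netherlands", "Ireland", "Denmark", "Luxembourg"],
    "EU-M": ["Malta", "Portugal", "Slovenia", "Greece", "Cyprus", "Spain", "Italy"],
    "EU-L": ["Bulgaria", "Romania", "Latvia", "Lithuania", "Poland", "Estonia",
             "Slovakia", "Hungary", "Czech Republic"],
}

def group_totals(country_wf: dict[str, float]) -> dict[str, float]:
    """Aggregate a {country: water footprint} dictionary into the three income groups."""
    return {group: sum(country_wf.get(c, 0.0) for c in members)
            for group, members in EU_GROUPS.items()}

# Illustrative placeholder values (km3), not the paper's results.
example = {"Spain": 80.0, "Germany": 95.0, "Poland": 30.0, "Italy": 60.0}
print(group_totals(example))
```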
Results
The methodology presented above allows us to link the final demand of countries with all the water incorporated in the different steps of the production chain, that is, in the production of all the inputs needed to satisfy the final demand of the EU27 countries, providing an approximation to their water footprint. First, we present the main changes for the EU27 as a whole over the period studied, considering the variations experienced in the internal and foreign components and distinguishing by blocks of countries. Then, in Sections 3.1 and 3.2, these trends are particularized by countries and sectors, respectively. Once the main global, country and sector features have been described, Section 3.3 examines the factors driving the EU27 water footprint (i.e., the results of the SDA presented in Equation (4)), discussing their meaning and significance.
From 1995 to 2009 water use followed a growing trend worldwide. Whereas the rise in the EU27 water footprint represented 7% of the increase at the global level, this region accounted for more than 25% of the rise in the water embodied in imports in the world (see Table 1). On the whole, these countries tended to increase their virtual water imports (from EU27 and non-EU countries), reducing at the same time their internal water footprint. In fact, the water footprint increase in the EU reached 175 km³, with 84% of it corresponding to the growth of water embodied in products coming from countries outside the EU27, 24% to the virtual water exchanged within the region and 6% to the decline in domestic water use. (This growing trend is not conditioned by the selection of two specific years, 1995 and 2009; as shown in Figure S1 in the Supplementary Information, the increase in water use was continuous during the whole period.) These figures indicate that the EU27 countries kept externalizing pressures on water resources during the period, consuming goods produced in foreign countries mostly located outside the borders of the EU27. Within the EU27, 66% of the increase in the water footprint took place in the countries with the highest income. More concretely, the most developed areas were the largest importers of water resources both from EU27 countries and non-EU nations, followed at a distance by middle income areas. Domestic water use, that is, water used to produce goods that were domestically consumed, decreased in high income and especially in low income countries. Nevertheless, it experienced a notable increase in middle income countries. In these regions, mostly Mediterranean areas largely specialized in water intensive activities such as agriculture and the food industry, the water footprint grew both as a result of more virtual water imports and of larger impacts on internal water resources. These results are in line with others in the literature. For instance, Steen-Olsen et al. [16] and Arto et al. [18] found that the EU displaced water pressures to the rest of the world through imports of products, with Spain standing out as the largest net exporter of freshwater in the EU. Similarly, Duarte et al. [35], also for Spain, showed the importance of the water embodied in imports linked to the Spanish agri-food complex.
Trends at the Country Level
Looking at the areas with the highest income, the largest increases in the water footprint took place in Great Britain, France and Germany (see Figure 1), together accounting for about 44% of the total water footprint in the EU27 and mostly associated with agriculture, the food industry, hotels and restaurants, and the health and social work sector. Within this group it is possible to find two different patterns. On the one hand, we observe nations where domestic water use was largely replaced by virtual water imports (Austria, Belgium, Germany, Luxembourg, the Netherlands and Sweden). In most of these countries the largest increase in water imports was driven by water coming from non-EU areas. As an example, large volumes of water were imported from Brazil embodied in agricultural products that were transformed by the German food industry or used in the German hotels and restaurants sector. Besides, the textile sector in Germany also imported water through agricultural raw materials from China. On the other hand, Denmark, Finland, France, Great Britain and Ireland increased their water footprint as a result of the growth of virtual water imports and, to a lesser extent, of domestic water use.
As for middle income areas, embodied water increased in all the countries except Portugal, where water use declined from 1995 to 2009. As shown in Figure 1, the largest water footprint increase took place in Spain (representing 74% of the total water footprint growth in middle income areas), followed by Italy (accounting for 18% of the growth in the water footprint). In this regard, Spain was the only middle income country where both the internal WF and virtual water imports (particularly from non-EU countries) grew. Domestically, Spain used large volumes of water to produce agricultural inputs that were mostly used in the food industry as well as in the hotels and restaurants sector. Water was virtually imported by means of agricultural products mostly from Brazil, China, Indonesia and the ROW. This can be explained by the strong growth of the agri-food industry and its exports in this period. For instance, during these years the meat industries notably expanded, boosting the demand for livestock feed [36] and therefore for the water resources embodied in these goods. Besides, interregional virtual water flows were also important. In this line, Spain, Greece and Italy imported significant volumes of water resources from European low income areas such as Bulgaria and Hungary.
Finally, the increase in water use in low income European countries was driven mostly by Poland, Latvia and Lithuania (Figure 1), representing about 6% of the total water footprint increase in the EU27. Whereas Poland offset the decline in domestic water use by increasing virtual water imports mostly from China and the ROW, Latvia and Lithuania raised both domestic and imported water use, the latter from Poland and the ROW, among others. Quite the opposite, the water footprint fell in Bulgaria, Hungary and Romania between 1995 and 2009 because of the large drop in domestic water use in agriculture and the food industry.
Trends at the Sectoral Level
The proposed MRIO model also offers relevant information when we analyze the composition of water footprints by sector. Nearly 50% of the increase in the total European water footprint was associated with agriculture and the food industry (see Table 2). Water is a necessary input of both rain-fed and irrigated agriculture, the sector with the largest direct water use in the world. Besides, the food industry, a sector that developed strongly in Southern European countries, presents a large footprint given its strong interdependencies with agriculture. Other sectors account for an important share of the increase in the water footprint. This is the case of hotels and restaurants (9%) and health and social work (6%) (according to the United Nations ISIC rev. 3 classification, the "health and social work" sector includes hospital, medical and veterinary activities as well as activities directed to provide social assistance to children, the aged and special categories of persons with some limits on ability for self-care). Both sectors show important backward linkages with water intensive activities such as agriculture and the food industry. This picture is quite similar to the pattern at the global level, but we find some differences. For example, worldwide the increase in water use associated with agriculture and the food industry was higher (65% of the total global water use increase). Besides, the construction sector depicts a notable share (7%), larger than the 3% shown in the EU27.
As observed in Table 2, within the EU it is possible to find some divergences among income groups. Agriculture and the food industry accounted for more than 50% of the total water footprint increase in high and middle income areas. Looking by geographical areas, the share of agriculture in the water footprint is particularly important in the Mediterranean economies, which accounted for 36% of the total water embodied in EU27 demand. Note that the good climatic conditions (sunshine hours, mild climate, etc.) of these regions, together with the development of irrigation, made agriculture a dynamic and export-oriented sector. The case of Spain is surely the best example of how the dynamic performance of agricultural exports has contributed decisively to the increase in a country's water footprint [37,38]. Regarding the food industry, it displays a significant share in the water footprint of high income areas. On the whole, the food industry tended to reduce its domestic backward linkages with the agricultural sector, but increased virtual water imports from other EU regions and particularly from non-EU countries. The hotels and restaurants sector was an important water consumer in both middle and high income areas (8%); its water consumption was indirectly but strongly related to the agricultural sector. The health and social work sector also entailed an increase in water use (8% in high income and 5% in middle income countries). Finally, the textile sector was significant in middle income economies (6% of the total water use increase); we observe a reduction in its linkages with the domestic agricultural sector but an increase in those with the non-EU agricultural sector. The pattern in low income areas was quite different (Table 2). The most significant sectors contributing to the increase in the water footprint were hotels and restaurants (28%), highly related to the domestic agricultural sector; textiles (18%), linked mostly through VWM from non-EU countries; electricity, gas and water supply (18%); real estate activities (13%); and construction (12%). However, agriculture and the food industry decreased their water use from 1995 to 2009. This was particularly important in the case of the food industry, which notably contributed to the deceleration of the water footprint increase, and of the retail trade sector, which also moderated the growth in the water footprint.
Determinants of Water Footprint Trends
Table 3 shows that the scale effect, that is, the increase in demand, was the main driver of the increase in the water footprint during these years. In other words, had all other factors remained constant, the increase in the size of the economies (mostly explained by population and income growth) would have led to even higher demands for domestic and foreign water resources, given the dynamic economic situation of most EU27 countries during the period studied.
This effect is outstanding for all the magnitudes analyzed and has a remarkable incidence on imports from non-EU countries, meaning that the increasing demands in the EU27 were met at the expense of production and water use outside the EU27. The changes in the technology of production also had a positive effect on the water footprint increase, but were negligible compared to the boost of demand. Note that the technology effect computes the effect on water demands of changes in the Leontief inverse, that is, in the structural and technological composition of production. Our results suggest that the contribution of these changes to the water footprint evolution was small, but in any case they also pushed in the direction of rising water footprints. Note that the period analyzed was not characterized by visible and relevant changes in the production structures of the water intensive sectors; in other words, global production structures remained relatively stable in this expansive period, without a significant influence on the changes observed for water demands. Finally, the intensity changes, that is, variations in the water necessary per dollar produced, contributed to a partial levelling off of the water footprint in the EU27. On the whole, the water necessary per dollar of gross domestic product (GDP) decreased, that is, the efficiency or productivity of water use increased. However, this improvement was not enough to make up for the large scale effect. The increasing imports of high income countries contributed notably to the large scale effect, particularly given the rise of VWM from non-EU areas (69%). This was the case of the VWM of the food and textile industries in Germany, France and Great Britain from the Chinese agricultural sector. Again, we find evidence of the externalization of water pressures onto developing areas, driven to a large extent by the growing demands of the most developed areas. Secondly, the scale effect was also determined by the impacts of the increasing domestic demands of European countries, reaching a contribution of 41% (see Table 3). This contribution was distributed among high, low and middle income areas, but was most significant in the least developed areas of the EU27, probably as a result of their lower integration in international markets relative to the wealthiest regions. In this regard, the most significant countries were Romania (because of the water use of the domestic food industry), Poland (due to domestic food industry water use), France (given the water use of the national agriculture and food industry) and Spain (as a result of internal water use in agriculture and in hotels and restaurants). Finally, the virtual water imports of middle income economies from non-EU members were also significant, representing 25% of the total water footprint increase in the EU27. The increases in imports of Spain and Italy from China (as a result of Chinese raw agricultural products used in the Spanish and Italian textile sectors) or the USA (mostly agricultural products used in the food industry) were also remarkable.
As noted before, the technology effect contributed to the increase in the water footprint, but it was insignificant compared to the scale effect. It shows a positive sign in the case of the imports of every European sub-group. That is, the growing imports of water through intermediate inputs processed in European supply chains moderately contributed to the increase in the EU27 water footprint. In this regard, the most important component was the increase in imports of intermediate goods by high income countries from non-EU areas, reaching 21% of the total water footprint increase. It was mostly concentrated in imports of the food sector in countries such as Germany, Great Britain and the Netherlands from Brazil, China and the ROW. Although the technology effect considered as a whole shows a positive sign, some of its sub-components depict a negative sign. This was the case of the domestic technology effect (−25% of the total water use increase). This means that technological changes in the production of intermediate inputs in Europe contributed to a partial levelling off of the growing trend seen for the water footprint between 1995 and 2009. This effect was particularly relevant in low income economies (−17%), especially in the food industry and agriculture of Poland and Romania.
Finally, Table 3 shows that the intensity effect was the factor that moderated the water footprint increase. Without these efficiency improvements, the water footprint would have grown even more (by as much as 136 km³). This effect was chiefly associated with the decreasing intensity (growing efficiency) of the products imported by high income EU countries such as Germany, France or Great Britain from non-EU areas like Brazil, China and India (−35% of the total water use increase). Besides, high income (France) and low income (Poland and Romania) areas also produced domestically using less water per unit of GDP, especially in the agriculture and food sectors (−23% of the total water use growth). On the contrary, middle income areas such as Spain showed a small but positive domestic intensity effect (1% of total water use).
Discussion and Conclusions
From 1995 to 2009 the water footprint increased notably in the EU27. The sustained European economic growth over this period, which on average reached 2% per year, induced intensive demands for water resources throughout global supply chains. That is, the growing final demands of the EU27 countries boosted production but also water resource depletion worldwide. As an example, the water used to meet European final demands was three times the volume of water resources directly used in production activities. This gap can only be explained by looking at the external components of the water footprint, that is, virtual water imports.
The objective of this paper has been to analyze the water use changes in the EU27 during a period of economic expansion. We acknowledge the multisectoral character of the economies, the increasing role of trade in economic growth and the need to combine the production and consumption perspectives to better address the relationship between production activities, pressures on natural resources and the final destination of goods and services. To that aim, we make use of an environmentally extended input-output approach to obtain the changes in the volume of water embodied in domestic production and in trade flows at the sector and country levels. Moreover, we use a structural decomposition analysis clustered by income blocks to evaluate the factors responsible for water footprint changes. With this technique we evaluate to what extent changes in the size and economic composition of the EU27 countries have contributed to the increase in water demands.
Our results show the large positive impact of demand growth in that period, the positive but negligible effect of structural change, and the decline in water intensity. Moreover, these findings also point to a gradual substitution of domestic water use by virtual water imports. More specifically, the growth in embodied water was mostly driven by the important upsurge of virtual water imports from non-EU countries. In high- and low-income areas the use of domestic resources was significantly replaced by water embodied in products coming from abroad, involving a significant externalization of water pressures. However, in middle-income countries both the internal and the external water footprint rose, given their relatively important agri-food-based character, intensive in the use of water resources and with important connections with other economic activities. The water footprint growth was largely explained by countries such as Spain, the United Kingdom, Germany and France. On the whole, the strong linkages of agriculture with the food industry as well as with other sectors such as hotels and restaurants and textiles can explain this growing WF. In this context, despite notable improvements in water productivity (less water was needed to produce a dollar of GDP), this effect was not enough to make up for the great boost of domestic and foreign demands, which were the main drivers of the growing trend of the EU27 water footprint.
Accordingly, European countries tended to consolidate the externalization of pressures on water resources from 1995 to 2009. The food industry reduced its backward linkages with the domestic agricultural sector in most countries, increasing the water embodied in agricultural imports from non-European regions. In other words, we observe a progressive substitution of domestic inputs along the supply chain. These areas imported primary agricultural inputs that were processed or used in other sectors placed higher in national and international supply chains. This also has important implications in terms of the dependence of European countries on foreign water resources, as they indirectly assume the risk of any environmental, economic or institutional shock affecting their main commercial partners. Other Mediterranean areas such as Spain kept importing foreign water resources, but also exerted significant impacts on local resources, chiefly as a result of the growing importance of the agri-food complex in its economic structure, notably boosted by exports of high value added and water intensive agri-food products from the mid-1990s [39]. In Spain, the relatively good climatic conditions (sunshine hours, mild temperatures, etc.), despite the associated aridity, and the hydraulic infrastructure to store and distribute water resources have also favored the development of the agri-food system [40,41], with the associated pressures on water. Multi-regional input-output models appear as an important methodological framework to identify and distinguish responsibilities for the use of natural resources at the macro level. In our view, it is essential to go beyond indicators of direct water use, utilizing measures that provide an overall perspective on the water use of countries and sectors from the consumption approach. In this line, our analysis offers a comprehensive assessment of the impact that the changes in economic activity in the EU27 had on domestic and foreign water resources and of the way in which these effects are transferred through international supply chains. The results also suggest different patterns of water demand in Europe and a quite different composition of the global supply chains associated with agricultural and food production, with relevant implications for water pressures worldwide. In consequence, the extension of the analysis to different clusters of countries and sectors (according to different socioeconomic and environmental criteria) and a deeper study of the agri-food supply chains in Europe are clear lines for future research.
Table 3. SDA of water use changes in the EU27, 1995-2009. Source: own elaboration from WIOD data. (Table values not reproduced here.)
Improved Sobriety Rates After Brain-Computer Interface-Based Cognitive Remediation Training
Up to 80% of individuals seeking treatment fail in their attempts at sobriety. This study investigated whether 1) a cognitive remediation therapy (CRT) program augmented with a brain-computer interface (BCI) to influence brain performance metrics would increase participants' self-agency by restoring cognitive control performance; and 2) that ability increase would produce increased sobriety rates, greater than published treatment rates. The study employed a retrospective chart review structured to replicate a switching replication methodology (i.e., waitlist group) using a pre-test and post-test profile analysis quasi-experimental design. Participants' records were organized into treatment and non-treatment groups. Adult poly-substance users were recruited from alcohol and other drugs (AOD) use outpatient programs and AOD use treatment centers in the United States. Participants volunteered for pre- and post-testing without treatment (n = 121) or chose to enter the treatment program (n = 200). The treatment group engaged in a 48-session BCI/CRT augmented treatment program. Pre- and post-treatment measures comprised 14 areas from the Woodcock-Johnson Cognitive Abilities III Assessment Battery. An 18-month follow-up assessment measured maintenance of sobriety. After testing the difference for all variables across time between test groups, a significant multivariate effect was found. In addition, at 18 months post-treatment, 89% of the treatment group maintained sobriety, compared to 31% of the non-treatment group. Consistent with addiction neurobehavioral imbalance models, traditional treatment programs augmented with BCI/CRT training, focused on improving cognitive control abilities, may strengthen self-control and improve sobriety rates.
Introduction
Alcohol and other drugs (AOD) dependence and its associated mental health disorders are among the most severe health, economic, and social problems facing the United States [1]. According to the WHO, the cost of AOD dependence worldwide is in the trillions of dollars, with an estimate of over $700 billion in the United States alone [1]. However, the economic costs are not the only costs involved. Social ramifications are significant when families are torn apart. AOD use adversely affects children, spouses, parents, relatives, and other relationships.
According to the National Institute on Drug Abuse (NIDA), addiction is a chronic condition characterized by compulsive cravings, drug-seeking, and drug use that persist despite adverse consequences. Moreover, addiction can reoccur after long periods of abstinence [2][3][4]. Addiction is a natural, neural adaptation process consequential to drug use, resulting in an inability to make mature decisions regarding drug use, and requires repeated and persistent treatment [2][3][4]. Although overcoming substance use is one goal of therapy, returning people to productive functioning within the family, workplace, and community is a more compelling and longer-lasting goal.
Meta-analyses of AOD treatment program outcomes report that the average short-term abstinence rates are 20% for untreated individuals, compared with 40% for treated individuals [2][3][4]. Overall, these reports suggest that treated individuals achieve higher short-term remission rates than untreated individuals. However, these figures also indicate that 60%-80% of individuals who seek treatment fail in their quest to maintain sobriety. Current AOD treatment models address addictive behaviors with a wide range of treatment modalities, including different forms of psycho-education, traditional therapy, pharmacology, 12-step recovery programs, or some combination thereof. However, outcome reports that include treatments focused on brain recovery or actual brain repair of self-regulation abilities are absent from the addiction treatment literature; this lacuna is the focus of this study.
Neurobiological models of addiction seek to broaden the understanding of addiction as a brain disease. These models integrate classic psychological models (such as dual-process theory) with neurobiological responses. According to dual-process theory, individuals learn social rules, which are handled by a reflective system in the brain to control impulsive responses [5][6][7][8][9][10]. Dual-process theory research suggests that addictive behavior results from an imbalance between two independent, interacting neural systems that control decision making. These systems include a reflexive or automatic system used in signaling immediate pain or pleasure responses (i.e., the reward motivation system) mediated by mesolimbic dopamine circuitry and a reflective system used to evaluate the long-term choice effects (i.e., the executive control system) which is located in the prefrontal and parietal networks [5][6][7][8][9][10].
According to the dual-process theory, vulnerabilities in these two systems contribute to the development and maintenance of AOD addiction behaviors [8]. For example, the brain's neural focus on high levels of reward motivation likely increases one's inclination toward drug experimentation/use, whereas weakness in executive control is related to the progression of AOD use and compulsive forms of drug use [8][9][10][11][12]. This process is understood to occur in the following sequence: 1) AOD use desensitizes the brain's reward circuits, dampening the ability to feel pleasure and reducing the motivation to pursue everyday activities; 2) conditioned responses to AOD use and stress reactivity increase, which increases cravings for AOD and negative emotions when these cravings are not sated; and 3) brain regions involved in executive functions (e.g., decision making, inhibitory control, and self-regulation) weaken. In combination, this neurobehavioral imbalance/progression leads to repeated relapse. From a neurological systems perspective, this process likely results from a hypoactive prefrontal-mediated executive control system that fails to adequately control a hyperactive striatal reward system [10][11][12]. In an attempt to confirm this premise, Khurana A et al. compared youths with high impulsivity and sensation-seeking characteristics to those with high and low executive control abilities [13]. They found that weak executive control and heightened reward-seeking predicted the early progression of drug use. Conversely, increased reward-seeking, balanced by a strong executive control system, predicted only occasional experimentation [13].
Implications for breaking the relapse cycle
Individuals early in the recovery process are faced with multiple situations every day in which they must choose to remain sober. Many of these situations compel addicts to maintain a strong sense of self-agency to break non-sobriety supporting habits. As Khurana A et al. reported, habit-breaking abilities require not only good intentions but also robust and resilient cognitive functioning to exercise cognitive control and cultivate new sobriety behavioral habits [13].
According to Chatham CH et al., habit-breaking skills are acquired by progressing through four transitional stages in which individuals learn new skills and then integrate those skills into their daily lives [14,15]. To successfully break the relapse cycle, recovering individuals must effectively navigate through all the transitional stages of recovery. Thus, one must not only intend to remain sober and understand the environmental context of the relapse cycle but also be cognitively equipped to exercise volitional control when that control is needed. Each transitional stage requires the recruitment of different sets of cognitive functions to acquire and execute new skills. The rate of learning, the ability to retain a new skill, and the execution of this skill depend on the learner's health and the functional strength of his or her cognitive function. Unfortunately, many cognitive functions are significantly compromised for many individuals in AOD addiction recovery [14][15][16][17].
The capacity and performance of an individual's executive control capabilities dynamically vary in the moment, based on one's current cognitive load, stress level, and resilience to stress [17][18][19][20]. For individuals in recovery, low capacity and low stress-resilient cognitive function increase the risk of making poor decisions. As supported by dual-process theory and as evidenced by addiction studies [14][15][16][17], unless the individual in recovery can maintain strong reflective abilities (including the abilities to learn, integrate, and self-monitor) and has the neural resilience to withstand daily stress, that individual will remain at risk for relapse. From a brain perspective, the functional strength, health, and ability of the executive control functions are critical to ongoing success.
Strengthening cognitive control abilities
Cognitive remediation therapies (CRTs) fall within the class of cognition-based strengthening interventions [20][21][22]. Many forms of CRT interventions have been applied successfully among individuals with acquired CNS disorders, including traumatic brain injury, stroke, mental health issues, depression, substance use, and neurodegenerative conditions [20][21][22]. The brain-behavior relationship and the mechanisms of injury, disease, and recovery inform these therapies. Such interventions reflect two broad conceptual frameworks of functional brain recovery: compensatory and restorative approaches [22,23]. Compensatory interventions focus on translating underlying neuropsychological impairments into environmental adaptations, thereby enabling participation in daily life. The primary goal of compensatory approaches is to help individuals achieve real-world objectives and participate in activities that might be blocked by unrecoverable cognitive impairments.
Conversely, restorative approaches use repetitive exercises, similar to the exercises in standardized cognitive abilities tests, to restore dysfunctional cognitive functions (e.g., attention, organization, memory, reasoning, and problem solving). Restorative CRT strengthens underlying neuropsychological impairments located within the brain rather than teaching compensatory or adaptive skills [22,23]. Increased brain activation likely occurs by a progression of synaptic growth and repair generated by repeated practice or the stimulation of specific neuropathways. Supporting evidence for this approach includes a recent functional MRI (fMRI) study that exhibited increased memory-related brain activation following cognitive training in several brain regions in individuals at high risk for dementia due to mild cognitive impairment (MCI) [22,23]. The restorative methods used in this study have been applied successfully to patients with schizophrenia, substance use, or brain injuries, children and adults with ADHD, and for the cognitive deficits associated with major depression [21][22][23].
Materials And Methods
The study design employed a retrospective chart review methodology to formulate results derived from participants who had previously participated in a brain-computer interface (BCI)-augmented CRT program as a component of their AOD use recovery treatment program. In addition, this study used a profile analysis quasi-experimental design, using participant retrospective records arranged into two non-randomized groups: control and treatment, to explore treatment effects.
Participant records were structured with dependent pre- and post-test sampling in both groups. Profile analysis is an application of multivariate analysis of variance (MANOVA) in which several dependent variables (DVs) are measured on the same scale [24,25]; the more common application is one in which subjects are measured repeatedly on the same DV. Profile analysis offers a multivariate alternative to the univariate F test for the within-subjects effect and its interactions. The analysis asks whether the two groups have the same pattern of means on the subscales.
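As an illustration (not the authors' actual analysis code), one common way to operationalize this two-group parallelism question is to compute per-subtest gain scores (post minus pre) and compare the groups' mean gain vectors with Hotelling's T². The sketch below uses simulated data with the study's group sizes and 14 subtests; all values are invented.

```python
import numpy as np
from scipy import stats

def hotelling_t2(gains_a, gains_b):
    """Two-sample Hotelling's T^2 on vectors of per-subtest gain scores."""
    na, nb = len(gains_a), len(gains_b)
    p = gains_a.shape[1]
    mean_diff = gains_a.mean(axis=0) - gains_b.mean(axis=0)
    # Pooled covariance matrix of the gain scores
    S = ((na - 1) * np.cov(gains_a, rowvar=False) +
         (nb - 1) * np.cov(gains_b, rowvar=False)) / (na + nb - 2)
    t2 = (na * nb) / (na + nb) * mean_diff @ np.linalg.solve(S, mean_diff)
    # Convert T^2 to an F statistic with (p, na + nb - p - 1) degrees of freedom
    f_stat = (na + nb - p - 1) / (p * (na + nb - 2)) * t2
    p_value = stats.f.sf(f_stat, p, na + nb - p - 1)
    return t2, f_stat, p_value

# Hypothetical data: 14 WJ-III subtests, pre and post, for each group.
rng = np.random.default_rng(0)
pre_t, post_t = rng.normal(100, 15, (200, 14)), rng.normal(106, 15, (200, 14))
pre_c, post_c = rng.normal(100, 15, (121, 14)), rng.normal(101, 15, (121, 14))

t2, f_stat, p_value = hotelling_t2(post_t - pre_t, post_c - pre_c)
print(f"T2 = {t2:.2f}, F = {f_stat:.2f}, p = {p_value:.4f}")
```

A statistical package's repeated-measures MANOVA would add the overall level and flatness tests of a full profile analysis, but the group-by-time interaction tested above is the comparison the study's design hinges on.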
Both treatment and non-treatment group participants were concurrently involved in some form of traditional addiction recovery therapy, either through a residential treatment center or an outpatient program. Each participant in the treatment group received an individualized program designed to address neurobehavioral imbalances in executive function. Targeted treatment variables focused on remediating deficiencies observed in participants' cognitive control, memory, attention, and executive function. Neurobehavioral imbalances were addressed using an advanced form of CRT employing a BCI method to adjust CRT training activities in real time, based on the cognitive information processing strength associated with each imbalance [15,21,26].
Participants
Participants were adults (aged 18 or older) who were poly-substance users recruited from AOD use outpatient programs and AOD use treatment centers across the United States. Data were collected from 2012 to 2016, with follow-up data regarding maintenance of sobriety collected through 2018. All records were deidentified to protect the anonymity of individual health information. At the request of the treatment centers from which participants were recruited, records were included only when individuals had accrued a minimum of 60 days of sobriety for the treatment group, or 120 days or more for the waitlist group. The participants had been poly-substance users for an average of 17 years and had an average of 10 residential treatment program failures. Participants were matched with regard to age, education, and gender. Treatment and non-treatment group record selection was based on a deliberate self-selection convenience sample method in which participants either volunteered for pre- and post-testing without treatment or chose to enter the treatment program. The treatment group was tested before treatment and upon treatment completion.
The treatment group was composed of 200 participant records (n = 200; 100 males and 100 females); the non-treatment comparison group included 121 records (n = 121; 61 males and 60 females). The following exclusion criteria were used for all groups: 1) <60 days of sobriety; 2) a history of severe traumatic brain injury with a loss of consciousness of >30 minutes; and 3) histories of schizophrenia, bipolar disorder, or obsessive-compulsive disorder. All participants provided written consent to participate in the study.
Participants' records were divided into a non-treatment group and a treatment group. Each group received the same pre-test.
Experimental pre- and post-test measures
To support a profile analysis of the effect of treatment status (no treatment or treatment) on cognitive ability, participants were measured on 14 subtests of the Woodcock-Johnson Test of Cognitive Abilities III (WJIII) [27]. The WJIII is a set of cognitive ability subtests based on the Cattell-Horn-Carroll theory (CHC) of cognitive abilities. The CHC theory provides a comprehensive framework for understanding the structure of cognitive information processing abilities. The 14 subtest areas were: iQT (fluid intelligence), thinking efficiency, concept formation, working memory, numbers reversed, visual-auditory learning, visual-auditory learning-delayed, verbal ability, phonemic awareness, verbal comprehension, incomplete words, sound blending, spatial relationships, and visual matching. The grouping variable was for BCI/CRT treatment vs. non-treatment.
Tracking sobriety and social reintegration rates
For this study, sobriety was defined as maintaining abstinence from any form of substance use. Social reintegration was defined as maintaining financial independence (i.e., living on one's own and supporting oneself through work, or being in school). The records of random sets of treatment (n = 50) and non-treatment (n = 50) participants at 18-month follow-up interviews were reviewed to track the integrative effect of the program. In addition, answers to three questions were recorded: 1) how long participants had maintained sobriety; 2) the status of their current living situation; and 3) their work status.
Procedure and training
The CRT training method used in this study was implemented through a set of training tools composed of a collection of working memory and executive function activities routinely employed by the primary author in clinical settings to address brain-based deficiencies, referred to by clients and staff as the NeuroCoach program (NTLGroup Inc., Scottsdale) [21,26]. Each activity was designed to develop cognitive functional capacity within a chosen cognitive ability (e.g., auditory working memory capacity, impulse control on go/no-go tasks, or cognitive flexibility with variations of modified Stroop activities) and to develop resilience when encountering stress. Resiliency was enhanced by demanding greater performance under a larger, more demanding cognitive load, varying working memory demands according to performance in conjunction with changing response time constraints. In addition, an EEG BCI interface was used to monitor and adjust cognitive loads based on previously identified EEG protocols of addictive drive mechanisms and working memory cognitive load, both of which were used to influence activity presentation [21,26,28,29].
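The specific adaptation rules belong to the NeuroCoach program and are not described in detail in this paper. Purely as an illustration of the general idea of BCI-driven difficulty adjustment, the hypothetical sketch below raises or lowers a task's difficulty level from recent accuracy and a normalized workload index; the thresholds, names and signal are invented for the example and are not the program's actual protocol.

```python
from dataclasses import dataclass

@dataclass
class TrialResult:
    correct: bool
    workload: float   # hypothetical normalized EEG workload index, 0..1

def adjust_level(level: int, recent: list[TrialResult],
                 lo: float = 0.3, hi: float = 0.85) -> int:
    """Raise difficulty when performance is good and measured workload is low;
    lower it when errors accumulate or workload stays near the ceiling."""
    if len(recent) < 5:
        return level                  # wait for a minimum window of trials
    accuracy = sum(r.correct for r in recent) / len(recent)
    mean_load = sum(r.workload for r in recent) / len(recent)
    if accuracy >= 0.9 and mean_load < lo:
        return level + 1              # under-challenged: add working memory load
    if accuracy < 0.6 or mean_load > hi:
        return max(1, level - 1)      # overloaded: back off to protect engagement
    return level                      # keep difficulty in the productive middle band
```

In the actual program, the cited EEG protocols [21,26,28,29] rather than a single scalar index would drive decisions of this kind.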
Participants sat in front of a computer screen and performed tasks derived from the WJIII battery, presented by the EventIDE task management program (OkazoLab, Delft, The Netherlands). All training group participants completed 48 training sessions (approximately 30-40 minutes per session) before re-evaluation. Immediately after the initial evaluation, the training group used the remediation program three times per week for eight weeks (approximately 30-40 minutes per session); these participants were then reassessed. The training group also participated in the traditional addiction therapy program provided by their residential treatment center or outpatient program. The non-training group did not participate in the remediation program but continued with the traditional addiction therapy provided by the residential treatment center or outpatient program.
Results
The mean age of participants was 34 years, ranging from 24 to 44 years. Group means were used for data screening. All participants had complete data sets (i.e., no missing data). No univariate or multivariate outliers were detected at p = 0.001, and assumptions regarding normality of sampling distributions, homogeneity of variance and covariance matrices, linearity, and multicollinearity were met. Testing the difference for all variables across time between test groups revealed a significant multivariate effect (Table 1). Thus, the results imply that participants' measured cognitive abilities in the treatment group increased significantly more across test administrations than those in the non-treatment group. Table 2 displays the eta-squared coefficients, revealing that treatment group status accounted for between 10% and 53% of the variance in change across time, depending on the measure. Figure 1 displays the estimated marginal mean scores for each group across test administrations. The set of changes in pre/post marginal mean scores across each tested WJIII domain constitutes a profile for each treatment group (treated vs. untreated).
Effects on sobriety and social re-integration
A random set of treatment and non-treatment participants was followed for 18 months to track the integrative effect of the program. At the 18-month follow-up assessment, 89% of the treatment group had maintained sobriety, and 98% had transferred to sober living facilities and maintained an independent residence. Conversely, the sobriety rate of the non-treatment group was 31%, which is consistent with the sobriety rates reported in the literature [2][3][4]. The 89% abstinence rate in the treatment group thus marks a substantial improvement over the 20%-40% sobriety rates reported in the literature.
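For context, the sketch below compares the reported sobriety proportions with a two-sample proportion test; the exact follow-up counts are not given in the text, so the numbers are approximate reconstructions assuming the 50 reviewed records per group.

```r
# Illustrative sketch only: comparing the reported 18-month sobriety rates.
# Counts are approximate reconstructions assuming 50 reviewed records per group;
# the article reports percentages (89% vs. 31%), not raw counts.
sober <- c(treated = round(0.89 * 50), untreated = round(0.31 * 50))
total <- c(treated = 50, untreated = 50)
prop.test(sober, total)   # chi-squared test with a 95% CI for the rate difference
```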
Discussion
Individuals in recovery exhibit persistent neurophysiological deficits affecting cognitive performance. For example, regarding cognitive control, abstinent cocaine users show reduced metabolism in the left anterior cingulate cortex (ACC) and right dorsolateral prefrontal cortex (DLPFC), along with greater activation in the right ACC [16]. The ACC contributes to two essential aspects of executive control: inhibitory control and performance monitoring [16]. Performance monitoring processes include error detection and conflict monitoring, whereas inhibitory control restrains desired behaviors [30][31][32].
Neuroscience models of cognitive control emphasize that when the ACC detects erroneous or conflicting behavior, a signal is sent to the DLPFC [30][31][32]. The DLPFC modulates and sustains goal-oriented behaviors by influencing top-down cognitive control, directing behaviors away from incorrect, conflict-causing responses and toward correct, conflict-reducing responses [30][31][32]. With regard to addiction and sobriety, these monitoring and modulating processes are valuable for detecting hazardous situations or behaviors that increase the likelihood of relapse [33]. Importantly, previous studies have shown that reduced metabolic activity in these brain regions predicts relapse behaviors in both abstinent and active cocaine users [34][35][36].
In addition, individuals demonstrating healthier ACC activity at the onset of abstinence are less likely to relapse [33][34][35][36]. Equally important, performance scores on behavioral monitoring tasks in conjunction with neuroimaging data (using Stroop and decision-making activities known to activate cognitive control neuronal circuits) predict the probability of completing treatment [37,38]. Thus, cognitive control circuits are reliable targets for relapse prediction and neuronal rehabilitation training. The current study posited that similar tasks help evaluate functional changes in cortical circuits that underlie inhibitory control and the action monitoring of abstinence.
In a pilot study of poly-substance users, Gunkelman and Cripe used EEG-based neurometrics to identify and establish two joint neural factors observed in most addiction cases [26]. Each factor was considered to represent a separate pathophysiologic drive toward addictive behaviors: a) over-arousal of the CNS involving DLPFC disruptions and b) cingulate issues (ACC disruptions and compulsive hyper/hypo foci). After applying EEG phenotype modeling methods [39], the authors derived a standard set of BCI protocols to monitor the EEG responses acquired during CRT training [21,26]. This training targeted executive function and ACC engagement to influence the level of difficulty of the activity [21,26]. The activities included a collection of modified Stroop activities, go/no-go activities, working memory activities, attention-binding activities, and other executive function activities [21]. Cripe has previously detailed the design and development of these training tools [21,26]. The present study employed the BCI-monitoring methodology explained earlier with the addition of executive function and working memory activities. These activities aimed to provide neuroresilience training by varying cognitive load during training. In addition, the study investigated whether 1) BCI-augmented CRT methods can increase participants' cognitive control abilities and 2) this increase may allow recovering participants to maintain sobriety at higher rates than the 20%-40% treatment average.
Conclusions
A BCI-augmented CRT treatment method targeted at strengthening executive self-control abilities showed a significant impact on the treatment group's cognitive abilities and sobriety performance compared with untreated controls. Comparisons of the pre- and post-treatment results between treated and non-treated participants support an inference of a positive treatment effect, suggesting that using a BCI-augmented CRT method increases cognitive control abilities in recovering participants. Furthermore, when considering participants' qualitative sobriety/social reintegration reports, the increased abstinence rates in treated versus non-treated participants raise the possibility that increased executive function abilities help participants maintain sobriety more effectively than the currently published recovery rates of 20%-40% would predict. Nevertheless, the current results only suggest that BCI-augmented CRT training helps strengthen executive self-control abilities, which might improve sobriety rates. The principal limitations of this study were 1) its retrospective design and 2) the fact that participants were paid. Follow-up studies comparing BCI plus CRT versus CRT alone versus no treatment or sham BCI conditions are needed to determine which combination of BCI and CRT treatment methods is most effective.
Author Curtis T. Cripe declares personal fees from NTLGroup, Inc. and intermittently performs paid consultancy and data analysis for Neurologics. To account for this situation, and as described in the text of the manuscript, all data analyzed to assess impact were provided in an anonymized fashion to the data analysis team, none of whom were involved in the collection of raw data. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Polymorphisms within Autophagy-Related Genes Influence the Risk of Developing Colorectal Cancer: A Meta-Analysis of Four Large Cohorts
Simple Summary: We investigated the influence of autophagy-related variants in modulating colorectal cancer (CRC) risk through a meta-analysis of genome-wide association study (GWAS) data from four large European cohorts. We found that genetic variants within the DAPK2 and ATG5 loci were associated with CRC risk. This study also shed some light onto the functional mechanisms behind the observed associations and demonstrated the impact of DAPK2 rs11631973 and ATG5 rs546456 polymorphisms on the modulation of host immune responses, blood-derived cell counts and serum inflammatory protein levels, which might be involved in promoting cancer development. No effect of the DAPK2 and ATG5 polymorphisms on the autophagy flux was observed.
Abstract: The role of genetic variation in autophagy-related genes in modulating autophagy and cancer is poorly understood. Here, we comprehensively investigated the association of autophagy-related variants with colorectal cancer (CRC) risk and provide new insights about the molecular mechanisms underlying the associations. After meta-analysis of the genome-wide association study (GWAS) data from four independent European cohorts (8006 CRC cases and 7070 controls), two loci, DAPK2 (p = 2.19 × 10−5) and ATG5 (p = 6.28 × 10−4) were associated with the risk of CRC. Mechanistically, the DAPK2 rs11631973G allele was associated with IL1 β levels after the stimulation of peripheral blood mononuclear cells (PBMCs) with Staphylococcus aureus (p = 0.002), CD24 + CD38 + CD27 + IgM + B cell levels in blood (p = 0.0038) and serum levels of en-RAGE (p = 0.0068). The ATG5 rs546456T allele was associated with TNF α and IL1 β levels after the stimulation of PBMCs with LPS (p = 0.0088 and p = 0.0076, respectively), CD14+CD16− cell levels in blood (p = 0.0068) and serum levels of CCL19 and cortisol (p = 0.0052 and p = 0.0074, respectively). Interestingly, no association with autophagy flux was observed. These results suggested an effect of the DAPK2 and ATG5 loci in the pathogenesis of CRC, likely through the modulation of host immune responses.
Introduction
Colorectal cancer (CRC) is the third most common cancer in developed countries and the second leading cause of morbidity and mortality in both men and women worldwide [1]. Despite modern advances in the diagnosis, surgery, and treatment of CRC, approximately 40% of patients die because of the disease [2]. Although it is well established that genetic events contribute to CRC pathogenesis [3] and that the combination of these factors with the gut microbiome, diet, environmental or even epigenetic factors provides an unprecedented opportunity to improve CRC diagnosis, disease stratification and the tailoring of treatments [4], the precise molecular mechanisms that lead to CRC development and its progression remain elusive.
Increasing evidence suggests that autophagy, a cellular catabolic degradation pathway, is a central process driving colorectal tumorigenesis and cytotoxic response to chemotherapeutic agents [5]. It has been demonstrated that hypoxic cancer cells use autophagy as a way to obtain additional nutrients and energy for cell survival and expansion [6]. Autophagy has been reported to be deregulated in CRC [7], and autophagy molecules such as Beclin 1, p62/sequestosome and LC3 are overexpressed in a high percentage of colorectal carcinomas [7]. In addition, genetic studies have shown that autophagy-related genes are frequently mutated in colon cancer cells and that positive regulators of autophagy (such as Bif-1) are implicated in the development of various cancers, including colon adenocarcinoma [8]. Autophagy could act as a treatment resistance mechanism prolonging tumour cell survival and it also contributes to the enrichment and survival of CRC stem cells under oxaliplatin treatment [9]. In support of the role of autophagy in modulating response to treatment, the administration of hydroxychloroquine (HCQ), an autophagy inhibitor, was found to enhance the anti-cancer activity of the histone deacetylase inhibitor, vorinostat (VOR), in preclinical models and early phase clinical studies of metastatic CRC [10]. On the other hand, uncontrolled autophagy has been reported to limit inflammation and modulate multicellular immunity processes (affecting macrophages, T and B cells, neutrophils, and dendritic cells) and memory responses, but also cell differentiation and genomic stability, and it can even lead to cell death through different pathways [11]. In this regard, it has been reported that autophagy controls immunity through NLRP3 inflammasome-dependent signals but also through ATG proteins that act independently of the inflammasome [12][13][14]. Despite the findings that point towards an important role of autophagy in CRC development, tumour cell survival and host immunity, the role of this biological process in CRC is not fully understood and might depend on how it is regulated during the course of the disease [11].
Studies on genetic variants in autophagy-related genes and their association with CRC risk may lead to further insight into mechanisms. So far, only a limited number of autophagy-related single nucleotide polymorphisms (SNPs) have been reported to be associated with CRC risk [15,16] and patient survival [17]. Therefore, we aimed to comprehensively evaluate germline variants within autophagy-related genes in relation to CRC risk using four large European cohorts. In addition, because autophagy has been linked to host immunity [17,18], we assessed the functional consequences of the SNPs that showed associations with CRC risk by conducting in vitro stimulatory experiments in a large cohort of healthy donors as well as through the analysis of a large panel of serum inflammatory biomarkers and steroid hormones and the comprehensive characterisation of blood-derived immune cell populations and the autophagy flux status.
Study Populations
This study included 4 large European populations. The discovery study sample included 7998 subjects (4485 CRC patients and 3513 controls) ascertained through the DACHS study conducted in southwest Germany. Demographic and clinical characteristics of recruited CRC patients and healthy controls are shown in Table S1. Briefly, CRC cases were recruited from patients who received in-patient treatment in a hospital of the Rhein-Neckar-Odenwald region due to a first diagnosis of CRC. Controls were frequency-matched according to gender, 5-year age groups, and county of residence, and were then contacted by mail and follow-up calls. Demographic information as well as information on colonoscopies, diet, anthropometry, physical activity, medication (including statins, nonsteroidal anti-inflammatory drugs (NSAIDs), menopausal HRT), reproductive factors, lifestyle factors, and family history was collected during a face-to-face interview by trained interviewers using a standardised questionnaire. To be eligible, participants had to be at least 30 years old and capable of completing the interview. The three other study samples were the CRCGen study, consisting of 948 Spanish CRC patients and 1076 healthy controls (Table S2), and the Austrian CORSA and Czech Republic CCS studies described below [3,[19][20][21][22].
Gene and SNP Selection, Association Analysis, and Meta-Analysis
A total of 234 autophagy-related genes were selected on the basis of their presence in the autophagy database (http://autophagy.lu/index.html, accessed on 13 December 2019; Table S3) and association estimates for all genotyped or imputed SNPs within or near these genes (5 Kb upstream and 3 Kb downstream) were extracted from 4 genome-wide association studies (GWAS) for CRC conducted in the DACHS population between 2003 and 2016. Details about the genotyping platforms used and the number of CRC cases and controls analysed in each study are shown in Table S4. Genotyping, quality control filtering, and imputation protocols used in these studies have been described in detail elsewhere [23][24][25][26]. Altogether, 9767 SNPs in the autophagy-related genes, either genotyped or imputed, were available from GWAS in the DACHS sample. We performed an overall logistic regression analysis adjusted for the first 3 principal components and identified 925 SNPs showing an association with CRC risk at p < 0.10. Of those, 183 SNPs were considered independent according to pairwise linkage disequilibrium information from LDlink (r2 < 0.8; https://ldlink.nci.nih.gov/?tab=home, accessed on 13 December 2019). Using GWAS data from the CRCGen study [3], we conducted a meta-analysis of the DACHS and CRCGen populations for the 183 independent SNPs and used the I2 statistic to assess statistical heterogeneity between the studies. The pooled odds ratio (OR) was computed using the fixed-effect model. A multiple testing significance threshold of 0.00027 (0.05/183 independent SNPs) was applied to the meta-analysis results. After the meta-analysis, the most interesting associations (p < 0.002) were further validated using GWAS data from the CORSA (948 CRC cases and 1076 controls) and Czech Republic CCS (1605 CRC cases and 1633 controls) studies. A workflow diagram of this study is shown in Figure 1.
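To make the pooling step concrete, the sketch below shows an inverse-variance fixed-effect meta-analysis of per-allele odds ratios and the Bonferroni threshold described above; the ORs and confidence intervals are invented placeholders, not the DACHS or CRCGen estimates.

```r
# Illustrative sketch only: inverse-variance fixed-effect meta-analysis of a
# per-allele odds ratio across two cohorts, with invented input values.
meta_fixed <- function(or, ci_low, ci_high) {
  log_or <- log(or)
  se     <- (log(ci_high) - log(ci_low)) / (2 * qnorm(0.975))  # SE recovered from the 95% CI
  w      <- 1 / se^2                                           # inverse-variance weights
  pooled <- sum(w * log_or) / sum(w)
  se_p   <- sqrt(1 / sum(w))
  z      <- pooled / se_p
  list(OR   = exp(pooled),
       CI95 = exp(pooled + c(-1, 1) * qnorm(0.975) * se_p),
       p    = 2 * pnorm(-abs(z)))
}

meta_fixed(or = c(1.15, 1.10), ci_low = c(1.05, 0.98), ci_high = c(1.26, 1.23))

0.05 / 183   # Bonferroni threshold used in the article (183 independent SNPs) = 0.00027
```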
Genotyping of Imputed SNPs in the CORSA and Czech Republic CCS Cohorts
To validate the genotypes of imputed SNPs that showed the lowest p-value in the association analysis (NRG3 rs11196336, DAPK2 rs11635284, EGFR rs2075108, TP73 rs4648553 and ATG5 rs546456), genotyping of the whole CORSA population and a subset of the Czech CCS study (1031 CRC cases and 886 controls) was carried out at GENYO (Centre for Genomics and Oncological Research, PTS Granada, Granada, Spain) using KASPar™ genotyping technology (LGC Genomics, Hoddesdon, UK) or TaqMan® SNP Genotyping assays (Thermo Fisher Scientific, Foster City, CA, USA) according to previously reported protocols [27]. For internal quality control, 5% of samples were randomly selected and included as duplicates. Concordance between the imputed and the genotyped samples for the SNPs analysed was ≥99.5%.
Functional Association of the Autophagy-Related Variants with Immune Responses
In order to determine the functional role of the most interesting SNPs after the metaanalysis of the 4 cohorts (independent SNPs showing a p-value lower than 0.001), we conducted cytokine stimulation experiments in the 500 Functional Genomics cohort from the Human Functional Genomics Project (HFGP; http://www.humanfunctionalgenomics. org/site/, accessed on 13 December 2019), an excellent cohort to determine the influence of genomic variation on the variability of immune responses. The HFGP study was approved by the Arnhem-Nijmegen Ethical Committee (no. 42561.091.12) and biological specimens were collected after informed consent was obtained. We investigated whether any of the SNPs associated with CRC in the meta-analysis of all study populations significantly correlated with levels of 9 pro-and anti-inflammatory cytokines (TNF α, IFN γ, IL1Ra, IL1 β, IL6, IL8, IL10, IL17, and IL22) after the stimulation of whole blood, peripheral blood mononuclear cells (PBMCs) or monocyte-derived macrophages (MDM) from 408 healthy subjects with LPS (1 or 100 ng/mL, Sigma-Aldrich, St. Louis, MO, USA), PHA (10 µg/mL, Sigma, St. Louis, MO, USA), Pam3Cys (10 µg/mL, EMC microcollections, Tübingen, Germany), or CpG (100 ng/mL, InvivoGen, San Diego, CA, USA), but also common bacterial components of the human intestinal microbiota (Bacteroides fragilis and Staphylococcus aureus representing Gram-negative and Gram-positive bacteria, respectively). After log transformation, linear regression analyses adjusted for age and sex were used to determine the correlation of the SNPs with cytokine expression quantitative trait loci (cQTLs). All analyses were performed using R software (http://www.r-project.org/, accessed on 13 December 2019) using custom scripts in the R programming language based on existing functions such as lm (stats). In order to account for multiple comparisons, we used a significance threshold of 0.00046 (0.05/2 independent SNPs within DAPK2 and ATG5 loci × 9 cytokines × 6 stimulants).
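As a concrete illustration of this cQTL model, the sketch below regresses a log-transformed cytokine level on an additively coded genotype with age and sex as covariates, mirroring the lm-based approach described above; the data are simulated and the variable names are invented.

```r
# Illustrative sketch only: the kind of cQTL model described in the text, i.e. a
# linear regression of log-transformed cytokine levels on SNP genotype (coded 0/1/2)
# adjusted for age and sex. The data frame 'cqtl' is simulated, not HFGP data.
set.seed(2)
n <- 408
cqtl <- data.frame(
  genotype = rbinom(n, 2, 0.3),                  # additive allele dosage
  age      = round(runif(n, 20, 70)),
  sex      = factor(sample(c("F", "M"), n, TRUE)),
  il1b     = rlnorm(n, meanlog = 5, sdlog = 1)   # cytokine level after stimulation
)
fit <- lm(log(il1b) ~ genotype + age + sex, data = cqtl)
summary(fit)$coefficients["genotype", ]          # effect of each extra allele

0.05 / (2 * 9 * 6)   # threshold used in the article: 2 SNPs x 9 cytokines x 6 stimulants
```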
Detailed protocols for PBMCs isolation, macrophage differentiation and stimulation assays have been reported elsewhere [28]. Briefly, PBMCs were washed twice in saline and suspended in medium (RPMI 1640) supplemented with gentamicin (10 mg/mL), L-glutamine (10 mM) and pyruvate (10 mM). PBMC stimulations were performed with 5 × 10^5 cells/well in round-bottom 96-well plates (Greiner Bio-one, Frickenhausen, Germany) for 24 h in the presence of 10% human pool serum at 37 °C and 5% CO2. Supernatants were collected and stored at −20 °C until used for ELISA. LPS (100 ng/mL), PHA (10 µg/mL) and Pam3Cys (10 µg/mL), CpG (100 ng/mL), Bacteroides fragilis (NCTC 10584) and Staphylococcus aureus (ATCC 25923) were used as stimulators for 24 or 48 h. Bacteroides fragilis and Staphylococcus aureus were heat-killed for 30 min at 95 °C and 100 °C, respectively. Whole blood stimulation experiments were conducted using 100 µL of heparin blood that was added to a 48-well plate and subsequently stimulated with 400 µL of LPS, PHA (final volume 500 µL) and Staphylococcus aureus for 48 h at 37 °C and 5% CO2. Supernatants were collected and stored at −20 °C until used for ELISA. Concentrations of human TNF α, IFN γ, IL1Ra, IL1 β, IL6, IL8, IL10, IL17, and IL22 were determined using specific commercial ELISA kits (PeliKine Compact, Amsterdam, or R&D Systems), in accordance with the manufacturers' instructions. When values were below or above the detection limit of the ELISA, the corresponding limit was used.
Correlation between Autophagy-Related SNPs and Serum Steroid Hormone Levels
Next, we investigated the correlation of the most interesting SNPs with levels of 7 serum steroid hormones (androstenedione, cortisol, 11-deoxy-cortisol, 17-hydroxy progesterone, progesterone, testosterone and 25 hydroxy vitamin D3) in 279 subjects selected from the HFGP project that did not have hormone replacement therapies or used oral contraceptives. Serum steroid hormone levels were determined by chromatography-tandem mass spectrometry after protein precipitation and solid-phase extraction following previously reported protocols [29]. After log transformation, correlation between steroid hormone levels and autophagy-related SNPs was evaluated by linear regression analysis adjusted for age and sex. The significance threshold was set to 0.0036 considering the number of independent SNPs tested (n = 2) and the number of hormones determined (n = 7).
Correlation of Autophagy SNPs and Blood Cell Counts and Serum/Plasmatic Proteomic Profile
We also investigated the effect of autophagy variants on cell-level variation by using a set of 91 manually annotated immune cell populations and genotype data from the HFGP cohort that included 408 healthy subjects (Table S5). Cell populations were measured by 10-color flow cytometry (Navios flow cytometer, Beckman Coulter, Miami, FL, USA) after blood sampling (2-3 h), and cell count analysis was performed using Kaluza software (Beckman Coulter, v.1.3). In order to reduce inter-experimental noise and increase statistical power, cell count analysis was performed by calculating parental and grandparental percentages, which were defined as the percentage of a certain cell type within the subpopulation of the cells from which it was isolated [30]. Detailed laboratory protocols for cell isolation, reagents, gating, and flow cytometry analysis have been reported elsewhere [29] and the accession number for the raw flow cytometry data and analysed data files are available upon request to the authors (http://hfgp.bbmri.nl, accessed on 13 December 2019). A proteomic analysis was also performed in serum and plasma samples from the HFGP cohort. Circulating proteins were measured using the commercial Olink ® Inflammation panel (Olink, Sweden) that resulted in the measurement of 103 different biomarkers (Table S6). Protein levels were expressed on a log2-scale as normalised protein expression values and normalised using bridging samples to correct for batch variation. Considering the number of proteins (n = 103) and polymorphisms (n = 2) tested, a p-value of 0.00024 was set as the significance threshold for the proteomic analysis.
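The parental-percentage normalisation described above can be illustrated with a toy gating hierarchy; the cell populations and event counts below are invented and serve only to show the calculation.

```r
# Illustrative sketch only: a "parental percentage" is the share of a cell subset
# within its immediate parent population in the gating hierarchy. Counts are invented.
counts <- c(CD45 = 10000, lymphocytes = 3500, T_cells = 2400, CD4_T = 1500)

pct <- function(child, parent) 100 * unname(counts[child] / counts[parent])

parental_pct <- c(
  lymphocytes_of_CD45 = pct("lymphocytes", "CD45"),
  T_of_lymphocytes    = pct("T_cells", "lymphocytes"),
  CD4_of_T            = pct("CD4_T", "T_cells")
)
parental_pct
```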
Impact of Autophagy-Related Variants on the Autophagy Flux
In order to accurately determine the role of autophagy SNPs in modulating autophagy, we investigated their impact on the autophagy flux in a cohort of 41 European healthy donors. For that purpose, we isolated peripheral blood mononuclear cells (PBMCs) from whole blood by density gradient centrifugation using Histopaque®, and we treated them for 2 h with 10 µM of bafilomycin A1 or 10 mM of metformin to inhibit or induce autophagy, respectively. A total of 5 × 10^5 PBMCs were plated in each well for stimulatory and inhibitory experiments and treated with metformin or bafilomycin A1 alone or in combination. Untreated cells were used as experimental controls. After treatment, cells were harvested and protein extraction was performed with 50 µL of lysis buffer (1% NP-40, 500 mM Tris-HCl, 2.5 M NaCl, 20 mM EDTA, phosphatase and protease inhibitors from Roche, at pH 7.2). Twenty (20) µg of total protein were resolved in a 12% SDS gel and transferred to a nitrocellulose membrane for 10 min in a Trans-Blot Turbo transfer system. Membranes were then blocked for 1 h using Tris-buffered saline (TBS) with 0.1% Tween 20 (TBST) containing 5% BSA and incubated overnight at 4 °C with the polyclonal primary antibodies at 1:1000 in 1% BSA (rabbit anti-LC3A/B antibody, Cell Signaling, and mouse anti-Actin antibody, Merck Millipore, Darmstadt, Germany). After washing with TBST, nitrocellulose membranes were incubated with the corresponding secondary antibodies (IgG anti-rabbit for LC3A/B and IgG anti-mouse for Actin). Protein levels were detected after incubation with SuperSignal West Femto Maximum Sensitivity Substrate (Thermo Fisher) or Clarity Western ECL Substrate (Bio-Rad, Hercules, CA, USA). Digital images of the Western blots were obtained in a ChemiDoc XRS System (Bio-Rad) with Quantity One software V4.6.5 (Bio-Rad). Autophagy flux was determined as the difference in the LC3-II/Actin ratio between cells treated or not with bafilomycin A1 and/or metformin, and linear regression analyses adjusted for age and sex were used to determine the correlation between autophagy-related SNPs and autophagy flux values. A significance threshold of 0.0125 was set according to the quotient of 0.05 and the number of SNPs tested (n = 2) and the treatments administered in vitro (n = 2).
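The flux quantification and the genotype regression described above can be sketched as follows; the densitometry values are simulated and the variable names are invented, so this only illustrates the calculation, not the study data.

```r
# Illustrative sketch only: autophagy flux as the difference in the LC3-II/Actin
# densitometry ratio between bafilomycin A1-treated and untreated cells, then a
# linear regression of flux on genotype adjusted for age and sex (simulated data).
set.seed(3)
n <- 41
flux_df <- data.frame(
  genotype              = rbinom(n, 2, 0.4),      # e.g. minor-allele dosage of a tested SNP
  age                   = round(runif(n, 20, 65)),
  sex                   = factor(sample(c("F", "M"), n, TRUE)),
  lc3_ratio_bafilomycin = runif(n, 0.8, 2.0),     # LC3-II/Actin, bafilomycin A1-treated
  lc3_ratio_untreated   = runif(n, 0.3, 1.0)      # LC3-II/Actin, untreated control
)
flux_df$flux <- flux_df$lc3_ratio_bafilomycin - flux_df$lc3_ratio_untreated

summary(lm(flux ~ genotype + age + sex, data = flux_df))$coefficients["genotype", ]

0.05 / (2 * 2)   # threshold used in the article: 2 SNPs x 2 in vitro treatments = 0.0125
```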
Results
This comprehensive association study included a total of 15,076 subjects (8006 CRC cases and 7070 controls) (Figure 1). In the DACHS sample (4485 CRC patients and 3513 controls), association analysis of 9767 genotyped and imputed SNPs in 234 autophagy-related genes yielded 183 independent SNPs (r2 < 0.8) that were associated at p < 0.10. These SNPs were selected for the first meta-analysis of the DACHS and CRCGen samples, comprising 5433 CRC cases and 4589 controls. The meta-analysis revealed the most significant associations with risk of CRC for a single SNP within the NRG3 gene and an LD block including 14 variants in the DAPK2 locus (r2 > 0.90; Table 1 and Table S7). Each copy of the NRG3 rs11196336C allele increased the risk of developing CRC by 17% (p = 1.85 × 10−5), whereas in the DAPK2 locus three SNPs in strong LD increased the risk by 15% (rs11633496, rs11633611 and rs11631973; p = 7.71 × 10−5-7.99 × 10−5; Table 1 and Table S7). Additionally, we found potentially interesting associations for SNPs within the EGFR, TP73 and ATG5 loci. Considering these results, we decided to advance for replication the NRG3 and DAPK2 SNPs, which showed the most significant associations with CRC risk, as well as those associated with CRC risk at p < 0.002 (21 SNPs representing five independent signals after excluding LOC100128105, which was predicted to be a hypothetical protein by the Guide to the Human Genome (www.cshlp.org/ghg5_db/recinfo/87/8750.shtml, accessed on 13 December 2019)).
Data generated in the first replication stage were then meta-analysed with those from the Austrian CORSA and Czech CCS studies, including a total of 15,076 subjects (8006 CRC cases and 7070 controls). Importantly, the meta-analysis of all study cohorts confirmed that carriers of the DAPK2 rs11631973G allele had a significantly increased risk of developing CRC (OR Meta = 1.13, 95%CI 1.07-1.19, p = 0.000022, p Corrected = 0.0041; Table 2 and Table S7). It is worth noting that the meta-analysis of all study cohorts also revealed a potentially interesting association of the ATG5 rs546456 SNP in modulating the risk of developing the disease. Each copy of the ATG5 rs546456T allele additively increased the risk of developing the disease by 8% (OR Meta = 1.08, 95%CI 1.04-1.14, p = 0.00062; Table 2 and Table S7). The associations with the DAPK2 and ATG5 SNPs did not show any population heterogeneity.
Mechanistically, we found that carriers of the DAPK2 rs11631973G allele showed increased levels of IL1 β after stimulation of PBMCs with Staphylococcus aureus (p = 0.0035; Figure 2A) and lower levels of serum en-RAGE (p = 0.0068; Figure 2B), a protein that hampers the spread and virulence of Helicobacter pylori. In addition, we found that subjects harbouring the DAPK2 rs11631973G allele showed slightly increased levels of CD24 + CD38 + CD27 + IgM + B cells (p = 0.0038; Figure 2C), a subset of cells enriched in CRC patients. Although none of the functional data remained significant after multiple testing, these results together with those reporting a correlation between DAPK2 SNPs and DAPK2 mRNA expression in multiple tissues, including oesophagus/oesophageal junction and oesophagus/muscularis (p-values ranging from 7.6 × 10−6 to 2.3 × 10−4; Table S7), pointed to a role of the DAPK2 locus in modulating CRC risk likely through the regulation of host immune responses against components of the human microbiota.
On the other hand, in support of a functional role of the ATG5 rs546456 SNP in modulating disease risk, we found that, after the stimulation of PBMCs with LPS, carriers of the ATG5 rs546456T allele had increased levels of TNF α and IL1 β (p = 0.0088 and p = 0.0076, respectively; Figure 3A,B). In addition, we found that carriers of the ATG5 rs546456T allele tended to have decreased levels of classical monocytes in blood (CD14 + CD16−; p = 0.0068; Figure 3C) and increased levels of serum CCL19 and cortisol (p = 0.0052 and p = 0.0074; Figure 3D,E). No association between the ATG5 rs546456 SNP and autophagy flux was detected (Table S8 and Figure S1). Again, although none of the functional results could be considered statistically significant after correction for multiple testing, these results together with those from the GTex portal demonstrating a correlation of this marker with ATG5 mRNA expression in skeletal muscle tissue (p = 2.7 × 10−9) suggested a weak but still functional role of the ATG5 locus in the pathogenesis of CRC at multiple levels.
Finally, it is also important to mention that the associations of the NRG3, TP73 and EGFR SNPs with the risk of developing CRC in the DACHS and CRCGen cohorts could not be confirmed in the CORSA and/or Czech CCS cohorts, which dismissed the idea of a relevant biological role of these loci in the risk of developing CRC (Table S7).
Discussion
This comprehensive study reports, for the first time, the association of autophagy-related genes with CRC risk. In the meta-analysis of four European cohorts with a total of 8006 CRC cases and 7070 controls, DAPK2 and ATG5 loci were associated with a risk of CRC. Functional characterisation of the SNPs showing the strongest associations revealed no association with autophagy flux, but a microbiome-immunity link in genetically susceptible individuals that might lead to CRC development.
The strongest association was found for the DAPK2 rs11631973 polymorphism within the DAPK2 gene. Each copy of the DAPK2 rs11631973G allele increased the risk of developing CRC by 13%. DAPK2 encodes death-associated protein kinase 2, which belongs to a family of proapoptotic Ca2+/calmodulin-regulated serine/threonine kinases. Although it is thought to be a tumour suppressor in haematological malignancies [32,33], DAPK2, in contrast to other DAPK family proteins, has not been identified as a tumour suppressor in solid tumours. However, in support of its possible role in colorectal tumorigenesis, it has been demonstrated that DAPK2 is involved in the regulation of haematopoiesis, cellular motility [34], and neutrophil differentiation [35]. In addition, inactivation of the DAPK2 gene has been associated with cancer development [36,37]. These studies, along with more recent studies using DAPK2 inhibitors, have opened a new window for cancer treatment. However, data related to the role of this gene in determining CRC risk are sparse. In this regard, our functional experiments showed that PBMCs from carriers of the DAPK2 rs11631973G allele had increased levels of IL1 β after stimulation with Staphylococcus aureus, which led to the hypothesis that the DAPK2 locus, known to be involved in modulating neutrophil and eosinophil function, might influence CRC risk through the upregulation of IL1 β production by granulocytes in response to components of the intestinal microbiota and, thereby, promote chronic inflammation. These findings are in line with those reporting that high levels of neutrophil-derived IL1 β alter the colonic epithelial barrier [38], induce tumorigenesis, and correlate with poor prognosis in solid tumour patients [39]. Likewise, in support of the hypothesis suggesting a role of the DAPK2 rs11631973 SNP in modulating granulocyte function, we also found that carriers of the DAPK2 rs11631973G allele showed decreased serological levels of en-RAGE, a protein encoded by the S100A12 gene and secreted by granulocytes, which has an inhibitory effect on the spread and virulence of Helicobacter pylori. This result suggested that the DAPK2 rs11631973 SNP might also have an impact on CRC risk by determining the immune response against H. pylori infection, a pathogen consistently associated with CRC development [40] among other cancers [41]. Interestingly, we also found that subjects harbouring the DAPK2 rs11631973G allele showed slightly increased levels of CD24 + CD38 + CD27 + IgM + B cells, a subset of transitional B cells frequently found in leukocytes from CRC patients that regulate T cell-mediated proinflammatory responses and correlate with advanced disease stages [42]. Furthermore, in silico data from Haploreg showed that this variant correlates with enhanced promoter activity exclusively in primary neutrophils and that it is located among H3K4me1 histone marks in the rectal mucosa. Data from the GTex portal also showed that the DAPK2 rs11631973G allele correlates with DAPK2 mRNA expression levels in oesophagus/gastroesophageal junction and oesophagus/muscularis, among other tissues. Although none of the functional results remained significant after correction for multiple testing, altogether these results pointed to a role of the DAPK2 locus in CRC pathogenesis through host immune responses.
Another interesting finding was the association of the ATG5 SNP with CRC risk, which remained only marginally significant after multiple testing corrections. In line with our genetic data, functional experiments suggested a role of this locus in the modulation of CRC risk. ATG5 (autophagy related gene 5) encodes a 275 amino acid protein involved in the control of autophagic vesicle formation but also in the mitochondrial response to oxidative damage, T cell differentiation, and immune responses to microorganisms.
Although the role of ATG5 in CRC remains unclear due to conflicting results between in vitro and in vivo studies, it has been reported that the ATG5 locus was lost in more than 20% of CRC patients and that heterozygous or complete deletion of ATG5 led to increased cellular death and tumour burden and enhanced antitumor efficacy of IFN γ [43]. Mechanistically, it has been reported that heterozygous deletion of ATG5 activated EGFR and Wnt/β-catenin pathways in adenomas of Apc(Min/+) mice leading to the enhancement of the IFN γ-dependent inhibition of these pathways [43]. Moreover, more recent studies have suggested that the controversial role of ATG5 in CRC might be due to the compensatory activation of autophagy-related proteins (AKT, RICTOR and mTOR) in response to autophagy inhibition [44], which have also been associated with CRC prognosis [45]. In contrast to the notion of a role of the ATG5 locus in modulating autophagy, our study has suggested that the ATG5 rs546456 SNP might influence CRC risk by modulating host immune responses. We observed that carriers of the ATG5 rs546456T allele showed increased levels of TNF α and IL1 β after the stimulation of PBMCs with LPS and tended to have decreased levels of classical monocytes and increased levels of serum CCL19. These functional results were also in line with those suggesting a role of ATG5 in the modulation of neutrophil-derived IL1 β levels in response to LPS [46], as well as in the regulation of classical monocytes in blood and serum levels of CCL19, a relevant chemokine that plays a key role in the control of CRC cell proliferation, migration and angiogenesis [47]. In line with this notion, previous studies have demonstrated that LPS stimulates the noncanonical inflammasome to induce the production of IL1 β in neutrophils but also other myeloid cells including macrophages, and that this effect on the inflammasome is mediated, at least in part, by ATG5 [48]. Furthermore, it has been demonstrated that autophagy inhibits neutrophil apoptosis and that the siRNA-mediated silencing of ATG5 resulted in accelerated spontaneous apoptosis but attenuated TNF α-induced apoptosis, which suggested a context-specific effect of ATG5 on immune cell survival [49]. Interestingly, we also found a weak correlation between the ATG5 rs546456T allele and increased levels of cortisol in serum, which was in agreement with previous studies suggesting that cortisol is associated with immune deregulation, cancer development, disease progression [50] and more aggressive metastasis [51]. Previous studies have also demonstrated that cortisol, acting synergistically with catecholamines, may facilitate cancer cell growth and potentiate the release of TNF α and IL1 β. Even though neither the genetic association of the ATG5 rs546456 SNP with CRC risk nor functional data remained significant after correction for multiple testing, altogether these results suggest a role of the ATG5 locus in modulating immune cells (probably neutrophils and macrophages) and their function in activating tumorigenic pathways in CRC.
Finally, it is important to mention that this study has both strengths and drawbacks. The major strengths of our study are the comprehensive analysis of inherited genetic variation in 234 autophagy-related genes reported in the autophagy database (http://autophagy.lu/index.html, accessed on 13 December 2019) and the inclusion of four large European populations including a total of 15,076 subjects (8006 CRC cases and 7070 controls). In the meta-analysis including all study cohorts, we had 80% power to detect an odds ratio of 1.12 (α = 0.00027) for an SNP with a frequency of 0.25, which emphasised the feasibility of the study design. Likewise, we comprehensively analysed the impact of autophagy-related SNPs in modulating blood cell counts, steroid hormones, serum and plasma metabolites, and immune responses in a large cohort of healthy subjects. Another important strength of this study was the experimental analysis assessing the effect of autophagy SNPs in modulating the autophagy flux in PBMCs left untreated or treated with metformin or bafilomycin. An important drawback of this study was its multicentric nature, which imposed inevitable limitations such as the impossibility of uniformly collecting mutation profiles (including KRAS G12 but also APC, TP53, EGFR, BRAF, LOH, PIK3CA and TGFBR) for a significant set of patients.
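The quoted power figure can be approximated with a simple normal-approximation check for a per-allele (allelic) test, as sketched below; this rough calculation uses the sample sizes and threshold stated above but will not exactly reproduce the 80% figure, which may have been obtained with a different method or power calculator.

```r
# Illustrative sketch only: approximate power for an allelic test of association,
# using a normal approximation to the log odds ratio. Inputs are taken from the
# text (8006 cases, 7070 controls, OR 1.12, allele frequency 0.25, alpha 0.05/183).
power_allelic <- function(or, maf, n_case, n_control, alpha) {
  log_or <- log(or)
  # expected allele counts in the 2x2 allele table (2 alleles per subject)
  counts <- 2 * c(n_case * maf, n_case * (1 - maf),
                  n_control * maf, n_control * (1 - maf))
  se <- sqrt(sum(1 / counts))          # approximate SE of the log OR
  z_alpha <- qnorm(1 - alpha / 2)
  pnorm(abs(log_or) / se - z_alpha)    # approximate power
}

power_allelic(or = 1.12, maf = 0.25, n_case = 8006, n_control = 7070,
              alpha = 0.05 / 183)
```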
Conclusions
This study reports, for the first time, a functional impact of DAPK2 and ATG5 loci in modulating CRC risk and provides new insights into the functional role of DAPK2 and ATG5 polymorphisms in disease pathogenesis.
Supplementary Materials: The following are available online at https://www.mdpi.com/2072-6694/13/6/1258/s1, Figure S1: Representative Western blot plot of the autophagy analysis, Table S1: Baseline characteristics of the DACHS population, Table S2: Baseline characteristics of the CRCGen population, Table S3: List of selected genes, Table S4: Genome-wide genotyping platforms used to genetically characterise the DACHS cohort, Table S5: Cell types analysed either in whole blood or peripheral mononuclear blood cells, Table S6: Serum and plasma metabolites measured in the HFGP cohort, Table S7: Meta-analysis of all study cohorts, Table S8: Correlation between DAPK2 and ATG5 SNPs and autophagy flux.