id | source | version | text | added | created | metadata
|---|---|---|---|---|---|---|
237467900 | pes2o/s2orc | v3-fos-license | Traumatic brain injury caused by Brazil-nut fruit in the Amazon: A case series
Background: Traumatic brain injury (TBI) represents one of the leading public health problems and a significant cause of neurological damage. Unintentional causes of TBI are the most frequent. However, fruit falling onto the head and causing TBI is extremely rare. In the Amazon region, accidents with ouriços, a coconut-like shell fruit, seem relatively common. However, to the best of our knowledge, this has never been described in a scientific journal before. Therefore, we aim to evaluate a series of TBI caused by this tropical fruit. Methods: This study is a retrospective review of 7 TBI cases due to the fall of ouriços admitted to two tertiary hospitals in the Amazon region from January 2017 to December 2018. The collected data included: age, Glasgow Coma Scale, skull fracture, venous sinus injury, hematoma, surgical treatment, and outcome. Results: All patients were men, with an average age of 38, ranging from 8 to 77 years old. Four out of seven had skull fractures. Five patients developed an epidural hematoma, and two of them had an associated subdural hematoma. Dura mater injury was observed in two patients, and four patients underwent surgery. There was one related death. Conclusion: This case series is the first to describe an unconventional but potentially fatal cause of TBI in the Amazon: the falling of the Brazil-nut fruit. Most patients were diagnosed with mild TBI. Nevertheless, patients may have cranial fractures and epidural hematomas, leading to death when medical assistance is delayed.
INTRODUCTION
Traumatic brain injury (TBI) represents one of the leading public health problems and an important cause of neurological damage, with high morbimortality rates. It occurs in all age groups but is more frequent in young men. [13] TBI may be unintentional (falls, motor-vehicle crashes, being struck by or against an object, and unspecified mechanisms), intentional (assault or homicide and self-harm), or of undetermined intent. [7,22] The unintentional causes of TBI are far more common than the others. Accidentally being struck by or against an object, for instance, was responsible for 15.4% of all TBI emergency department visits, hospitalizations, and deaths in the United States from 2007 to 2013, with most cases occurring among those aged 0-24 years old. [24] Therefore, head injury due to falling objects may represent another important, yet less highlighted, event. [13] Grivna et al. (2013) described that falling objects accounted for 6% of all trauma injuries and that the head was the third most commonly harmed anatomical site, representing 19.5% of the cases. [11,22] One subtype of this unexpected form of TBI is fruit falling from trees. The current data regarding these specific injuries are scarce, and just a few reports describe TBI due to falling coconut fruit. [3,8,16] Mulford et al. (2001) stated that coconut palm-related injury was implicated in 3.4% of all patients presenting to the surgical department in the Pacific Islands. Most patients had fallen from the tree; however, 15.2% (16 patients) had a coconut fruit fall on them. There was no report of death in this series, but two children had severe neurological deficits. [16] In the Amazon Forest, there is the Brazil-nut tree. Its fruit, a coconut-like shell called "ouriço," is much heavier than the coconut from palm trees. Despite recurrent news in local media, [20,21] an extensive literature review showed a paucity of reports on this phenomenon. [9,10,14,15] Therefore, we aim to evaluate a series of TBI cases caused by this tropical fruit in this study.
Despite lying in the middle of the Amazon forest and being isolated from most of the country due to the lack of roads, Manaus is the only city providing specialized care for TBI patients in Amazonas state, [6,12] and Santarém is the city that provides medical assistance to the west of Pará state. However, the countryside's challenging health conditions, marked by the lack of proper accessibility and integrality of care, prevent many victims from being promptly evaluated.
MATERIALS AND METHODS
This study is a retrospective review of TBI cases due to falls of the Brazil-nut fruit - ouriços - admitted to the main trauma hospitals in Manaus (Hospital e Pronto-Socorro João Lúcio Pereira Machado) and Santarém (Hospital Municipal) from January 2017 to December 2018. The clinical data were collected from medical charts and included: age, Glasgow Coma Scale (GCS), skull fracture, venous sinus injury, hematoma, surgical treatment, and outcome. The tomographic findings were reviewed from the digital files of both hospitals. Due to a lack of information regarding skull fractures in the attending neurosurgeon's operative reports, the "skull fracture" data were collected only from the computed tomography (CT) findings. Informed consent was obtained from all individual participants included in the study.
Case 1
A 13-year-old with emesis, dizziness, dysarthria, right-sided paresis, and paresthesia was admitted to our service with an initial GCS of 13. He was initially evaluated at a local hospital in the countryside, where he received analgesics, mannitol, and corticosteroids. After being transferred to Manaus, he underwent a cranial CT, which revealed a left parietal epidural hematoma [Figure 1]. We performed a craniotomy for hematoma drainage and suction drain placement. The time between trauma and the beginning of surgery was approximately 72 h. He was discharged with a GCS of 15 and no alterations in the neurological exam.
Case 2
A 15-year-old boy struck by a Brazil-nut fruit in the occipital region presented with intense headache, emesis, and blurred vision. At the first evaluation in a local hospital, the patient received analgesics and corticosteroids. He was transferred to Manaus and arrived at our service with a GCS of 15. The cranial CT findings were pneumocephalus and an occipital fracture associated with an epidural hematoma [Figure 2]. Approximately 120 h after the trauma, he underwent craniotomy for evacuation of the hematoma. He was discharged with a GCS of 15.
Case 3
A 68-year-old Brazil-nut collector suffered a TBI when a falling ouriço struck the right parietal region. He presented at the local emergency department with confusion, headache, dizziness, dysarthria, paresis, and blurred vision. The referral to our hospital occurred only after 3 days, with a GCS of 15. Head CT showed a fronto-occipital subarachnoid hemorrhage and a right temporoparietal intracerebral hemorrhage [Figure 3]. The treatment was conservative.
RESULTS
In this case series, all patients were men and from the countryside, among whom two were agricultural workers. The average age was 38, ranging from 8 to 77 years old.
Four patients had skull fractures - half of which were in the frontoparietal region. The most common intracranial finding was extradural hematoma, seen in five patients, one of whom had an associated acute subdural hematoma. Furthermore, two patients had dura mater injury and subarachnoid hemorrhage - and one of them also had an intracerebral hemorrhage. Surgical treatment was performed in four patients; in all of them, we performed craniotomy with hematoma evacuation. Finally, the only death occurred in a child admitted late to the emergency department, with a GCS of 3 and no signs of brain stem function. Head CT revealed a large epidural hematoma that was not evacuated. Details are summarized in [Table 1].
DISCUSSION
This case series is the first about TBI caused by falling ouriços. Clinical manifestations may vary, but most patients evolve with total recovery. Epileptic seizures, paresis, and paresthesia are some of the most frequently observed symptoms. It is vital to highlight that the resulting lesions can lead to unfavorable outcomes, including death, [3,8,16] which occurred in one patient of this series. Therefore, this novel mechanism of trauma requires medical acknowledgment. The ouriço's fall is particularly interesting for the neurosurgical field, as neurosurgeons play an essential role in modifying the disease's course.
The Brazil-nut tree (Bertholletia excelsa) reaches 30 m to 50 m in height [26] [Figure 4]. It produces a capsular 20-cm fruit, named ouriço, which contains an average of 15 to 25 seeds - the edible part of the fruit [5] [Figure 5]. Brazil-nut extractivism is a common practice in the countryside of Brazil. Gathered nuts are seen not only as an income source but also as a food delicacy. The fruit has a thick, hard, dark brown surface, with a median weight of 750 g. [23] The ouriços' fall season is from November to March, a period in which many workers become incredibly susceptible to head injuries. [25] To assess the damage an ouriço can cause when hitting a human head, it is imperative to consider some variables, such as fall velocity and angle, fruit density, and diameter. If an ouriço weighing 0.75 kg falls from 50 m, the impact velocity is 31.32 m/s - about 1.4 times greater than the impact velocity of a coconut fruit, as described by Barss et al. (1984). [3] Applying the principle of energy conservation to calculate the impact energy transmitted by an impacting object to a target, the kinetic energy (KE) is approximately 367.875 J. The relationship between the impactor's KE and its diameter is expressed by the blunt criterion (BC), validated for skull fractures by Raymond et al. (2009). [19] In that study, BC = 1.61 represented a 50% chance of skull fracture. On a different note, according to Radi (2013), if we consider a curved surface, an object's impact with KE greater than 50 J is very likely to cause severe or fatal injuries. [1] Hence, a 0.75 kg ouriço would be able to cause severe to fatal injuries by falling from merely 6.8 m.
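As a sanity check on the figures above, the impact velocity and kinetic energy follow from elementary free-fall mechanics. The short Python sketch below reproduces the quoted values; it neglects air resistance, as the original estimate appears to, so the numbers are upper bounds rather than measured data.

```python
import math

G = 9.81        # gravitational acceleration, m/s^2
MASS = 0.75     # kg, average ouriço weight quoted in the text
HEIGHT = 50.0   # m, upper canopy height of Bertholletia excelsa

# Free fall without air resistance: v = sqrt(2*g*h), KE = m*v^2/2
impact_velocity = math.sqrt(2 * G * HEIGHT)          # ~31.32 m/s
kinetic_energy = 0.5 * MASS * impact_velocity ** 2   # ~367.9 J

# Minimum fall height at which the impact energy reaches the ~50 J
# level that Radi (2013) associates with severe or fatal injury.
min_height = 50.0 / (MASS * G)                       # ~6.8 m

print(f"impact velocity: {impact_velocity:.2f} m/s")
print(f"kinetic energy:  {kinetic_energy:.1f} J")
print(f"height for 50 J: {min_height:.1f} m")
```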
Children seem to be a risk group for severe and fatal lesions, since their skulls' resistance to fractures is considerably lower than adults'. For example, the fracture resistance of an adult skull is 11 times greater than that of a neonate. [17] Another important risk group is Brazil-nut collectors, because of their naturally increased exposure to what can be classified as an occupational hazard.
Due to the nature of the injury - an impact on the top of the head - the predominant cranial abnormalities were expected to be fractures and extradural hematomas. Although most patients were admitted with few clinical manifestations, the presence of an epidural hematoma without any focal neurological symptom is not rare. We must remember that these injuries are associated with a lucid interval, after which the neurological condition may deteriorate.
Thus, in patients with a GCS lower than 15 or with a history of high-impact trauma, head CT is mandatory, as well as in children with irritability, subgaleal hematoma, or a history of loss of consciousness after TBI. [18] A paramount issue is that the Amazon comprises a large territorial extent without homogeneously distributed health services and efficient transportation. Moreover, primary care may be flawed, as many patients receive corticosteroids, which are not indicated for TBI patients. [4,6] In addition, patient transfers may take longer due to the lack of roads. Most are transferred to tertiary hospitals by plane or by speedboat, which are not promptly accessible. [6] Conversely, lesions such as the epidural hematoma are emergencies and may cause death within a few hours.
Manaus is the only city of the Amazonas state, [12] and Santarém is the only city in the west of Pará state with attending neurosurgeons. Then, as aforementioned, patients may die during transfer when the accident occurs far from these cities. Therefore, TBI prevention is a cornerstone when it comes to changing the epidemiological panorama. Daufanamae et al. (2016) made some valid recommendations, such as training, recruiting, and retaining occupational therapists in the countryside. This simple solution becomes an ordeal when we consider the associated geographical location, transport costs, travel times, limited health professionals, and lack of opportunities for professional growth. Thus, the prevention plan ought to comprise financial and academic incentives in order to overcome such barriers. [7] Regarding children's and workers' particular susceptibility, Barss et al. (1984) recommend discouraging small children from playing near coconut trees to avoid injuries caused by falling coconuts, locating dwellings away from risk areas, and the use of safety helmets by those who enter coconut plantations. [2] The same measures can be applied to the prevention of injuries from falling ouriços.
This case series has some limitations, mainly due to its retrospective nature, as epidemiological data were often missing or incomplete. However, the most critical clinical data were collected, and the CT scans could be reviewed from the digital records.
Implementation of public health measures, such as encouraging the use of personal protective equipment when collecting ouriços, is essential to prevent these accidents. Furthermore, family orientation programs in at-risk regions should be established. Finally, centers with a prepared team and head CT scanners should be implemented in countryside cities, and rapid transportation to Manaus and Santarém should be guaranteed whenever necessary.
CONCLUSION
This case series is the first to describe an unconventional but potentially fatal cause of TBI in the Amazon: the falling of the Brazil-nut fruit. Although most cases are considered "mild TBI," patients may have cranial fractures and extradural hematomas, responsible for significant morbimortality if proper medical assistance is delayed.
Data availability
The patient history, radiology and other imaging, ECG, histology and morphology, and other types of data used to support the findings of this study are available from the corresponding author on request.
Declaration of patient consent
The authors certify that they have obtained all appropriate patient consent.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest. | 2021-09-11T05:22:13.352Z | 2021-08-09T00:00:00.000 | {
"year": 2021,
"sha1": "a5858dbd31493fa9f79180974e2b51117ad1f586",
"oa_license": "CCBYNCSA",
"oa_url": "https://surgicalneurologyint.com/wp-content/uploads/2021/08/11029/SNI-12-399.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "a5858dbd31493fa9f79180974e2b51117ad1f586",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
230639260 | pes2o/s2orc | v3-fos-license | Effect of Heavy Metals Emissions on Ecosystem of Pakistan
Environmental contamination is one of the significant problems of modern society. Heavy metal pollutants include zinc, arsenic, nickel, cadmium, copper, lead, chromium, manganese, and iron, among others. Their universal bioavailability and accumulation in the food chain cause severe effects on human health. The lives of living species can be preserved by dedicated heavy metal treatments that improve air, soil, and water quality. This review article presents a glimpse of heavy metal contamination in several areas of Pakistan over the past few years, assessing contamination in water (groundwater, surface water, and wastewater), soil, and particulate matter. This pollution affects the quality of drinking water, the ecological environment, and the food chain. The toxicity induced by contaminated water, soil, and air poses a genuine threat to human health. Moreover, technologies to overcome this problem are also reported.
INTRODUCTION
Global environmental pollution is the fastest-growing problem of the modern era. Heavy metals and their compounds play a significant role in threatening our ecosystem through their toxicity. The environment comprises our surroundings and conditions. All flora and fauna, along with abiotic habitats - aquatic, terrestrial, and atmospheric - are greatly affected by the toxicity of heavy-metal pollutants, which disrupt their normal activity once tolerance limits are crossed. Pollution is an undesirable change in the surroundings that directly or indirectly affects the quality of water, air, and soil, which in turn affects the life of living organisms. It can take the form of chemical substances or energy, such as heat, light, or noise (Wong, 2012).
An environment can be contaminated by various types of pollutants that are directly or indirectly released by human activity, and removing them has become a challenge for global society. Contaminants in the form of heavy metals are appraised as naturally occurring compounds that, in contact with the environment, produce hazardous effects and disturb natural activity. They may be inorganic or organic. Environmental pollution has a direct relationship with population. Rapid industrialization, driven by the increasing demands of the community, leads to the introduction of potentially toxic components into the environment, creating an alarming situation for our global society (Hill, 2020).
Inorganic pollutants are introduced by heavy industries such as chemicals, fertilizers, paints, dyes, and oil and ghee. Their vital components include metals, minerals, and salts. The literature reports that these pollutants are natural but are released into the environment by human activities like mine drainage, chemical processes, and metallurgical processes. These pollutants show their toxicity in the surroundings by accretion in food chains (Cd, Cr, Cu, Hg, Mn, Ni, Pb, V, and Zn). Metalloids like As, B, and Sb, non-metals like Se, actinides like uranium, and halogens like iodine and fluorine are also included. Mining activities are a source of toxicity. Some of these elements in trace quantities are essential for health, while an excess of them causes severe diseases (Salomons, 1995).
Organic pollutants, in turn, are mainly produced by biodegradation; anthropogenic activity contributes to this form of environmental pollution. They are associated with food waste, human waste, pesticides, aromatic hydrocarbons, and organochlorine pesticides. These pollutants show high solubility and stability, accumulating in the environment to cause toxic effects (El-Shahawi, Hamza, Bashammakh, & Al-Saggaf, 2010; Van Ael, Covaci, Blust, & Bervoets, 2012).
Heavy metals are naturally present in the earth's crust. Some of them exist in free states, while most are present in the form of compounds. They exhibit various chemical properties, and when they are exposed to the environment during mining processes, they tend to create environmental pollution. Apart from mining, they also enter the environment as by-products during the manufacture of different products, and they are emitted both in elemental and compound form. For example, cadmium is emitted as a by-product of zinc refining; lead is emitted during its mining and smelting activities, from vehicles burning fuels treated with TEL (tetraethyl lead), and from lead paints; and mercury is emitted by degassing of the earth's crust.
Mining activities mostly result in the formation of acid mine drainage (AMD), a phenomenon commonly linked with mining. The literature shows that heavy metals (M) at mining sites are leached and carried by acidic water downstream. They can be acted upon by bacteria and methylated to yield organic forms, such as monomethyl mercury and dimethyl cadmium. This conversion is effected by bacteria in water, in the presence of organic matter, according to the following simplified equation.
M + organic matter --(H2O, bacteria)--> CH3M and (CH3)2M

These organic forms are reported to be highly toxic and to adversely affect water quality by leaking into and polluting underground water sources (Pestemer & Strumpf, 2003; Sarkar, 2002). The United States Environmental Protection Agency (USEPA) has surveyed and reported maximum contaminant levels for heavy metal concentrations in air, soil, and water. The pollution created by heavy metals on the earth's surface, as well as in underground water sources, creates soil pollution, and this pollution increases further when mined ores are dumped on the ground surface for manual dressing. Surface dumping exposes the metals to air and rain, thereby generating much AMD. When agricultural soils are polluted, these metals are taken up by plants and consequently accumulate in their tissues. Animals that feed on such contaminated plants and drink polluted water, as well as marine life that breeds in heavy-metal-polluted waters, also accumulate such metals in their tissues - and in their milk, if lactating. Humans are in turn exposed to heavy metals by consuming contaminated plants and animals, and this has been known to result in various biochemical disorders. In short, all living organisms within a given ecosystem are variously contaminated along their cycles of the food chain (Tyler, 1972).
Similarly, the use of industrial products for household needs also brings people into contact with heavy metals. Mercury exposure occurs through disinfectants, antifungal agents, toiletries, creams, and organometallics; cadmium exposure occurs through nickel/cadmium batteries and artists' paints; and lead exposure occurs through wine bottle wraps, mirror coatings, batteries, old paints, and tiles, amongst others. Infants are more susceptible to the harmful effects of exposure to heavy metals (Duruibe, Ogwuegbu, & Egwurugwu, 2007; Tyler, 1972).
Heavy metals are known as xenobiotics due to their non-beneficial role inside the living body; even minor concentrations are harmful, and they are categorized as toxic metals. Pakistan has been facing major environmental challenges for the last few decades due to fluctuations in its economic and social development. Heavy metals are released from point and non-point sources, severely affect the integrity of our ecosystem, and have a great impact on human health. They are inorganic pollutants that have gained growing attention from the public and scientific community in relation to their toxicity to aquatic organisms and their ultimate effect on human well-being. Heavy metals like iron, copper, chromium, and zinc are essential for metabolic activities but become toxic at higher concentrations, whereas lead and cadmium have no documented role in living organisms. The migration of the population from rural to urban areas in search of a better lifestyle has created a haphazard situation in all of Pakistan's major cities. Municipal authorities have not responded adequately, and other service providers have likewise been affected by this situation and failed to work properly. This situation greatly affects the natural resources of soil, air, and water (Malik & Zeb, 2009).
Heavy metals are produced both naturally and by anthropogenic activities and are present throughout the environment, i.e., in air, soil, and water. Naturally, these metals exist in the form of hydroxides, sulphides, phosphates, oxides, silicates, and organic compounds. The most common heavy metals are lead, chromium, nickel, cadmium, zinc, mercury, arsenic, and copper. Although they occur in trace amounts, even these small quantities are enough to cause severe health problems in living organisms (Herawati, Suzuki, Hayashi, Rivai, & Koyama, 2000).
Some anthropogenic activities in industry, agriculture, mining, and wastewater handling also contribute to their release into the environment. These include automobile exhaust, smelting, and the burning of fossil fuels, which release large amounts of lead, arsenic, copper, zinc, nickel, vanadium, tin, selenium, and mercury. Their amount in the environment increases day by day as industries expand to meet the demands of a growing population (He, Yang, & Stoffella, 2005; Herawati et al., 2000).
MATERIALS AND METHODS
This article reviews the emission sources of heavy metals, such as industries, power plants, and transport systems. Nowadays, environmental contamination is one of the significant problems of modern society. This article also highlights the occurrence, chemistry, and sources of heavy metals (industries, transport, and power plants), their impact on living organisms, and the environmental status of Pakistan with respect to pollution, especially heavy metals. A detailed discussion of their effect on the ecosystem (air, water, and soil) is also provided. The pollution of air, soil, and water cannot be removed at once, but it can gradually be reduced to improve the quality of the environment; with this in view, this paper also explains pollution control techniques. The data presented in this article were compiled from published sources.
Effects of heavy metals on ecosystem
The release of heavy metals severely impacts the quality of soil, air, and water.
Effect on soil
Agricultural soils show high levels of toxic elements due to the application of agrochemicals and sewage sludge. Vehicular emissions, industrial wastes, and wastewater sludge contribute noticeably to heavy metal levels in soils. Inorganic pollutants are the main contaminants of soil, and soil is the ultimate sink for heavy metals released into the environment by anthropogenic activities.
Non-exhaust emissions due to wear and tear of vehicle parts such as brakes, tires, and clutches are an important source of trace metals in the urban environment, as reported by Thorpe and Harrison (2008) and Pant and Harrison (2013). Industrial sources of heavy metal (Cu, Pb, Zn, and Cr) contamination in urban soil include electroplating, petrochemicals, dyes, pigments, ceramics, tanning, and textile industries. Contamination of urban soils by heavy metals is therefore a matter of major concern at local, regional, and global levels owing to its adverse effect on the urban ecosystem (Karim, Qureshi, & Mumtaz, 2015).
The presence of heavy metals in soil destroys the whole ecosystem via contamination of the food chain, and intake of food from such sources puts lives at risk. Soil quality is affected through direct ingestion, food chains, soil acidification, and changes in porosity; even the soil's natural chemistry is altered (Karim et al., 2015; Musilova, Arvay, Vollmannova, Toth, & Tomas, 2016).
Effect on air
Air quality is greatly affected by the rapid increase in industrialization and urbanization. Due to rapid population growth, air pollution has become a serious issue at the global environmental level. Air pollution has a direct relationship with climate change. It mainly arises from unnecessary exhaust gases released from the chimneys of various industries. Greenhouse gases play a significant role in air pollution; their excess in the environment is causally linked to air pollution, and on contact with other gases or molecules they contribute to global warming, which directly or indirectly affects living species. These effects tend to reduce biodiversity, driving species to extinction. A rise in temperature lengthens the summer period and shortens the winter season. Some particulate matter and dust are also reported as air pollutants; they arise naturally via soil erosion, rock weathering, dust storms, and volcanic eruptions, and from anthropogenic activities in industry and transportation (Soleimani, Amini, Sadeghian, Wang, & Fang, 2018).
Effect on water
Water is the main element for life. Freshwater comprises almost 3% of the total water on earth, and only a small fraction, 0.01% of this freshwater, is available for human activities. Unfortunately, even this small proportion of freshwater is facing immense stress due to the rapid increase in population, urbanization, and unsustainable consumption of water in industry and agriculture. Heavy metal transport from industrial sources greatly affects water quality, and improper disposal of industrial waste causes severe disturbance of the ecosystem. Even traces of these metals are toxic enough to cause severe health problems in living species. Their toxicity mainly depends on their interaction with organisms and their biological role inside the living body. Food chains and food webs represent the relationships between organisms; thus, contamination of water directly or indirectly affects all living organisms (Lee, Bigham, & Faure, 2002).
Heavy metal impacts on living organism
The presence of heavy metals in the environment has adverse impacts. These impacts extend across the earth's surface, disturbing entire food chains. Rising temperatures, irregular rainfall, and increased humidity all depend on the emission of toxic compounds into the environment. These pollutants contribute to diarrhea, malaria, asthma, and infections related to malnutrition in the children of the poor. They also increase the failure rate of the nervous, cardiac, respiratory, and reproductive systems, along with hematological and immunological problems (Singh, Gautam, Mishra, & Gupta, 2011).
People exposed to high amounts of heavy metals may experience serious ailments, for example, gastrointestinal and renal toxicity, heart issues, different types of tumors, hematological disorders, depression, tubular and glomerular dysfunction, and osteoporosis. Newborn children, kids, and adolescents are especially vulnerable to heavy metals, which can bring about developmental difficulties and low intelligence quotients. Most countries have framed norms for the allowable limits of heavy metals in food to avoid their consumption (Duruibe et al., 2007; Rai, Gaur, & Kumar, 1981).
Status of environment in Pakistan
In Pakistan, pollution is spreading on a large scale at an uncontrollable rate. The major source of heavy metal release is industrial waste. Our air, soil, and water are all severely affected by the release of heavy metals in the form of gases, liquids, and solids. Soil plays an important role in the economy through agricultural land, and Pakistan earns 75% of its export economy from agriculture. Agricultural soils show high levels of toxic elements due to the application of agrochemicals and sewage sludge. Many pollutants take part in this contamination: vehicular emissions, industrial wastes, and wastewater sludge all have a noticeable impact on heavy metals in soils. Soil pollution can be natural or caused by human activities. Natural pollution includes forest fires, acid rain, and volcanoes, while human activities include mining, waste disposal from industries, fossil fuel combustion, smelting, and sewage. Accidental pollution mainly occurs through disasters such as river floods, nuclear incidents, and landfill leakage. Excessive use of fertilizers and pesticides also causes soil pollution. Mining activities are an important source of toxic elements due to the legacy of contaminated sites. Soil degradation greatly impacts the quality of water and air and contributes to climate change, to a reduction in the soil's capacity to support human communities and ecosystems, and to desertification. Moreover, it can impair human health and threaten food and feed safety. Soil pollution affects the ecological functions of soils (Cachada, Rocha-Santos, & Duarte, 2018; Chabbi, Baati, Dammak, Bahloul, & Azri, 2020).
In Pakistan, a large area is occupied by industries such as chemicals, textiles, garments, paints and dyes, fertilizers, glass and cement, steel, oil and ghee, automobiles, and batteries. These industries pollute the environment by emitting hazardous gases that degrade air quality. The management of waste and global warming are rising challenges of the 21st century, and rapid industrialization multiplies the challenges to be faced. Heavy industries play a significant role in disturbing the natural climate. Reported global emissions include greenhouse gases (GHGs) such as carbon dioxide, methane, and nitrous oxide, along with ferrous and aluminium oxides and large concentrations of chlorofluorocarbons as air pollutants. The concentration of carbon dioxide is much greater than that of the other gases. Almost 65% of global CO2 emissions come from fossil fuel burning, while 11% of the remaining gases are produced by chemical processes and forestry. Due to its high density, CO2 remains in the earth's biosphere and puts human health at risk. With the increasing demands of industrialization to fulfill human requirements, the CO2 concentration is gradually increasing, which is not good for the survival of life. In the case of methane, 10% is produced by paddy cultivation; in paddy fields, methane is produced by the degradation of organic matter via hydrolysis, acidogenesis, acetogenesis, and methanogenesis. N2O also accounts for a major portion of greenhouse gases. It is emitted from agricultural soil by microbial activity, namely nitrification and denitrification, the primary biological processes that utilize inorganic nitrogen compounds and produce 70% of global N2O emissions. Overall, cumulative N2O emissions are incredibly low compared with CH4 emissions, but the global warming potential of N2O is higher than that of methane, making it more significant when considering the mitigation of GHGs (Cachada et al., 2018).
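To make the last point concrete, greenhouse gases are usually compared on a CO2-equivalent basis by weighting each gas's mass with its global warming potential (GWP). The Python sketch below uses the commonly cited IPCC AR5 100-year GWP values (28 for CH4, 265 for N2O); the emission masses are hypothetical placeholders for illustration, not data from this review.

```python
# GWP-100 values: commonly cited IPCC AR5 figures (an assumption here,
# not taken from this review). Emission masses are made-up examples.
GWP_100 = {"CO2": 1, "CH4": 28, "N2O": 265}
emissions_tonnes = {"CO2": 1000.0, "CH4": 10.0, "N2O": 2.0}

for gas, mass in emissions_tonnes.items():
    co2_eq = mass * GWP_100[gas]
    print(f"{gas}: {mass} t -> {co2_eq} t CO2-eq")

# Even a small N2O mass weighs heavily once multiplied by its GWP,
# which is why N2O matters for mitigation despite its low volume.
```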
Heavy industries such as steel, energy, chemicals, and fertilizers play a significant role in the national economy. The steel industry is well known for producing a large amount of CO2 by burning coal. Steel slag and phosphogypsum are also produced as by-products of pyrometallurgical processes. In the stainless steel industry, three tons of waste are produced, mainly comprising slag and dust. Slag contains metal oxides, while dust contains a large amount of gangue, including about 40% iron oxide; other waste contents include hazardous elements such as Cr, Pb, Ni, Cd, Zn, Hg, As, Ag, Cu, Fe, and other platinum-group elements. Similarly, the cement industry accounts for 7% of CO2 emissions. These by-products are produced in millions of tons every year, and the waste load per unit area keeps increasing with ever-increasing industrial growth (Nidheesh & Kumar, 2019).
In the soils of various regions of Pakistan, a large variation in cadmium levels is observed among the studied sites, ranging between 0.02 and 184 mg/kg from normal soil to soil contaminated by mining or other activities. In a study of Sargodha district, the highest concentration of Cd in soil was found to be 6.74 mg/kg, and the higher Cd values in soil suggested a possible risk of Cd entering the higher food chain, reflected by Cd accumulation in forage in the range of 1.14 to 4.20 mg/kg. In the soil of Islamabad Territory, the capital city of Pakistan, and along the dusty road of the Islamabad Expressway, Cd concentrations of 5.8-6.1 and 4.5-6.8 mg/kg, respectively, have been found (Kazi et al., 2006).
Lead: the concentration of Pb is generally well below the tolerance range (50-300 mg/kg) in normal soil receiving sewage sludge. The only exception is the Kohistan region, Gilgit-Baltistan province, where the highest Pb concentration of 103,000 mg/kg (mean 1,753 mg/kg) was detected in soil contaminated by mining activities, against a mean reference soil value of 70 mg/kg. Moreover, heavy metal contamination - especially Pb - in roadside soil is related to traffic density. In Pakistan, Pb concentrations along National Highway-5 range from 12 to 176 mg/kg with a mean of 36.45 mg/kg; the highest concentration of 176 mg/kg was found near the bypass road of Hyderabad city, Sindh province, the fifth-largest industrial city of the country. The variations in Pb concentration in some areas are mainly due to heavy traffic, brick kilns, and the use of leaded gasoline (Manzoor et al., 2004).
Nickel is widely distributed in nature and is abundant in animals, plants, and soil. The concentration of Ni in soil is approximately in the range of 4-80 ppm. A large amount of Ni is released into the atmosphere by natural as well as anthropogenic activities, including fossil fuel consumption, industrial production (mining, smelting, and refining), the use and disposal of nickel compounds and alloys, and waste incineration. In soil samples, the highest concentration of Ni, 172 mg/kg, was recorded at a contaminated Lahore site, against a mean reference value of 70 mg/kg. Moreover, in another study conducted on the soil of the Jhangar Valley, Punjab province, the maximum total Ni content recorded was 81 mg/kg (Manzoor et al., 2004).
Copper is an essential element. According to European standards, the tolerable concentration of Cu in soil is 50-140 mg/kg (for 6 < pH < 7). In various regions of Pakistan, the Cu concentration in soil and dust ranges from <6 to 412 mg/kg. The industrial area of the capital city of Pakistan (Islamabad) shows a total Cu concentration in the range of 8.88-357.40 mg/kg (Manzoor et al., 2004).
Chromium is an important element, especially in the metallurgical/steel and pigment industries. Both of its common oxidation states (+3 and +6) are used primarily in pigments, metal finishing, and wood preservatives. The main source of Cr pollution is dyestuffs and leather tanning, when wastes are discharged directly into waste streams. In soil samples, the Cr content is within the acceptable range of 100-150 mg/kg (Kazi et al., 2006).
Iron is an important element in the human body's metabolism. It acts as a catalyst and is present in greater amounts than any other trace element; it functions as a part of several proteins, including enzymes and hemoglobin. Literature studies on the soil of Pakistan report anthropogenic pressure on soil in terms of heavy metal pollution through wastewater/sludge treatment or industrial activities. In the case of Fe, this pressure build-up does not affect plant growth, as the easily soluble and exchangeable fractions of Fe are very low in comparison with the total Fe content in soil. The range of Fe content in soil from different regions is 1 to 196 mg/kg (Kazi et al., 2006).
Zinc is an essential micronutrient: it acts as a catalyst in enzyme activity, contributes to protein structure, and regulates gene expression. Its deficiency has been recognized for many years, but it can be toxic when exposure exceeds physiological needs. The average zinc content of soils worldwide is estimated to be 70 mg/kg, the same as the average level of Zn in the earth's crust. The standard limit of Zn in soil receiving sewage sludge set by the EU is 150-300 mg/kg. In Pakistani soil, the concentration of Zn in soil/dust varies from >0.1 to 1,193 mg/kg, with the only exception being a contaminated area where the highest concentration, 29,755 mg/kg, was observed. However, in roadside soil along the National Highway in Hyderabad, Sindh province, the Zn concentration varies from 13.8 to 180 mg/kg due to intense traffic (Kazi et al., 2006; Waseem et al., 2014).
In the case of water pollution, increasing water demand cannot be met given the increasing quantity of effluent. Non-functional and poorly operating factories and industries drain their effluents improperly. Of 6,634 registered industrial factories, 1,228 have been declared sources of water pollution. In Karachi, 60% of the area is occupied by industries, and their effluents are drained directly into the Malir River; around 300 million gallons of discharge are recorded per day. Similarly, the Hattar industrial area, well known for its chemical and ghee industries and comprising 700 acres, lacks a proper drainage system, leading to water and soil pollution (Saifullah, Khan, & Ismail, 2002).
Availability of safe, clean drinking water is very limited in the cities. According to a PCRWR survey, 23.5% of rural and 30% of urban areas have access to clean water. One-third of Pakistan's area comprises groundwater reservoirs, the single source of municipal water supplies. The major contributors to water pollution are by-products discharged from many industries, including textiles, metals, dyeing chemicals, fertilizers, pesticides, cement, petrochemicals, energy and power, leather, sugar processing, construction, steel, engineering, food processing, and mining. This discharge is carried to drains and rivers, extending the reach of the pollution. It was also estimated in that report that water-borne diseases lead to an annual loss of Rs. 25-58 billion to the national economy, that over 250,000 (2.5 lakh) children suffer from diarrhea per year, that 20-40% of hospital beds are occupied by patients suffering from water-borne diseases, and that such diseases cause about one-third of all deaths (Hasnie & Qureshi, 2002). According to a UN press release in 2002, almost 1.1 billion people worldwide lack access to pure drinking water, and millions die every year from waterborne diseases. It was also reported that the world's population is increasing exponentially while the availability of fresh water is declining, and that by 2025 two-thirds of the world's population is likely to live in countries with moderate or severe water shortages. Many countries in Africa, the Middle East, and South Asia will face serious threats of water shortage in the next two decades, and in developing countries the problem is further aggravated by the absence of proper management, the unavailability of professionals, and financial constraints (Azizullah, Khattak, Richter, & Häder, 2011).
Pakistan is in a critical situation of water shortage. It has few reservoirs for water storage and will face water scarcity in the near future: its rate of precipitation is low compared with its evaporation rate, continuously reducing water levels in rivers and lakes. The per capita availability of water in Pakistan dropped from 5,000 m3 in 1951 to 1,100 m3 annually. The rapid increase in population without expansion of water supplies may push per capita water resources below 1,000 m3 from 2010 onwards. The situation could deteriorate further in areas outside the Indus basin, where the average per capita water supply is already below 1,000 m3 annually. In the Sindh region, drought has affected many populations that have no fresh water for drinking and are forced to drink saline water. In Khairpur, out of 768 drinking water samples, 567 (73.83%) and 351 (45.70%) were found contaminated with total and fecal bacteria, respectively. In Balochistan province, underground water sources are falling at a rate of 3.5 m per annum and will be depleted within the next 15 years. The situation is the same in other major cities of the country, such as Peshawar, Lahore, and Karachi, where drinking water is found to be highly contaminated with bacteria and pathogens. Another study reported E. coli bacteria as a major source of water contamination, and a recent study of Rawal Dam, which supplies drinking water to almost 1.5 million people in Rawalpindi, found it to contain many bacteria (Iqbal, 2010).
Natural water sources contain trace elements and heavy metals, dissolved as water moves downward through the hydrological cycle. These heavy metals enter surface and underground water via several human activities, such as the large-scale use of chemicals in agriculture and the improper disposal of industrial and municipal wastes. Many of these metals are considered essential for human health, but an excess of them has severe effects. In Pakistan, toxic metals in both ground and surface waters often exceed the maximum concentrations recommended by the WHO for drinking water. As an industrial country with improper waste disposal, Pakistan suffers from the resulting water pollution. The results of various studies on the contamination of water with toxic metals in Pakistan are summarized in a table of heavy/toxic metal concentrations (mg/L) in ground and surface water samples; the data are extracted from various individual studies and arranged chronologically based on the year of publication of the reviewed articles (Waseem et al., 2014).
Zinc and copper are known as essential elements for humans, but excess intake can lead to adverse health consequences. According to the World Health Organization, drinking water should contain no more than 3 mg/L of Zn and 2 mg/L of Cu. Just one study reports an excess Cu concentration, of about 4 mg/L, found in Karachi, a city in Sindh (Poulsen, 1998).
Manganese is a naturally occurring mineral present in water, but human activities increase its concentration beyond the tolerable range. Intake of such water can damage the nervous system (Mirhashemi & Shahabaddin, 2011).
Iron, the most abundant element on earth, is essential for the normal functioning of a living body; both its deficiency and its excess can be harmful to animals and plants. In Pakistan, it is considered a major water pollutant. A survey conducted by PCRWR in all cities of Pakistan reported that 28% of groundwater and 40% of surface water samples contain an excess of Fe compared with the standard range set by the WHO.
Cadmium is an element of great concern from the toxicity point of view. Exposure causes chronic and acute health effects in living organisms when it is present in excess. The safe standard for Cd concentration in drinking water set by the WHO is 0.03 mg/L, but reported surveys show that its concentration in KPK and Sindh is much greater than in Punjab. Intake of such water causes gastrointestinal problems, chronic conditions, kidney failure, damage to reproductive organs, and cancer, owing to its high toxicity.
Chromium is a very common and highly toxic metal. According to a PCRWR survey in Pakistan, only 1% of groundwater exceeds the safe limit. Analysis of drinking water samples from a residential area of Kasur showed chromium concentrations reaching 9.80 mg/L; in general, chromium concentrations were 21-42 times higher than the recommended quality value of 0.05 mg/L. The frequently high concentrations of chromium in the drinking water of cities like Lahore, Sialkot, and Gujrat have been traced to the leather and tannery industries. Chromium toxicity plays an important role in the living body: its hexavalent state (Cr6+) causes severe diseases such as cancer and lung infections and damages the respiratory, digestive, reproductive, and excretory systems.
Lead is a normal constituent of the earth's crust, and its traces are also found in soil and water. The safe standard reported by the WHO for lead in drinking water is 0.01 mg/L. However, in country-wide surveys, its concentration was found to range from 0.001 to 2.0 mg/L in groundwater and 0 to 0.38 mg/L in surface water. According to the PCRWR survey, 15% of surface water and 1% of groundwater samples had Pb above the safe limits. Overexposure causes fatal diseases of the nervous, digestive, cardiovascular, reproductive, and immunological systems, as well as the skeleton and the kidneys. Mercury is known as a "persistent bioaccumulative toxin". It occurs naturally, and its intake is highly poisonous to the body. In Pakistan, data on Hg-contaminated water are limited, as very few studies exist on the issue. PCRWR reported Hg concentrations beyond the safe limits in 5% of surface water samples but in none of the groundwater samples; Chashma showed Hg concentrations beyond the desired limits (0.017 mg/L). Mercury is highly dangerous to marine life: it deposits in the gills and causes death (Martınez & Motto, 2000).
Arsenic is known as a big threat in Asian countries. In Pakistan, most reservoirs contain arsenic (As) in water beyond the safe limit set by the WHO. In the 1990s, the literature reported high concentrations of arsenic in the large water reservoirs of Pakistan, i.e., Tarbela (620 μg/L) and Chashma (750 μg/L). Similarly, in many areas of Sindh and Punjab, PCRWR reported drinking water with maximum As concentrations that are not safe for human health.
The overall situation of water contaminated by toxic metals shows variation in contamination level and frequency. All toxic metals except copper and zinc exceed critical limits in many cases. Most authors link the high concentrations of heavy metals in water to human activities such as the disposal of industrial, municipal, and domestic wastes. A major part of Pakistan comprises industrial areas, but these have no proper arrangements for discarding waste, so water from the industries reaches rivers, lakes, and the sea and contaminates them. Agricultural land is also irrigated with this disposed wastewater, making disease more common and causing deaths.
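To illustrate how the worst-case values reported above compare with the guideline limits quoted in this review, here is a minimal Python sketch. The limits are the values stated in the text (some may differ from current WHO guidelines), and the sample concentrations are only the isolated worst cases mentioned in the preceding paragraphs, not a representative dataset.

```python
# Guideline limits (mg/L) as quoted in this review's text.
LIMITS_MG_L = {"Zn": 3.0, "Cu": 2.0, "Cd": 0.03, "Cr": 0.05, "Pb": 0.01}

# Worst-case concentrations (mg/L) mentioned in the text above.
reported_mg_l = {
    "Cu": 4.0,   # Karachi, Sindh
    "Cr": 9.80,  # residential area of Kasur
    "Pb": 2.0,   # country-wide groundwater maximum
}

for metal, value in reported_mg_l.items():
    ratio = value / LIMITS_MG_L[metal]
    print(f"{metal}: {value} mg/L = {ratio:.0f}x the quoted limit "
          f"({LIMITS_MG_L[metal]} mg/L)")
```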
A PCRWR survey for 2002-2006 recorded contamination percentages across all provinces of Pakistan. Human activities are the cause of water pollution, the major part being played by industrial, domestic, and municipal waste discharged into water channels, rivers, lakes, and seas. The literature reports that around 2 million tons of waste and other effluents are disposed of into the world's waters every day. In developing countries, the situation is worse: over 90% of raw sewage and 70% of untreated industrial wastes are dumped onto the surface, from where they reach water sources (Aziz, 2005).
The Hattar industrial area, well known for its chemical and ghee industries and comprising 700 acres, lacks a proper drainage system: all its discharged effluents are dumped into natural drains and ultimately collect in a drain near Wah. Due to a shortage of water, this contaminated water is used on agricultural land to grow fruits and vegetables (Sial Sr, 2006). Air pollution is caused by the combustion of coal and the excessive use of fertilizers. Heavy metals in the atmosphere are usually present as part of fine particles called particulate matter (PM10 or PM2.5). Respiration is one of the pathways by which many metals enter the human body; heavy metals in the air are a matter of great concern, as the polluted air we breathe carries contaminants directly into the lungs. According to the WHO, the proposed guideline value for Cd in air is 5 ng/m3; beyond this, it is highly dangerous to health. The IARC Working Group recently classified outdoor air pollution, and particulate matter from outdoor air pollution, as carcinogenic to humans (Simoneit, 2002).
Most studies from Pakistan report airborne cadmium concentrations of less than 5 ng/m3 (on an average basis) in suspended particulate matter, as shown in Table 5. However, a report from Lahore shows an annual mean Cd concentration of 69 ng/m3 in PM2.5.
Lead has been acknowledged as one of the toxic constituents of airborne PM, with emission levels estimated at 450 million kg per annum from industrial coal and oil combustion and 30 million kg per annum from natural sources. Variations in the Pb concentration at some sites may be due to traffic burden, brick kilns, and the use of leaded gasoline. Nowadays, the concentration of Pb in the urban atmosphere of Islamabad has decreased due to the use of Pb-free gasoline, although the Pb content is still at a high level, ranging from 0.002 to 4.7 ng/m3 (Riaz et al., 2017; von Schneidemesser, Stone, Quraishi, Shafer, & Schauer, 2010).
Nickel is released into the atmosphere in large amounts by natural as well as anthropogenic activities, including fossil fuel consumption, industrial production (mining, smelting, and refining), the use and disposal of nickel compounds and alloys, and waste incineration. According to IARC, Ni compounds are human carcinogens by inhalation exposure; therefore, no safe airborne level for nickel compounds can be recommended. In the current analysis, the concentration of Ni in particulate matter was reported in the range of 0.001-0.15 ng/m3, with the highest content found in the urban atmosphere of Islamabad (von Schneidemesser et al., 2010).
Reduction of pollution
The pollution of air, soil, and water cannot be removed at once, but it can gradually be reduced to improve the quality of the environment. In the case of air pollution, for example, afforestation helps to reduce the concentration of CO2 through photosynthesis; the absorption of CO2 by crops and orchard trees is considered the main CO2 sink in the agricultural system. In bio-energy with carbon capture and storage (BECCS), atmospheric CO2 is also removed by purpose-grown plants and trees, which are then harvested as biomass. The biomass can be burned to generate heat and electricity, with the majority of the CO2 released during combustion being captured, liquefied, and sequestered in underground storage sites. Direct Air Carbon Capture and Sequestration (DACCS) is another mature method of capturing atmospheric CO2: ambient air is brought into contact with a strong liquid base, such as potassium hydroxide or sodium hydroxide (NaOH), which dissolves the CO2. The chemical reaction between the CO2 and the base forms a carbonate solution, from which the CO2 can then be removed in a separate process that combines the carbonate solution with a calcium hydroxide (Ca(OH)2) solution in a precipitator. This regenerates the base and forms solid calcium carbonate (CaCO3), which precipitates out of the base solution. The precipitate is then sent to a calciner, where it is reacted at extremely high temperatures (about 800°C) with oxygen (O2) from an air-separation unit, forming pure CO2 and calcium oxide (CaO). The CaO is combined with water in a slaker to re-form Ca(OH)2 for reuse (Gambhir & Tavoni, 2019; Johnston & Keough, 2005; Nishitani, Kaneko, Fujii, & Komatsu, 2011). Ecologically, DACCS has a low impact on the ecosystem compared with BECCS. Some DACCS plant designs involve no water removal in their operations, although water removal during the manufacture of sorbents may be significant. There are also potential adverse consequences if the chemicals used for sorbent manufacture, and the disposal of sorbents at the end of their useful lives, are not handled in an environmentally responsible manner. In particular, the sodium hydroxide used in some DACCS plants is highly corrosive, and the chlorine gas by-product that results from its production from brine is highly poisonous (Fasihi, Efimova, & Breyer, 2019; Gambhir & Tavoni, 2019; Socolow et al., 2011).
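The hydroxide loop described above can be summarized in four steps. The reactions below are a sketch of the standard scheme (written with NaOH as the capture base; the KOH variant is analogous) and are inferred from the description in the text rather than taken from the cited studies:

```latex
\begin{align*}
\text{capture:}      &\quad \mathrm{CO_2 + 2\,NaOH \longrightarrow Na_2CO_3 + H_2O}\\
\text{causticizing:} &\quad \mathrm{Na_2CO_3 + Ca(OH)_2 \longrightarrow 2\,NaOH + CaCO_3}\\
\text{calcination:}  &\quad \mathrm{CaCO_3 \xrightarrow{\;\sim 800\,^{\circ}\mathrm{C}\;} CaO + CO_2\ (\text{collected})}\\
\text{slaking:}      &\quad \mathrm{CaO + H_2O \longrightarrow Ca(OH)_2}
\end{align*}
```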
A variety of technologies, product design choices, and operational approaches can rapidly and cost-effectively reduce energy consumption and GHG emissions across a broad range of industries. All these technologies and practices can be enhanced by integrated systems design. Over 90% of GHG emissions are from about a dozen industries, so exceptionally large reductions in industrial GHG emissions are possible by focusing on a limited set of product and process improvements (Gambhir & Tavoni, 2019).
The use of municipal solid waste as an alternative material in cement production largely reduces net carbon dioxide and other greenhouse gas emissions. The use of decarbonated raw materials such as steel slag, concrete waste, and fly ash in place of limestone reduces the CO2 emissions associated with both the calcination process and fuel combustion, and the use of alternative fuels instead of conventional fuels reduces CO2 emissions significantly. The use of "engineered fuel" in the cement industry can reduce CO2 emissions by 3 tonnes per ton of alternative fuel used (Lei, Zhang, Nielsen, & He, 2011).
Co-processing
Co-processing of waste is a promising solution for waste management and sustainable cement production. For example, the utilization of hazardous liquid waste as an alternative fuel in the cement industry reduces the environmental impacts associated with freshwater ecotoxicity, acidification, and global warming, while it increases the environmental impacts associated with eutrophication and the human toxicity impacts for cancer. Owing to such issues, anti-incineration movements in some countries oppose this use of waste in the cement industry (Buzzi, Viegas, Rodrigues, Bernardes, & Tenório, 2013; Hasanbeigi, Lu, Williams, & Price, 2012).
Membrane technologies
Membrane technology is a growing field that reduces pollution through its separation properties. Membranes are highly effective for treating acid mine drainage and thereby reducing water pollution. In reverse osmosis, applied pressure drives water across a semipermeable membrane against the concentration gradient, leaving the pollutants behind in concentrated form. Different kinds of membranes are used for such purposes, e.g., ultrafiltration, microfiltration, nanofiltration, reverse osmosis, and particle filtration. Municipal authorities should also establish proper methods for dumping industrial waste so as to reduce water pollution. Metals and gases separated by membranes can be recycled and reused in various other treatments and products; marine life and water reservoirs are thus protected from pollution (Buzzi et al., 2013; Neoh, Noor, Mutamim, & Lim, 2016; Nleya, Simate, & Ndlovu, 2016).
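A standard way to quantify how well such a membrane retains a pollutant is the observed rejection coefficient R = 1 − C_permeate/C_feed. The sketch below applies it to hypothetical acid-mine-drainage metal concentrations; the values are illustrative, not measured data.

# Observed rejection coefficient R = 1 - Cp/Cf for a membrane process.
# Concentrations are hypothetical feed/permeate values in mg/L.

def rejection(c_feed: float, c_permeate: float) -> float:
    """Fraction of the solute retained by the membrane (0 = none, 1 = total)."""
    if c_feed <= 0:
        raise ValueError("feed concentration must be positive")
    return 1.0 - c_permeate / c_feed

samples = {          # metal: (feed mg/L, permeate mg/L) -- illustrative only
    "Fe": (250.0, 2.5),
    "Cu": (40.0, 0.8),
    "Zn": (60.0, 1.5),
}
for metal, (cf, cp) in samples.items():
    print(f"{metal}: R = {rejection(cf, cp):.3f}")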
Biosorption
The use of biological materials to remove contaminants from water is referred to as biosorption. It involves absorption, adsorption, surface complexation, ion exchange, and precipitation. Biosorbents offer high efficiency and capacity and are easy to use, and the possibility of regeneration makes them still more attractive. However, at high feed concentrations further pollutant removal is reduced (Silvas, Buzzi, Espinosa, & Tenório, 2011).
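Biosorbent capacity is commonly fitted with the Langmuir isotherm q = q_max·b·C/(1 + b·C), which also reproduces the saturation behaviour mentioned above (uptake levelling off as the feed concentration grows). The model choice and the parameter values below are purely illustrative, not taken from the cited study.

# Langmuir isotherm for biosorption: q = q_max * b * C / (1 + b * C),
# where q is uptake (mg pollutant per g biosorbent) and C is the
# equilibrium concentration (mg/L). Parameter values are illustrative.

Q_MAX = 95.0   # mg/g, hypothetical maximum capacity
B = 0.04       # L/mg, hypothetical affinity constant

def langmuir_uptake(c_eq: float, q_max: float = Q_MAX, b: float = B) -> float:
    """Equilibrium uptake predicted by the Langmuir model."""
    return q_max * b * c_eq / (1.0 + b * c_eq)

for c in (10, 50, 200, 800):  # mg/L
    q = langmuir_uptake(c)
    print(f"C = {c:4d} mg/L  ->  q = {q:5.1f} mg/g ({100 * q / Q_MAX:.0f}% of capacity)")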
Most physical and chemical heavy-metal removal techniques require handling a large amount of sludge. Thus, to protect the ecosystem, the above-mentioned techniques should be applied with proper care (Dermont, Bergeron, Mercier, & Richer-Laflèche, 2008).
CONCLUSION
The environment is everything that creates the natural conditions for the existence of living organisms. Heavy metals are toxins; released directly into the environment, they pose a serious threat to human life and other living organisms because they accumulate easily in the food chain. Industrial activity has played a major role in this problem, so proper strategies should be devised to counterbalance population growth and industrialization. Old technologies should be replaced by modern ones, such as membrane separation and biosorption, for industrial waste processing. Moreover, industrial waste should be dumped in proper ways to reduce environmental pollution. Because of the low efficiency of old technologies, pollutant emissions affect our ecosystem very badly, and better alternatives are needed to avoid harm to human health and the entire ecosystem.
"year": 2020,
"sha1": "12f9761a51800e1b9cccbceac5373413390451eb",
"oa_license": "CCBYSA",
"oa_url": "https://ojs.literacyinstitute.org/index.php/ijsei/article/download/60/59",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "7259fe9e4e3b28f0f9edcaf7b18e0443605d9d2e",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
Surfaces of coordinate finite type in the Lorentz-Minkowski 3-space
In this article, we study the class of surfaces of revolution in the 3-dimensional Lorentz-Minkowski space with nonvanishing Gauss curvature whose position vector x satisfies the condition ∆^III x = Ax, where A is a square matrix of order 3 and ∆^III denotes the Laplace operator with respect to the third fundamental form III of the surface. We show that such surfaces are either minimal or pseudospheres of real or imaginary radius.
Introduction
Let M^2 be a connected non-degenerate submanifold of the 3-dimensional Lorentz-Minkowski space E^3_1 and let x : M^2 → E^3_1 be a parametric representation of a surface in E^3_1 equipped with the induced metric. Let (x, y, z) be a rectangular coordinate system of E^3_1. By the Lorentz-Minkowski space E^3_1 we mean the Euclidean space E^3 endowed with the standard metric given by

ds^2 = −dx^2 + dy^2 + dz^2.
As is well known, a submanifold is called a k-type submanifold if its position vector x can be written as a sum of eigenvectors of the Laplace-Beltrami operator ∆ corresponding to k distinct eigenvalues, i.e., x = y_0 + y_1 + ... + y_k for a constant vector y_0 and smooth non-constant maps y_i (i = 1, ..., k) such that ∆y_i = λ_i y_i, λ_i ∈ R. The year 1966 was the beginning, when Takahashi in [34] stated that spheres and minimal surfaces are the only surfaces in E^3 whose position vector x satisfies the relation

∆^I x = λx, λ ∈ R, (1.1)

where ∆^I is the Laplace operator associated with the first fundamental form I of the surface. Let (x_1, x_2, x_3) be the component functions of x. Since ∆^I x = (∆^I x_1, ∆^I x_2, ∆^I x_3), (1.2)

Takahashi's condition (1.1) becomes

∆^I x_i = λ x_i, i = 1, 2, 3. (1.3)

Later, in [25] O. Garay generalized Takahashi's condition (1.3). Actually, he studied surfaces of revolution in E^3 whose component functions satisfy the condition ∆^I x_i = λ_i x_i, i = 1, 2, 3, that is, whose component functions are eigenfunctions of their Laplacian, but not necessarily with the same eigenvalue. Another generalization is to study surfaces whose position vector x satisfies a relation of the form

∆^I x = Ax, (1.4)

where A ∈ R^{3×3}. Many results concerning this can be found in ([16], [19], [20], [22], [23], [25]). This type of study can also be extended to any smooth map, not necessarily the position vector of the surface, for example the Gauss map of a surface; regarding this, see ([8], [13], [15], [25], [17], [18], [19], [20], [24], [26]). Similarly, another extension can be drawn by applying the conditions stated before to the second or third fundamental form of a surface [32]. Here again, many results can be found in ([1], [2], [14], [30], [31], [33]).
On the other hand, all the ideas mentioned above can be applied in the Lorentz-Minkowski space E^3_1. So an interesting geometric question has been posed: classify all the surfaces in E^3_1 which satisfy the condition

∆^J x = Ax, J = I, II, III, (1.5)

where A ∈ R^{3×3} and ∆^J is the Laplace operator with respect to the fundamental form J.
Kaimakamis and Papantoniou in [28] solved the above question for the class of surfaces of revolution with respect to the second fundamental form, while in [21] Bekkar and Zoubir studied the same class of surfaces with respect to the first fundamental form, satisfying the condition ∆^I x_i = λ_i x_i, i = 1, 2, 3.
Basic concepts
Let C : r(s), s ∈ (a, b) ⊂ R, be a curve in a plane E^2 of E^3_1 and let l be a straight line of E^2 which does not intersect the curve C. A surface of revolution M^2 in E^3_1 is defined to be the non-degenerate surface obtained by revolving the curve C around the axis l. If the axis l is timelike, then we take the z-axis as the axis of revolution. If the axis l is spacelike, then we may assume that the x-axis or the y-axis is the axis of revolution; without loss of generality, we consider the x-axis. If the axis is null, then we may assume that it is the line spanned by the vector (0, 1, 1) of the yz-plane.
We first consider the case where the axis of revolution is the x-axis (spacelike) and the curve C lies in the xy-plane. A parametrization of C with respect to its arclength is r(s) = (f(s), g(s), 0), where f, g are smooth functions; without loss of generality, we may assume that f(s) > 0 for s ∈ (a, b). In a system of local curvilinear coordinates (s, θ), the surface of revolution M^2 in E^3_1 is then given by the parametrization (2.1). In the case where the axis of revolution is the z-axis (timelike) and the curve C, given by r(s) = (f(s), 0, g(s)), lies in the xz-plane, the surface of revolution M^2 is given by

x(s, θ) = (f(s) cos θ, f(s) sin θ, g(s)). (2.2)
Finally, if the axis of revolution is the line spanned by the vector (0, 1, 1) and the curve C lies in the yz-plane, then the surface of revolution M^2 can be parametrized as in (2.3), where h(s) = f(s) − g(s) ≠ 0. We denote by g = (g_km), b = (b_km) and e = (e_km), k, m = 1, 2, the first, second, and third fundamental forms of M^2, respectively, with coefficients g_km, b_km, e_km defined in the usual way, where <,> denotes the Lorentzian metric. For a sufficiently differentiable function p(u^1, u^2) on M^2, the second Laplace operator, with respect to the fundamental form III of M^2, is defined by [4]

∆^III p = −(1/√|e|) (√|e| e^{km} p_{/k})_{/m},
where p_{/k} := ∂p/∂u^k, e^{km} denotes the components of the inverse tensor of e_{km}, and e = det(e_{km}). After a long computation, one arrives at an explicit expression of ∆^III in terms of the coefficients of the fundamental forms. Here we have LN − M^2 ≠ 0, since the surface has no parabolic points.
Proof of the main results
In this section we classify the surfaces of revolution M^2 satisfying the relation (1.5). We distinguish three types, according to the parametrizations (2.1), (2.2), and (2.3) by which these surfaces are determined.
Type I. The parametric representation of M^2 is given by (2.1). Then, writing ′ := d/ds, we obtain the coefficients of the fundamental forms, from which (3.2) and (3.3) follow. Denoting by κ the curvature of the curve C and by r_1, r_2 the principal radii of curvature of M^2, we obtain the Gauss and the mean curvature of M^2, respectively, as in (3.4). Since the relation (3.1) holds, there exists a smooth function ϕ = ϕ(s) such that f′ = cos ϕ and g′ = sin ϕ. Then κ = ϕ′, and relations (3.3), (3.4) become (3.5) and (3.6). Taking the derivative of the last equation, we get (3.7), and from (2.4), (3.2) and (3.5) we obtain (3.8).

Let (x_1, x_2, x_3) be the coordinate functions of the position vector x. Then, according to relations (1.2) and (3.8), and taking into account (3.6) and (3.7), we find

∆^III x_2 = ∆^III g(s) = (r cos ϕ + r′ sin ϕ)/ϕ′. (3.10)

We denote by a_ij, i, j = 1, 2, 3, the entries of the matrix A, all of which are real numbers. By using (3.9), (3.10) and (3.11), condition (1.5) is found to be equivalent to the system (3.12)-(3.14). From (3.13) it can easily be verified that a_21 = a_23 = 0. On the other hand, differentiating (3.12) and (3.14) twice with respect to θ, we get a_12 = a_32 = 0. So the system reduces to (3.15)-(3.17). But sinh θ and cosh θ are linearly independent functions of θ, so we deduce that a_13 = a_31 = 0 and a_11 = a_33. Putting a_11 = a_33 = λ and a_22 = µ, the system of equations (3.15), (3.16) and (3.17) now reduces to the two equations (3.18) and (3.19). Hence the matrix A for which relation (1.5) is satisfied becomes the diagonal matrix (3.20) with entries λ, µ, λ. Solving the system (3.18) and (3.19) with respect to r and r′, we conclude (3.21), and taking the derivative of (3.21), we find (3.22). We now distinguish the following cases.

Case I. µ = λ = 0. According to (3.21) we have r = 0, and consequently H = 0. Therefore M^2 is minimal and the corresponding matrix A is the zero matrix.
Case II. µ = λ ≠ 0. Then from (3.22) we have r′ = 0. If ϕ′ = 0, then M^2 would consist only of parabolic points, which has been excluded. Therefore we find (3.23). On differentiating (3.23) and taking into account (3.20) with µ = 0, we obtain (3.25). Taking the derivative of (3.25) and taking (3.20) into account again, we find (3.26), and taking the derivative of (3.26), we find (3.27). On account of (3.24), (3.6) and (3.7), it is easily verified that here we also arrive at a contradiction.

Case V. λ ≠ 0, µ ≠ 0. We write equations (3.18) and (3.19) as (3.29) and (3.30). From (3.30) we have relation (3.28). By eliminating ϕ″ from (3.29) we get (3.31); on differentiating the last equation and using (3.28) we find (3.32). Multiplying (3.31) by 2ϕ′ sin ϕ and (3.32) by −cos ϕ, we obtain (3.33) and (3.34). Combining (3.33) and (3.34), we conclude (3.35); taking the derivative of this equation and using (3.22) and (3.28), we find (3.36). Multiplying (3.35) by −cos ϕ and adding the resulting equation to (3.36), we get (3.37). On account of (3.31) we find that this relation is valid only for a finite number of values of ϕ. So in this case there are no surfaces of revolution with the required property. So we have proved the following theorem.
Theorem. Let M^2 be a surface of revolution in E^3_1 given by (2.1). Then x satisfies (1.5) with respect to the third fundamental form if and only if one of the following statements holds true:

• M^2 is the pseudosphere S^2_1(c) of real radius c, or
• M^2 has zero mean curvature.
Type II. The parametric representation of M^2 is given by (2.2). Then the tangent vector of the revolving curve is r′(s) = (f′(s), 0, g′(s)), and we assume that f′² − g′² = 1 for all s ∈ (a, b). Computing the fundamental forms as before, and writing again ϕ = ϕ(s) for a smooth function with κ = ϕ′, we take the derivative of the last equation and, according to relation (2.2) and (3.44), we find, as in the former paragraph,

((−r sinh ϕ − r′ cosh ϕ)/ϕ′) cos θ = a_11 f(s) cos θ + a_12 f(s) sin θ + a_13 g(s),

together with two analogous equations. Applying the same algebraic methods used in the previous type, this system of equations reduces to (3.45) and (3.46), where a_11 = a_22 = λ, a_33 = µ, λ, µ ∈ R. Solving the system (3.45) and (3.46) with respect to r and r′, we conclude (3.47) and (3.48). Similarly, we have the following five cases according to the values of λ and µ.

Case I. λ = µ = 0. From (3.48) we conclude that r = 0, and consequently H = 0. Therefore M^2 is minimal and the corresponding matrix A is the zero matrix.
From (3.56) we have relation (3.54). By eliminating ϕ″ from (3.55) we get (3.57); on differentiating the last equation and using (3.54) we find (3.58). Multiplying (3.57) by 2ϕ′ sinh ϕ and (3.58) by −cosh ϕ, we obtain (3.59) and (3.60); combining (3.59) and (3.60), we conclude (3.61). Taking the derivative of this equation and using (3.54), we find (3.62). Multiplying (3.61) by −cosh ϕ and adding the resulting equation to (3.62), we get (3.63). On account of (3.57) we find (3.64). Eliminating µgϕ′ cosh ϕ from (3.63) by using (3.64), we get (3.65). Obviously λ ≠ −1, because otherwise from (3.61) we would obtain a contradiction. Now, inserting (3.66) into (3.65), we obtain a relation that is valid only for a finite number of values of ϕ. So in this case there are no surfaces of revolution with the required property, and we have proved the following theorem.

Theorem. Let M^2 be a surface of revolution in E^3_1 given by (2.2). Then x satisfies (1.5) with respect to the third fundamental form if and only if one of the following statements holds true:

• M^2 is the pseudosphere S^2_1(c) of real or imaginary radius c, or
• M^2 has zero mean curvature.
Type III. The parametric representation of M^2 is given by (2.3), where h(s) = f(s) − g(s) ≠ 0. Since M^2 is non-degenerate, f′(s)² − g′(s)² never vanishes, and so h′(s) = f′(s) − g′(s) ≠ 0 everywhere. We may therefore choose the parameter in such a way that h(s) = −2s. Setting k(s) = g(s) − s, we have

f(s) = k(s) − s, g(s) = k(s) + s

(see, for example, [29]). Therefore M^2 can be reparametrized accordingly. Now let M^2 be a spacelike surface, i.e., k′(s) > 0. Then the timelike unit normal vector field N of M^2 and the components of the second fundamental form can be computed, and the relation (2.4) takes the corresponding form. According to relations (2.3) and (3.70), we find (3.71)-(3.73). Regarding the above equations as polynomials in θ, from the coefficients of (3.73) we get (a_31 + a_32)s = 0, among the further relations (3.74)-(3.82). We put a_11 = λ and a_22 = µ, so the matrix A for which relation (1.5) is satisfied finally takes the form (3.87), where, as mentioned before, a_33 = (λ + µ)/2 and a_12 = (µ − λ)/2. Hence the system of equations (3.74)-(3.82) reduces to the two equations (3.88) and (3.89). Solving these with respect to λ and µ, we find (3.90) and (3.91).

Case I. λ = µ = 0. From (3.90) and (3.91) we conclude that k = as³ + b with a > 0, b a constant, and s ≠ 0. Consequently H = 0, so M^2 is minimal and the corresponding matrix A is the zero matrix.
Case III. λ ≠ 0, µ = 0. By considering the last assumption in (3.91), i.e. µ = 0, we obtain a relation involving 2k′², sk″ and k′. Substituting this into (3.90), we get that λ is a non-zero function of s. Since no function k can satisfy both conditions, there is no surface of revolution fulfilling them.

Case IV. λ = 0, µ ≠ 0. Similarly, we get a contradiction as in Case III.

Case V. λ ≠ µ with λ ≠ 0, µ ≠ 0. In this case, the two relations (3.90) and (3.91) are valid only when λ and µ are functions of s. Thus there are no surfaces of revolution with the required property. So we have proved the following theorem.

Theorem. Let M^2 be a surface of revolution in E^3_1 given by (2.3). Then x satisfies (1.5) with respect to the third fundamental form if and only if M^2 is minimal.
"year": 2022,
"sha1": "1b242a751208f5fdde2808c7fca4945fc8b4df1b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "1b242a751208f5fdde2808c7fca4945fc8b4df1b",
"s2fieldsofstudy": [
"Mathematics",
"Physics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Left-Garside categories, self-distributivity, and braids
In connection with the emerging theory of Garside categories, we develop the notions of a left-Garside category and of a locally left-Garside monoid. In this framework, the connection between the self-distributivity law LD and braids amounts to the result that a certain category associated with LD is a left-Garside category, which projects onto the standard Garside category of braids. This approach leads to a realistic program for establishing the Embedding Conjecture of [Dehornoy, Braids and Self-distributivity, Birkhauser (2000), Chap. IX].
In this paper we describe and investigate a new example of (left)-Garside category, namely a certain category LD + associated with the left self-distributivity law (LD) x(yz) = (xy)(xz).
The interest in this law originated in the discovery of several nontrivial structures that obey it, in set theory [16,41] and in low-dimensional topology [37,30,45]. A rather extensive theory was developed in the decade 1985-95 [18]. Investigating self-distributivity in the light of Garside categories seems to be a good idea. It turns out that a large part of the theory developed so far can be summarized into one single statement, namely: the category LD+ is a left-Garside category (this is the first part of Theorem 6.1 below).
The interest of the approach should be at least triple. First, it gives an opportunity to restate a number of previously unrelated properties in a new language that should make them more easily understandable. In particular, the connection between self-distributivity and braids is now expressed in the simple statement: there exists a right-lcm preserving surjective functor of LD+ to the Garside category of positive braids (second part of Theorem 6.1). This result allows one to recover most of the usual algebraic properties of braids as a direct application of the properties of LD+: roughly speaking, Garside's theory of braids is the emerged part of an iceberg, namely the algebraic theory of self-distributivity.
Second, a direct outcome of the current approach is a realistic program for establishing the Embedding Conjecture. The latter is the most puzzling open question involving free self-distributive systems. Among others, it says that the equivalence class of any bracketed expression under self-distributivity is a semilattice, i.e., any two expressions admit a least upper bound with respect to a certain partial ordering. Many equivalent forms of the conjecture are known [18, Chapter IX]. At the moment, no complete proof is available, but we establish the following new result: if the left-Garside category LD+ is regular, then the Embedding Conjecture is true (Theorem 6.2). This result reduces a possible proof of the conjecture to a (long) sequence of verifications.
Third, the category LD+ seems to be a seminal example of a left-Garside category, quite different from all previously known examples of Garside categories. In particular, being strongly asymmetric, LD+ is not a Garside category. The interest of investigating such objects per se is not obvious, but the existence of a nontrivial example such as LD+ seems to be a good reason, and a help for orientating further research. In particular, our approach emphasizes the role of locally left-Garside monoids: this is a monoid M that fails to be Garside because no global element ∆ exists, but which nevertheless possesses a family of elements ∆x that locally play the role of the Garside element and are indexed by a set on which the monoid M partially acts. Most of the properties of left-Garside monoids extend to locally left-Garside monoids, in particular the existence of least common multiples and, in good cases, of the greedy normal form.
Acknowledgement. Our definition of a left-Garside category is borrowed from [27] (up to a slight change in the formal setting, see Remark 2.6). Several proofs in Sections 2 and 3 use arguments that are already present, in one form or another, in [1,49,28,29,12,19,35] and now belong to folklore. Most appear in the unpublished paper [27] by Digne and Michel, and are implicit in Krammer's paper [39]. Our reasons for including such arguments here is that adapting them to the current weak context requires some useful polishing, and is necessary to explain our two main new notions, namely locally Garside monoids and regular left-Garside categories.
The paper is organized in two parts. The first one (Sections 1 to 3) contains those general results about left-Garside categories and locally left-Garside monoids that will be needed in the sequel, in particular the construction and properties of the greedy normal form. The second part (Sections 4 to 8) deals with the specific case of the category LD + and its connection with braids. Sections 4 and 5 review basic facts about the self-distributivity law and explain the construction of the category LD + . Section 6 is devoted to proving that LD + is a left-Garside category and to showing how the results of Section 3 might lead to a proof of the Embedding Conjecture. In Section 7, we show how to recover the classical algebraic properties of braids from those of LD + . Finally, we explain in Section 8 some alternative solutions for projecting LD + to braids. In an appendix, we briefly describe what happens when the associativity law replaces the self-distributivity law: here also a left-Garside category appears, but a trivial one.
We use N for the set of all positive integers.
Left-Garside categories
We define left-Garside categories and describe a uniform way of constructing such categories from so-called locally left-Garside monoids, which are monoids with a convenient partial action.
1.1. Left-Garside monoids. Let us start from the now classical notion of a Garside monoid. Essentially, a Garside monoid is a monoid in which divisibility has good properties, and, in addition, there exists a distinguished element ∆ whose divisors encode the whole structure. Slightly different versions have been considered [24,19,26], the one stated below now being the most frequently used. In this paper, we are interested in one-sided versions involving left-divisibility only, hence we shall first introduce the notion of a left-Garside monoid.
Throughout the paper, if a, b are elements of a monoid - or, from Section 1.2, morphisms of a category - we say that a left-divides b, denoted a ≼ b, if there exists c satisfying ac = b. The set of all left-divisors of a is denoted by Div(a). If ac = b holds with c ≠ 1, we say that a is a proper left-divisor of b, denoted a ≺ b.
We shall always consider monoids M where ≼ is a partial ordering. If two elements a, b of M admit a greatest lower bound c with respect to ≼, the latter is called a greatest common left-divisor, or left-gcd, of a and b, denoted c = gcd(a, b). Similarly, a ≼-least upper bound d is called a least common right-multiple, or right-lcm, of a and b, denoted d = lcm(a, b). We say that M admits local right-lcm's if any two elements of M that admit a common right-multiple admit a right-lcm.
Finally, if M is a monoid and S, S′ are subsets of M, we say that S left-generates S′ if every nontrivial element of S′ admits at least one nontrivial left-divisor belonging to S.

Definition 1.1. A monoid M is said to be left-preGarside if it satisfies
(LG0) for each a in M, every ≺-increasing sequence in Div(a) is finite,
(LG1) M is left-cancellative,
(LG2) M admits local right-lcm's.
An element ∆ of M is called a left-Garside element if, in addition,
(LG3) Div(∆) left-generates M, and a ≼ ∆ implies ∆ ≼ a∆.
The monoid M is called left-Garside if it is left-preGarside and possesses at least one left-Garside element.

Using "generates" instead of "left-generates" in (LG3) would make no difference, by the following trivial remark - but the assumption (LG0) is crucial, of course.

Lemma 1.2. Assume that M is a monoid satisfying (LG0) and S left-generates M. Then S generates M.

Proof. Let a be a nontrivial element of M. By definition there exist a1 ≠ 1 in S and a′ satisfying a = a1 a′. If a′ is trivial, we are done. Otherwise, there exist a2 ≠ 1 in S and a″ satisfying a′ = a2 a″, and so on. The sequence 1, a1, a1a2, ... is ≺-increasing and it lies in Div(a), hence it must be finite, yielding a = a1...ad with a1, ..., ad in S.
Right-divisibility is defined symmetrically: a right-divides b if b = ca holds for some c. Then the notion of a right-(pre)Garside monoid is defined by replacing leftdivisibility by right-divisibility and left-product by right-product in Definition 1.1.
Definition 1.3. A monoid M is called Garside if it is both left-Garside and right-Garside with respect to the same element ∆.

The equivalence of the above definition with that of [26] is easily checked. The seminal example of a Garside monoid is the braid monoid B+n equipped with Garside's fundamental braid ∆n, see for instance [32, 29]. Other classical examples are free abelian monoids and, more generally, all spherical Artin-Tits monoids [10], as well as the so-called dual Artin-Tits monoids [9, 4]. Every Garside monoid embeds in a group of fractions, which is then called a Garside group.
Let us mention that, if a monoid M is left-Garside, then mild conditions imply that it is Garside: essentially, it is sufficient that M is right-cancellative and that the left-and right-divisors of the left-Garside element ∆ coincide [19].
Left-Garside categories.
Recently, it appeared that a number of results involving Garside monoids still make sense in a wider context where categories replace monoids [4,27,39]. A category is similar to a monoid, but the product of two elements is defined only when the target of the first is the source of the second. In the case of Garside monoids, the main benefit of considering categories is that it allows for relaxing the existence of the global Garside element ∆ into a weaker, local version depending on the objects of the category, namely a map from the objects to the morphisms.
We refer to [44] for some basic vocabulary about categories-we use very little of it here.
Convention. Throughout the paper, composition of morphisms is denoted by a multiplication on the right: fg means "f then g". If f is a morphism, the source of f is denoted ∂0f, and its target is denoted ∂1f. In all examples, we shall make the source and target explicit: morphisms are triples (x, f, y) satisfying ∂0f = x and ∂1f = y. A morphism f is said to be nontrivial if f ≠ 1_{∂0f} holds.
We extend to categories the terminology of divisibility. So, we say that a morphism f is a left-divisor of a morphism g, denoted f ≼ g, if there exists h satisfying fh = g. If, in addition, h can be assumed to be nontrivial, we say that f ≺ g holds. Note that f ≼ g implies ∂0f = ∂0g. We denote by Div(f) the collection of all left-divisors of f.
The following definition is equivalent to Definition 2.10 of [27] by F. Digne and J. Michel-see Remark 2.6 below.
Definition 1.4. A category C is said to be left-preGarside if it satisfies
(LG0) for each f in Hom(C), every ≺-increasing sequence in Div(f) is finite,
(LG1) Hom(C) is left-cancellative,
(LG2) any two morphisms of C that admit a common right-multiple admit a right-lcm.
A map ∆ : Obj(C) → Hom(C) is called a left-Garside map if ∂0∆(x) = x holds for each object x and
(LG3) ∆(x) left-generates Hom(x, −), and f ≼ ∆(x) implies ∆(x) ≼ f ∆(∂1f).
A category C is called left-Garside if it is left-preGarside and possesses at least one left-Garside map.

Example 1.5. Assume that M is a left-Garside monoid with left-Garside element ∆. Then two left-Garside categories naturally arise: a one-object category C̃(M) whose morphisms are the elements of M, and a category C(M) whose objects are the elements of M and whose morphisms are the triples (a, b, ab) with a, b in M, the source being a and the target ab.

It is natural to call C(M) the Cayley category of M since its graph is the Cayley graph of M (defined provided M is also right-cancellative).
The notion of a right-Garside category is defined symmetrically, exchanging left and right everywhere and exchanging the roles of source and target. In particular, the map ∆ and Axiom (LG3) are to be replaced by a map ∇ satisfying ∂1∇(x) = x and, writing b ≽ a for "a right-divides b",
(LG3~) ∇(x) right-generates Hom(−, x), and ∇(x) ≽ f implies ∇(∂0f) f ≽ ∇(x).
Then comes the natural two-sided version of a Garside category [4, 27].
Definition 1.6. A category C is called Garside if it is left-Garside with respect to some map ∆ and right-Garside with respect to the map ∇ determined by ∇(∂1∆(x)) = ∆(x).

It is easily seen that, if M is a Garside monoid, then the categories C(M) and C̃(M) of Example 1.5 are Garside categories. Insisting that the maps ∆ and ∇ involved in the left- and right-Garside structures are connected as in Definition 1.6 is crucial: see the Appendix for a trivial example where the connection fails.
1.3. Locally left-Garside monoids. We now describe a general method for constructing a left-Garside category starting from a monoid equipped with a partial action on a set. The trivial examples of Example 1.5 enter this family, and so do the two categories LD+ and B+ investigated in the second part of this paper.
We start with a convenient notion of partial action of a monoid on a set. Several definitions could be thought of; here we choose the one that is directly adapted to the subsequent developments.
Definition 1.7. A partial action of a monoid M on a set X is a partial map (x, a) ↦ x • a of X × M to X such that
(i) x • 1 = x holds for every x,
(ii) (x • a) • b = x • (ab) holds for all x, a, b, this meaning that either both terms are defined and they are equal, or neither is defined,
(iii) for each finite subset S in M, there exists x in X such that x • a is defined for each a in S.

In the above context, for each x in X, we put

M_x = {a ∈ M | x • a is defined}. (1.1)

Then Conditions (i), (ii), (iii) of Definition 1.7 imply that each set M_x contains 1 and that ab belongs to M_x if and only if a belongs to M_x and b belongs to M_{x•a}. A monoid action in the standard sense, i.e., an everywhere defined action, is a partial action. For a more typical case, consider the n-strand Artin braid monoid B+n. We recall that B+n is defined for n ≤ ∞ by the monoid presentation

B+n = < σ1, ..., σ_{n−1} ; σiσj = σjσi for |i − j| ≥ 2, σiσjσi = σjσiσj for |i − j| = 1 >+. (1.2)

Then we obtain a partial action of B+∞ on N by putting

n • a = n if a belongs to B+n, undefined otherwise. (1.3)
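The partial action (1.3) is easy to make concrete: encode a positive braid word as a list of generator indices, so that a belongs to B+n exactly when every index is < n. The following Python sketch (our own encoding, with hypothetical helper names) implements n • a on that representation.

# A concrete rendering of the partial action (1.3) of B+_infinity on N.
# A positive braid word is a list of generator indices: [1, 3, 2] stands
# for sigma_1 sigma_3 sigma_2. The word lies in B+_n iff every index is < n.

from typing import List, Optional

def acts(n: int, word: List[int]) -> Optional[int]:
    """Return n . word, i.e. n if the word lies in B+_n, else None (undefined)."""
    if all(1 <= i < n for i in word):
        return n
    return None  # the partial action is undefined here

a = [1, 3, 2]          # a positive word in B+_4 (hence in every B+_n, n >= 4)
print(acts(5, a))      # 5    -- defined, since a lies in B+_5
print(acts(3, a))      # None -- undefined: sigma_3 is not a generator of B+_3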
A natural category can then be associated with every partial action of a monoid.

Definition 1.8. For α a partial action of a monoid M on a set X, the category associated with α, denoted C(α), or C(M, X) if the action is clear, is defined by

Obj(C(α)) = X, Hom(C(α)) = {(x, a, x • a) | x ∈ X, a ∈ M_x}.

Example 1.9. We shall denote by B+ the category associated with the action (1.3) of B+∞ on N, i.e., we put Obj(B+) = N and Hom(B+) = {(n, a, n) | a ∈ B+n}. Define ∆ : Obj(B+) → Hom(B+) by ∆(n) = (n, ∆n, n). Then the well-known fact that B+n is a Garside monoid for each n [32, 38] easily implies that B+ is a Garside category, as will be formally proved in Proposition 1.11 below.
The example of B+ shows the benefit of going from a monoid to a category. The monoid B+∞ is not a (left)-Garside monoid, because it is of infinite type and there cannot exist a global Garside element ∆. However, the partial action of (1.3) enables us to restrict to subsets B+n (submonoids in the current case) for which Garside elements exist: with the notation of (1.1), B+n is (B+∞)_n. Thus the categorical context enables us to capture the fact that B+∞ is, in some sense, locally Garside. It is easy to formalize these ideas in a general setting.

Definition 1.10. Let M be a monoid with a partial action α on a set X. A sequence (∆x)x∈X of elements of M is called a left-Garside sequence for α if, for each x in X, the element x • ∆x is defined and

(LG3loc) ∆x left-generates M_x, and a ≼ ∆x implies ∆x ≼ a ∆_{x•a}.

The monoid M is said to be locally left-Garside with respect to α if it is left-preGarside and there is at least one left-Garside sequence for α.
A typical example of a locally left-Garside monoid is B + ∞ with its action (1.3) on N. Indeed, the sequence (∆ n ) n∈N is clearly a left-Garside sequence for (1.3).
The next result should appear quite natural.

Proposition 1.11. Assume that M is a locally left-Garside monoid with respect to a partial action of M on X, with left-Garside sequence (∆x)x∈X. Then C(M, X) is a left-Garside category, with left-Garside map given by ∆(x) = (x, ∆x, x • ∆x).
Proof. By definition, (x, a, y) ≼ (x′, a′, y′) in C(M, X) implies x′ = x and a ≼ a′ in M. So the hypothesis that M satisfies (LG0) implies that C(M, X) does.

Assume (x, a, y)(y, b′, x′) = (x, b, z)(z, a′, x′) in Hom(C(M, X)). Then ab′ = ba′ holds in M. By (LG2), a and b admit a right-lcm c, and we have a ≼ c, b ≼ c, and c ≼ ab′. By hypothesis, x • ab′ is defined, hence so is x • c, and it is obvious to check that (x, c, x • c) is a right-lcm of (x, a, y) and (x, b, z) in Hom(C(M, X)). Hence C(M, X) satisfies (LG2).

Assume that (x, a, y) is a nontrivial morphism of Hom(C(M, X)). This means that a is nontrivial, so, by (LG3loc), some left-divisor a′ of ∆x is a left-divisor of a. Then (x, a′, x • a′) ≼ ∆(x) holds, and ∆(x) left-generates Hom(x, −).

Finally, assume (x, a, y) ≼ ∆(x) in Hom(C(M, X)). This implies a ≼ ∆x in M. Then (LG3loc) in M implies ∆x ≼ a∆y. By hypothesis, y • ∆y is defined, and we have (x, a, y)∆(y) = (x, a∆y, y • ∆y), whence ∆(x) ≼ (x, a, y)∆(y). It is not hard to see that, conversely, if M is a left-preGarside monoid, then C(M, X) being a left-Garside category implies that M is a locally left-Garside monoid. We shall not use the result here.
If M has a total action on X, i.e., if x • a is defined for all x and a, the sets M_x coincide with M, and Condition (LG3loc) reduces to (LG3). In this case, each element ∆x is a left-Garside element in M, and M is a left-Garside monoid. A similar result holds for each set M_x that turns out to be a submonoid (if any).

Proposition 1.12. Assume that M is a locally left-Garside monoid with left-Garside sequence (∆x)x∈X, and x is an element of X such that the sequence ∆ is constant on M_x. Then M_x is a left-Garside submonoid of M, with left-Garside element ∆x.

Proof. By definition of a partial action, x • 1 is defined, so M_x contains 1, and it is a submonoid of M. We show that M_x satisfies (LG0), (LG1), (LG2), and (LG3). First, a counter-example to (LG0) in M_x would be a counter-example to (LG0) in M, hence M_x satisfies (LG0). Similarly, an equality ab = ab′ with b ≠ b′ in M_x would also contradict (LG1) in M, so M_x satisfies (LG1). Now, assume that a and b admit in M_x, hence in M, a common right-multiple c. Then a and b admit a right-lcm c′ in M. By hypothesis, x • c is defined, and c′ ≼ c holds. By definition of a partial action, x • c′ is defined as well, i.e., c′ lies in M_x, and it is a right-lcm of a and b in M_x. So M_x satisfies (LG2), and it is left-preGarside.

Next, ∆x is a left-Garside element in M_x. Indeed, let a be any nontrivial element of M_x. By (LG3loc), there exists a nontrivial divisor a′ of a satisfying a′ ≼ ∆x. By definition of a partial action, x • a′ is defined, so a′ belongs to M_x, and ∆x left-generates M_x. Finally, assume a ≼ ∆x. As ∆x belongs to M_x, this implies a ∈ M_x, hence ∆x ≼ a∆_{x•a} by (LG3loc), i.e., ∆x ≼ a∆x since we assumed that the sequence ∆ is constant on M_x. So ∆x is a left-Garside element in M_x.
Simple morphisms
We return to general left-Garside categories and establish a few basic results. As in the case of Garside monoids, an important role is played by the divisors of ∆, a local notion here.
2.1. Simple morphisms and the functor φ. Hereafter, we always use ∆ as the default notation for the left-Garside map involved in the considered structure.
Definition 2.1. Assume that C is a left-Garside category. A morphism f of C is called simple if f ≼ ∆(∂0f) holds. In this case, we denote by f* the unique simple morphism satisfying f f* = ∆(∂0f). The family of all simple morphisms in C is denoted by Hom_sp(C). For x an object, we put φ(x) = ∂1∆(x) and, for f a simple morphism, φ(f) = f**.
By definition, every identity morphism 1 x is a left-divisor of every morphism with source x, hence in particular of ∆(x). Therefore 1 x is simple.
Although straightforward, the following result is fundamental-and it is the main argument for stating (LG 3 ) in the way we did. Lemma 2.3. Assume that C is a left-Garside category.
(i) If f is a simple morphism, so are f * and φ(f ).
(ii) Every right-divisor of a morphism ∆(x) is simple.
Proof. (i) Let x = ∂0f and y = ∂1f. Since f is simple, we have f f* = ∆(x) and, by the second part of (LG3), ∆(x) ≼ f∆(y); left-cancelling f yields f* ≼ ∆(y) = ∆(∂0f*), so f* is simple. Applying the result to f* shows that φ(f) - as well as φk(f) for each positive k - is simple.
(ii) Assume that g is a right-divisor of ∆(x). This means that there exists f satisfying f g = ∆(x), hence g = f * by (LG 1 ). Then g is simple by (i).
Lemma 2.4. Assume that C is a left-Garside category.
(i) The morphisms 1 x are the only left-or right-invertible morphisms in C.
(ii) Every morphism of C is a product of simple morphisms.
(iii) There is a unique way to extend φ into a functor of C into itself.
(iv) The map ∆ is a natural transformation of the identity functor into φ, i.e., for each morphism f, we have

∆(∂0f) φ(f) = f ∆(∂1f). (2.1)

Proof. (i) Assume fg = 1x with f ≠ 1x and g ≠ 1_{∂1f}. Then we have an infinite ≺-increasing sequence in Div(1x), which contradicts (LG0).
(ii) Let f be a morphism of C, and let x = ∂ 0 f . If f is trivial, then it is simple, as observed above. We wish to prove that simple morphisms generate Hom(C). Owing to Lemma 1.2, it is enough to prove that simple morphisms left-generate Hom(C), i.e., that every nontrivial morphism with source x is left-divisible by a simple morphism with source x, in other words by a left-divisor of ∆(x). This is exactly what the first part of Condition (LG 3 ) claims.
(iii) Up to now, φ has been defined on objects and on simple morphisms. Note that, by construction, (2.1) is satisfied for each simple morphism f: indeed, applying Definition 2.1 to f and f* gives the relations f f* = ∆(∂0f) and f* φ(f) = ∆(∂1f), whence ∆(∂0f) φ(f) = f f* φ(f) = f ∆(∂1f). Applying this to f = 1x gives ∆(x) = ∆(x)φ(1x), hence φ(1x) = 1_{φ(x)} by (LG1). Let f be an arbitrary morphism of C, and let f1...fp and g1...gq be two decompositions of f as a product of simple morphisms, which exist by (ii). Repeatedly applying (2.1) to fp, ..., f1 and to gq, ..., g1 gives

∆(∂0f) φ(f1)...φ(fp) = f ∆(∂1f) = ∆(∂0f) φ(g1)...φ(gq).

By (LG1), we deduce φ(f1)...φ(fp) = φ(g1)...φ(gq), and therefore there is no ambiguity in defining φ(f) to be the common value. In this way, φ is extended to all morphisms in such a way that φ is a functor and (2.1) always holds. Conversely, the above definition is clearly the only one that extends φ into a functor.

(iv) We have seen above that (2.1) holds for every morphism f, so nothing new is needed here. See Figure 1 for an illustration.

Lemma 2.5. Assume that C is a left-Garside category. Then, for each object x and each simple morphism f, we have

∆(φ(x)) = φ(∆(x)) and φ(f)* = φ(f*). (2.2)

Proof. By definition, the source of ∆(x) is x and its target is φ(x), hence applying (2.1) with f = ∆(x) yields ∆(x)∆(φ(x)) = ∆(x)φ(∆(x)), hence ∆(φ(x)) = φ(∆(x)) after left-cancelling ∆(x). On the other hand, let x = ∂0f. Then we have f f* = ∆(x), and ∂0(φ(f)) = φ(x). Applying φ and the above relation, we find φ(f)φ(f*) = φ(∆(x)) = ∆(φ(x)), whence φ(f*) = φ(f)* by (LG1).
(iv) We have seen above that (2.1) holds for every morphism f , so nothing new is needed here. See Figure 1 for an illustration. Lemma 2.5. Assume that C is a left-Garside category. Then, for each object x and each simple morphism f , we have Proof. By definition, the source of ∆(x) is x and its target is φ(x), hence applying (2.1) with f = ∆(x) yields ∆(x)∆(φ(x)) = ∆(x)φ(∆(x)), hence ∆(φ(x)) = φ(∆(x)) after left-cancelling ∆(x). On the other hand, let x = ∂ 0 f . Then we have f f * = ∆(x), and ∂ 0 (φ(f )) = φ(x). Applying φ and the above relation, we find Remark 2.6. We can now see that Definition 1.4 is equivalent to Definition 2.10 of [27]: the only difference is that, in the latter, the functor φ is part of the definition. Lemma 2.4(iv) shows that a left-Garside category in our sense is a left-Garside category in the sense of [27]. Conversely, the hypothesis that φ and ∆ satisfy (2.1) implies that, for f : x → y, we have ∆(x)φ(y) = f ∆(y), whence ∆(x) f ∆(y) and f * φ(y) = ∆(y), which, by (LG 1 ), implies φ(f ) = f * * . So every left-Garside category in the sense of [27] is a left-Garside category in the sense of Definition 1.4.
2.2. The case of a locally left-Garside monoid. We now consider the particular case of a category C(M, X) associated with a partial action of a monoid M.
Lemma 2.7. Assume that M is a locally left-Garside monoid with respect to a partial action on X. Then, for each x in X and each a in M_x, there exists a unique element φx(a) of M satisfying ∆x φx(a) = a ∆_{x•a}, and the functor φ of C(M, X) is given by φ((x, a, y)) = (φ(x), φx(a), φ(y)).

Proof. Assume that x • a is defined. By Lemma 2.4(ii), the morphism (x, a, x • a) of C(M, X) can be decomposed into a finite product of simple morphisms. Hence, for each element a in M_x, there exists a unique element a′ satisfying ∆x a′ = a∆_{x•a}, and this is the element we define to be φx(a). Then (x, a, y)∆(y) = ∆(x)(φ(x), φx(a), φ(y)), and, by uniqueness, we deduce φ((x, a, y)) = (φ(x), φx(a), φ(y)).
2.3. Greatest common divisors.
We observe-or rather recall-that left-gcd's always exist in a left-preGarside category. We begin with a standard consequence of the noetherianity assumption (LG 0 ). Lemma 2.8. Assume that C is a left-preGarside category and S is a subset of Hom(C) that contains the identity-morphisms and is closed under right-lcm. Then every morphism has a unique maximal left-divisor that lies in S.
Proof. Let f be an arbitrary morphism. Starting from f0 = 1_{∂0f}, which belongs to S by hypothesis, we construct a ≺-increasing sequence f0 ≺ f1 ≺ ... of elements of S ∩ Div(f); Condition (LG0) implies that the construction stops after a finite number d of steps. Then fd is a maximal left-divisor of f lying in S.

As for uniqueness, assume that g′ and g″ are maximal left-divisors of f that lie in S. By construction, g′ and g″ admit a common right-multiple, namely f, hence, by (LG2), they admit a right-lcm g. By construction, g is a left-divisor of f, and it belongs to S since g′ and g″ do. The maximality of g′ and g″ implies g′ = g = g″.
As for uniqueness, assume that g ′ and g ′′ are maximal left-divisors of f that lie in S. By construction, g ′ and g ′′ admit a common right-multiple, namely f , hence, by (LG 2 ), they admit a right-lcm g. By construction, g is a left-divisor of f , and it belongs to S since g ′ and g ′′ do. The maximality of g and g ′ implies g ′ = g = g ′′ . Proposition 2.9. Assume that C is a left-preGarside category. Then any two morphisms of C sharing a common source admit a unique left-gcd.
Proof. Let S be the family of all common left-divisors of f and g. It contains 1 ∂0f , and it is closed under lcm. A left-gcd of f and g is a maximal left-divisor of f lying in S. Lemma 2.8 gives the result.
2.4. Least common multiples. As for right-lcm, the axioms of left-Garside categories only demand that a right-lcm exists when a common right-multiple does. A necessary condition for such a common right-multiple to exist is to share a common source. This condition is also sufficient. Again we begin with an auxiliary result.

Lemma 2.10. Assume that C is a left-Garside category and f is a product of d simple morphisms with source x. Then f left-divides ∆(x)∆(φ(x))...∆(φ^{d−1}(x)).
Proof. We use induction on d. For d = 1, this is the definition of simplicity. Assume d ≥ 2, write f = f1...fd with f1, ..., fd simple, and put y = ∂1f1. Applying the induction hypothesis to f2...fd, we find

f ≼ f1 ∆(y)∆(φ(y))...∆(φ^{d−2}(y)) = ∆(x)∆(φ(x))...∆(φ^{d−2}(x)) φ^{d−1}(f1) ≼ ∆(x)∆(φ(x))...∆(φ^{d−1}(x)).

The second equality comes from applying (2.1) d − 1 times, and the last relation comes from the fact that φ^{d−1}(f1) is simple with source φ^{d−1}(x).

Proposition 2.11. Assume that C is a left-Garside category. Then any two morphisms of C sharing a common source admit a unique right-lcm.
Proof. Let f, g be any two morphisms with source x. By Lemma 2.4, there exists d such that f and g can each be expressed as a product of at most d simple morphisms. Then, by Lemma 2.10, ∆(x)∆(φ(x))...∆(φ^{d−1}(x)) is a common right-multiple of f and g. Finally, (LG2) implies that f and g admit a right-lcm. The uniqueness of the latter is guaranteed by Lemma 2.4(i).
In a general context of categories, right-lcm's are usually called push-outs (whereas left-lcm's are called pull-backs). So Proposition 2.11 states that every left-Garside category admits push-outs.
Applying the previous results to the special case of categories associated with a partial action gives analogous results for all locally left-Garside monoids.
Corollary 2.12. Assume that M is a locally left-Garside monoid with respect to some partial action of M on X.
(i) Any two elements of M admit a unique left-gcd and a unique right-lcm.
(ii) For each x in X, the set M_x is closed under right-lcm.
Proof. (i) As for left-gcd's, the result directly follows from Proposition 2.9 since, by definition, M is left-preGarside. As for right-lcm's, assume that M is locally left-Garside with left-Garside sequence (∆ x ) x∈X . Let a, b be two elements of M . By definition of a partial action, there exists x in X such that both x • a and x • b are defined. By Proposition 2.11, (x, a, x • a) and (x, b, x • b) admit a right-lcm (x, c, z) in the category C(M, X). By construction, c is a common right-multiple of a and b in M . As M is assumed to satisfy (LG 2 ), a and b admit a right-lcm in M .
(ii) Fix now x in X, and let a, b belong to M x , i.e., assume that x • a and x • b are defined. Then (x, a, x • a) and (x, b, x • b) are morphisms of C(M, X). As above, they admit a right-lcm, which must be (x, c, x • c) where c is the right-lcm of a and b. Hence c belongs to M x .
Regular left-Garside categories
The main interest of Garside structures is the existence of a canonical normal form, the so-called greedy normal form [29]. In this section, we adapt the construction of the normal form to the context of left-Garside categories-this was done in [27] already-and of locally left-Garside monoids. The point here is that studying the computation of the normal form naturally leads to introducing the notion of a regular left-Garside category, crucial in Section 6.
3.1. The head of a morphism. By Lemma 2.4(ii), every morphism in a left-Garside category is a product of simple morphisms. The decomposition need not be unique in general, and the first step for constructing a normal form consists in isolating a particular simple morphism that left-divides the considered morphism. It will be useful to develop the construction in a general framework where the distinguished morphisms need not necessarily be the simple ones.
Notation. We recall that, for f, g in Hom(C), where C is a left-preGarside category, lcm(f, g) is the right-lcm of f and g, when it exists. In this case, we denote by f\g the unique morphism that satisfies

f (f\g) = lcm(f, g). (3.1)

We use a similar notation in the case of a (locally) left-Garside monoid.
Definition 3.1. Assume that C is a left-preGarside category and S is included in Hom(C). We say that S is a seed for C if
(i) S left-generates Hom(C),
(ii) S is closed under the operations lcm and \,
(iii) S is closed under left-divisor.
In other words, S is a seed for C if (i) every nontrivial morphism of C is left-divisible by a nontrivial element of S, (ii) for all f, g in S, the morphisms lcm(f, g) and f\g belong to S whenever they exist, and (iii) for each f in S, the relation h ≼ f implies h ∈ S.
Lemma 3.2. If C is a left-Garside category, then Hom_sp(C) is a seed for C.

Proof. First, Condition (LG3) implies that Hom_sp(C) left-generates Hom(C). Next, assume that f, g are simple morphisms sharing the same source x. By Proposition 2.11, the morphisms lcm(f, g) and f\g exist. By definition, we have f (f\g) = lcm(f, g) ≼ ∆(x) = f f*, so lcm(f, g) is simple and, left-cancelling f, f\g ≼ f*. The morphism f* is simple, and, therefore, f\g, which is a left-divisor of f*, is simple as well by transitivity of ≼.

Finally, Hom_sp(C) is closed under left-divisor by definition.
Lemma 2.8 guarantees that, if S is a seed for C, then every morphism f of C has a unique maximal left-divisor g lying in S, and Condition (i) of Definition 3.1 implies that g is nontrivial whenever f is. Definition 3.3. In the context above, the morphism g is called the S-head of f , denoted H S (f ).
In the case of Hom_sp(C), it is easy to check, for each f in Hom(C), the equality

H_{Hom_sp(C)}(f) = gcd(f, ∆(∂0f));

in this case, we shall simply write H(f) for H_{Hom_sp(C)}(f).
3.2. Normal form. The following result is an adaptation of a result that is classical in the framework of Garside monoids.
Proposition 3.4. Assume that C is a left-preGarside category and S is a seed for C. Then every nontrivial morphism f of C admits a unique decomposition

f = f1 ... fd, (3.3)

where, for each i, the morphism fi is the nontrivial S-head of fi ... fd.

Proof. Let f be a nontrivial morphism of C, and let f1 be the S-head of f. Then f1 belongs to S, it is nontrivial, and we have f = f1 f′ for some unique f′. If f′ is trivial, we are done; otherwise, we repeat the argument with f′. In this way we obtain a ≺-increasing sequence 1 ≺ f1 ≺ f1f2 ≺ ... in Div(f), and (LG0) implies that the construction stops after a finite number of steps, yielding a decomposition of the form (3.3).
Definition 3.5. In the context above, the sequence (f1, ..., fd) of (3.3) is called the S-normal form of f. When S turns out to be the family Hom_sp(C), the S-normal form will simply be called the normal form.

The interest of the S-normal form lies in that it is easily characterized and easily computed. First, one has the following local characterization of normal sequences.

Proposition 3.6. Assume that C is a left-preGarside category and S is a seed for C. Then a sequence of morphisms (f1, ..., fd) is S-normal if and only if, for each i < d, the length-two sequence (fi, fi+1) is S-normal.

This follows from an auxiliary lemma.
Proof. The hypothesis implies that f and g have the same source. Put g′ = f\g. The hypothesis that S is closed under \ and an easy induction on the length of the S-normal form of f show that g′ belongs to S.

Proof of Proposition 3.6. It is enough to consider the case d = 2, from which an easy induction on d gives the general case. So we assume that (f1, f2) and (f2, f3) are S-normal, and aim at proving that f1 is the S-head of f1f2f3.

3.3. A computation rule. We establish now a recipe for inductively computing the S-normal form, namely determining the S-normal form of gf when that of f is known and g belongs to S.
Proposition 3.8. Assume that C is a left-preGarside category, S is a seed for C, (f1, ..., fd) is an S-normal sequence, and g0 belongs to S. Then the S-normal form of g0f1...fd can be computed from left to right, as illustrated in Figure 2.

Proof. For an induction, it is enough to consider the case d = 2, hence to prove the claim corresponding to the commutative diagram of Figure 2.

Figure 2. Adding one S-factor g0 on the left of an S-normal sequence: the first factor of the result is the S-head of g0f1, and so on from left to right.

So assume that h belongs to S and satisfies h ≼ g0f1f2; one then checks that h left-divides the first factor of the new sequence, which gives the claim.

The results of Propositions 3.6 and 3.8 apply in particular when C is left-Garside and S is the family of all simple morphisms, in which case they involve the standard normal form.
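To see the greedy mechanism of Propositions 3.4 and 3.6 in action, here is a minimal Python sketch in the free abelian monoid (N^k, +), a classical Garside monoid with ∆ = (1, ..., 1): the simple elements are the {0,1}-vectors, the head of an element is its componentwise minimum with ∆, and repeatedly splitting off the head yields the normal form. This toy example is ours, not from the paper.

# Greedy normal form in the free abelian monoid (N^k, +), viewed as a
# Garside monoid with Delta = (1, ..., 1): simple elements are {0,1}-vectors.

from typing import List, Tuple

def head(a: Tuple[int, ...]) -> Tuple[int, ...]:
    """Maximal simple left-divisor of a: componentwise min with Delta."""
    return tuple(min(x, 1) for x in a)

def normal_form(a: Tuple[int, ...]) -> List[Tuple[int, ...]]:
    """Greedy decomposition of a into simple elements, as in Proposition 3.4."""
    form = []
    while any(a):                                   # while a is nontrivial
        h = head(a)
        form.append(h)
        a = tuple(x - y for x, y in zip(a, h))      # left-cancel the head
    return form

print(normal_form((3, 1, 0, 2)))
# [(1, 1, 0, 1), (1, 0, 0, 1), (1, 0, 0, 0)]  -- each consecutive pair is normal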
In the case of lcm's, Corollary 2.12 shows how a result established for general left-Garside categories induces a similar result for locally left-Garside monoids. The situation is similar with the normal form, provided some additional assumption is satisfied.

Definition 3.9. Assume that M is a locally left-Garside monoid with respect to a partial action on X. The left-Garside sequence (∆x)x∈X is called coherent if, for all x, y in X and each a in M_x ∩ M_y, the relation a ≼ ∆x implies a ≼ ∆y.
For instance, the family (∆n)n∈N witnessing for the locally left-Garside structure of the monoid B+∞ is coherent. Indeed, a positive n-strand braid a is a left-divisor of ∆n if and only if it is a left-divisor of ∆n′ for every n′ ≥ n. The reason is that being simple is an intrinsic property of positive braids: a positive braid is simple if and only if it can be represented by a braid diagram in which any two strands cross at most once [28].
Proposition 3.10. Assume that M is a locally left-Garside monoid with respect to a partial action on X, with a coherent left-Garside sequence (∆x)x∈X, and put Σ = {a ∈ M | a ≼ ∆x for some x in X}. Then Σ is a seed for M, every element of M admits a unique Σ-normal form, and the counterparts of Propositions 3.6 and 3.8 hold for the Σ-normal form in M.
Proof. Axiom (LG3loc) guarantees that every nontrivial element of M is left-divisible by some nontrivial element of Σ. Then, by hypothesis, 1 ≼ ∆x holds for each x. Next, assume a, b ∈ Σ. There exists x such that x • a and x • b are defined, whence a, b ≼ ∆x by coherence; then lcm(a, b) ≼ ∆x, and we conclude that lcm(a, b) and a\b belong to Σ. Finally, it directly results from its definition that Σ is closed under left-divisor. Hence Σ is a seed for M in the sense of Definition 3.1.

As, by definition, M is a left-preGarside monoid, Proposition 3.4 applies, guaranteeing the existence and uniqueness of the Σ-normal form on M, and so do Propositions 3.6 and 3.8.
Thus, the good properties of the greedy normal form are preserved when the assumption that a global Garside element ∆ exists is replaced by the weaker assumption that local Garside elements ∆ x exist, provided they make a coherent sequence.
3.4. Regular left-Garside categories. It is natural to look for a counterpart of Proposition 3.8 involving right-multiplication by an element of the seed instead of left-multiplication. Such a counterpart exists but, interestingly, the situation is not symmetric, and we need a new argument. The latter demands that the considered category satisfy an additional condition, which is automatically satisfied in a two-sided Garside category, but not in a left-Garside category.
In this section, we only consider the case of a left-Garside category and its simple morphisms, and not the case of a general left-preGarside category with an arbitrary seed-see Remark 3.14. So, we only refer to the standard normal form.
Definition 3.11. We say that a left-Garside category C is regular if the functor φ preserves normality of length-two sequences: if (f1, f2) is normal, then so is (φ(f1), φ(f2)).

Proposition 3.12. Assume that C is a regular left-Garside category, (f1, ..., fd) is the normal form of a morphism f, and g is simple. Then the normal form of fg can be computed as illustrated in Figure 3.

We begin with an auxiliary observation.

Lemma 3.13. Assume that f1 and f2 are simple morphisms with ∂1f1 = ∂0f2. Then (f1, f2) is normal if and only if gcd(f1*, f2) is trivial.

Proof. The following equality always holds: H(f1f2) = f1 gcd(f1*, f2), and (f1, f2) is normal if and only if H(f1f2) = f1.
Proof of Proposition 3.12. As in the case of Proposition 3.8, it is enough to consider the case d = 2, and therefore it is enough to prove the claim corresponding to the commutative diagram of Figure 3. So assume that the diagram is commutative and that h is a simple morphism satisfying h ≼ f′1f′2; applying (2.1), one translates this relation through φ. By Lemma 3.13, the hypothesis that (g1, f′2) is normal implies gcd(g1*, f′2) = 1, and, finally, we deduce h ≼ f′1, i.e., (f′1, f′2) is normal.
Remark 3.14. It might be tempting to mimick the arguments of this section in the general framework of a left-preGarside category C and a seed S, provided some additional conditions are satisfied. However, it is unclear that the extension can be a genuine one. For instance, if we require that, for each f in S, there exists f * in S such that f f * exists and depends on ∂ 0 f only, then the map ∂ 0 f → f f * is a left-Garside map and we are back to left-Garside categories.
3.5. Regularity criteria. We conclude with some sufficient conditions implying regularity. In particular, we observe that, in the two-sided case, regularity is automatically satisfied.
Lemma 3.15. Assume that C is a left-Garside category. Then a sufficient condition for C to be regular is that the functor φ be bijective on Hom(C).
Proof. Assume that C is a left-Garside category and φ is bijective on Hom(C). First, we claim that φ(f) is simple if and only if f is simple. That the condition is sufficient directly follows from Definition 2.1. Conversely, assume that φ(f) is simple. This means that there exists g satisfying φ(f)g = ∆(∂0φ(f)). As φ is surjective, there exists g′ satisfying g = φ(g′). Applying (2.2), we obtain φ(fg′) = φ(f)φ(g′) = ∆(∂0φ(f)) = φ(∆(∂0f)), whence fg′ = ∆(∂0f) by injectivity of φ, and f is simple.

Finally, assume that (f1, f2) is normal and g is a simple morphism left-dividing φ(f1)φ(f2), hence satisfying gh = φ(f1)φ(f2) for some h. As φ is surjective, we have g = φ(g′) and h = φ(h′) for some g′, h′. Moreover, by the claim above, the hypothesis that g is simple implies that g′ is simple as well. Then we have φ(g′h′) = φ(f1f2), whence g′h′ = f1f2 by injectivity, so g′ ≼ f1 since (f1, f2) is normal, and therefore g ≼ φ(f1). Hence (φ(f1), φ(f2)) is normal, and C is regular.
Proposition 3.16. Every Garside category is regular.
Proof. Assume that C is left-Garside with respect to ∆ and right-Garside with respect to ∇ satisfying ∇(∂1∆(x)) = ∆(x) for each object x. For g simple in Hom(C), hence a right-divisor of ∇(∂1g), denote by *g the unique simple morphism satisfying *g g = ∇(∂1g), and put ψ(g) = **g. Then arguments similar to those of Lemma 2.4 give the equality

ψ(f) ∇(∂1f) = ∇(∂0f) f,

which is an exact counterpart of (2.1). Let f : x → y be any morphism in C. Put x′ = φ(x) and y′ = φ(y). By construction, we also have x = ψ(x′) and y = ψ(y′).
Remark. The above proof shows that, if C is a left-Garside category that is Garside, then the associated functor φ is bijective both on Obj(C) and on Hom(C). Let us mention without proof that this necessary condition is actually also sufficient.
Apart from the previous very special case, we can state several weaker regularity criteria that are close to the definition and will be useful in Section 6. We recall that H(f ) denotes the maximal simple morphism left-dividing f .
Lemma 3.17. Assume that C is a left-Garside category. Then C is regular if and only if either of the following equivalent conditions holds:
(i) for all simple morphisms f1, f2 such that f1f2 is defined, we have φ(H(f1f2)) = H(φ(f1)φ(f2)); (3.8)
(ii) for every morphism f, we have φ(H(f)) = H(φ(f)).

Proof. Assume that C is regular and that f1, f2 are simple. Let (f′1, f′2) be the normal form of f1f2, which has length at most 2 by Proposition 3.8. Then regularity implies that (φ(f′1), φ(f′2)) is normal, whence H(φ(f1)φ(f2)) = φ(f′1) = φ(H(f1f2)).

Conversely, assume (3.8) and let (f1, f2) be normal. By construction, H(φ(f1)φ(f2)) = φ(H(f1f2)) = φ(f1), so φ(f2) is such a morphism g with φ(f1)g = φ(f1)φ(f2) and, by (LG1), it is the only one. So the normal form of φ(f1f2) is (φ(f1), φ(f2)), and C is regular.
On the other hand, it is clear that (ii) implies (i).
Self-distributivity
We quit general left-Garside categories and turn to one particular example, namely a certain category (two categories actually) associated with the left self-distributivity law. The latter is the algebraic law

(LD) x(yz) = (xy)(xz),

extensively investigated in [18]. We first review some basic results about this law and the associated free LD-systems, i.e., the binary systems that obey the LD law. The key notion is that of an LD-expansion, with two derived categories LD+0 and LD+ that will be our main subject of investigation from now on.
4.1. Free LD-systems. For each algebraic law (or family of algebraic laws), there exist universal objects in the category of structures that satisfy this law, namely the free systems. Such structures can be uniformly described as quotients of absolutely free structures under convenient congruences.

Definition 4.1. We let Tn be the set of all bracketed expressions involving variables x1, ..., xn, i.e., the closure of {x1, ..., xn} under t1 ⋆ t2 = (t1)(t2). We use T for the union of all sets Tn. Elements of T are called terms.
Typical terms are x1, x2 ⋆ x1, x3 ⋆ (x3 ⋆ x1), etc. It is convenient to think of terms as rooted binary trees with leaves indexed by the variables: the trees associated with the previous terms are, respectively, the single leaf x1, the tree with the two leaves x2, x1, and the tree with left leaf x3 whose right subtree carries the leaves x3, x1. The system (Tn, ⋆) is the absolutely free system (or algebra) generated by x1, ..., xn, and every binary system generated by n elements is a quotient of this system. So is, in particular, the free LD-system of rank n.
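Terms are readily represented on a computer as binary trees; the following Python sketch (our own encoding, not from the paper) will also be reused below to implement the Dα action: a term is either a variable name (a leaf) or a pair of subterms.

# Terms of T as binary trees: a term is either a variable name (a leaf)
# or a pair (t1, t2) standing for t1 * t2. This encoding is ours.

from dataclasses import dataclass
from typing import Union

Term = Union[str, "Node"]

@dataclass(frozen=True)
class Node:
    left: Term
    right: Term
    def __str__(self) -> str:
        return f"({self.left} * {self.right})"

t = Node("x1", Node("x2", "x3"))    # the term x1 * (x2 * x3)
print(t)                            # (x1 * (x2 * x3))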
Definition 4.2. We denote by =LD the least congruence (i.e., equivalence relation compatible with the product) on (Tn, ⋆) that contains all pairs of the form

(t1 ⋆ (t2 ⋆ t3), (t1 ⋆ t2) ⋆ (t1 ⋆ t3)).

Two terms t, t′ satisfying t =LD t′ are called LD-equivalent.
The following result is then standard.

Proposition 4.3. For each n, the quotient structure (Tn/=LD, ⋆) is a free LD-system of rank n, based on the classes of x1, ..., xn.

Definition 4.4. Let t, t′ be terms. We say that t′ is an atomic LD-expansion of t, denoted t →1LD t′, if t′ is obtained from t by replacing some subterm of the form t1 ⋆ (t2 ⋆ t3) with the corresponding term (t1 ⋆ t2) ⋆ (t1 ⋆ t3). We say that t′ is an LD-expansion of t, denoted t →LD t′, if there exists a finite sequence of terms t0, ..., tp satisfying t0 = t, tp = t′, and ti−1 →1LD ti for 1 ≤ i ≤ p.

By definition, being an LD-expansion implies being LD-equivalent, but the converse is not true. For instance, the term (x ⋆ x) ⋆ (x ⋆ x) is an (atomic) LD-expansion of x ⋆ (x ⋆ x), but the latter is not an LD-expansion of the former. However, it should be clear that =LD is generated by →LD, so that two terms t, t′ are LD-equivalent if and only if there exists a finite zigzag t0, t1, ..., t2p satisfying t0 = t, t2p = t′, and t2i →LD t2i+1 ←LD t2i+2 for 0 ≤ i < p. The first nontrivial result about LD-equivalence is that the above zigzags may always be assumed to have length two.
Proposition 4.5. Two terms t, t′ are LD-equivalent if and only if they admit a common LD-expansion.

Next, for each term t, the term φ(t) is recursively defined by

t ⊙⋆ x = t ⋆ x for x a variable, t ⊙⋆ (t1 ⋆ t2) = (t ⊙⋆ t1) ⋆ (t ⊙⋆ t2),
φ(x) = x, φ(t1 ⋆ t2) = φ(t1) ⊙⋆ φ(t2).

The idea is that t ⊙⋆ t′ is obtained by distributing t everywhere in t′ once. Then φ(t) is the image of t when ⋆ is replaced with ⊙⋆ everywhere in the unique expression of t in terms of variables. Examples are given in Figure 4. A straightforward induction shows that t ⊙⋆ t′ is always an LD-expansion of t ⋆ t′ and, therefore, that φ(t) is an LD-expansion of t.
The main step for establishing Proposition 4.5 consists in proving that φ(t) plays, with respect to atomic LD-expansions, a role similar to that of Garside's fundamental braid ∆n with respect to Artin's generators σi - which makes it natural to call φ(t) the fundamental LD-expansion of t.

Figure 5. The fundamental LD-expansion φ(t) of a term t, a concrete example.

Lemma 4.7. For every atomic LD-expansion t′ of a term t, the term φ(t) is an LD-expansion of t′.

Sketch of proof. One uses induction on the size of the involved terms.

Once Lemma 4.7 is established, an easy induction on d shows that, if there exists a length d sequence of atomic LD-expansions connecting t to t′, then φ^d(t) is an LD-expansion of t′. Then a final induction on the length of a zigzag connecting t to t′ shows that, if t and t′ are LD-equivalent, then φ^d(t) is an LD-expansion of t′ for sufficiently large d (namely for d at least the number of "zag"s in the zigzag).
4.3. The category LD+0. A category (and a quiver) is naturally associated with every graph, and the previous results invite us to introduce the category associated with the LD-expansion relation →LD.
Definition 4.8. We denote by LD + 0 the category whose objects are terms, and whose morphisms are all pairs of terms (t, t ′ ) satisfying t → LD t ′ .
By construction, the category LD + 0 is left- and right-cancellative, and Proposition 4.5 means that any two morphisms of LD + 0 with the same source admit a common right-multiple. Moreover, a natural candidate for being a left-Garside map is obtained by defining ∆(t) = (t, φ(t)) for each term t.

Question 4.9. Is LD + 0 a left-Garside category?

Question 4.9 is currently open. We shall see in Section 6.3 that it is one of the many forms of the so-called Embedding Conjecture. The missing part is that we do not know that least common multiples exist in LD + 0 , the problem being that we have no method for proving that a given common LD-expansion of two terms is a least common LD-expansion.
The monoid LD + and the category LD +
The solution for overcoming the above difficulty consists in developing a more precise study of LD-expansions that takes into account the position where the LD-law is applied. This leads to introducing a certain monoid LD + whose elements provide natural labels for LD-expansions, and, from there, a new category LD + , of which LD + 0 is a projection. This category LD + is the one on which a left-Garside structure will be proved to exist.

Figure 6. Action of D α on a term t: the LD-law is applied to expand t at position α, i.e., to replace the subterm t /α , which is t /α0 ⋆ (t /α10 ⋆ t /α11 ), with (t /α0 ⋆ t /α10 ) ⋆ (t /α0 ⋆ t /α11 ); in other words, the light grey subtree is duplicated and distributed to the left of the dark grey and black subtrees.
5.1.
Labelling LD-expansions. By definition, applying the LD-law to a term t means selecting some subterm of t and replacing it with a new LD-equivalent term. When terms are viewed as binary rooted trees, the position of a subterm can be specified by describing the path that connects the root of the tree to the root of the considered subtree, hence typically by a binary address, i.e., a finite sequence of 0's and 1's, according to the convention that 0 means "forking to the left" and 1 means "forking to the right". Hereafter, we use A for the set of all such addresses, and ǫ for the empty address, which corresponds to the position of the root in a tree.
Notation. For t a term and α an address, we denote by t /α the subterm of t whose root (when viewed as a subtree) has address α, if it exists, i.e., if α is short enough.
So, for instance, if t is the term x 1 ⋆ (x 2 ⋆ x 3 ), we have t /0 = x 1 , t /10 = x 2 , whereas t /00 is not defined, and t /ǫ = t holds, as it holds for every term.

Definition 5.2. (See Figure 6.) We say that t ′ is a D α -expansion of t, denoted t ′ = t • D α , if t ′ is the atomic LD-expansion of t obtained by applying LD at the position α, i.e., replacing the subterm t /α , which is t /α0 ⋆ (t /α10 ⋆ t /α11 ), with the term (t /α0 ⋆ t /α10 ) ⋆ (t /α0 ⋆ t /α11 ).
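The partial operation t ↦ t • D α is easy to implement. The sketch below (again an illustration added here, building on the tuple encoding above) represents addresses as strings of '0' and '1'.

```python
def subterm(t, alpha):
    """t/alpha, or None when the address alpha (a '0'/'1' string) is too long."""
    for c in alpha:
        if not isinstance(t, tuple):
            return None
        t = t[0] if c == '0' else t[1]
    return t

def expand(t, alpha):
    """t . D_alpha: replace t/alpha = t1*(t2*t3) by (t1*t2)*(t1*t3);
    returns None if the LD-law is not applicable at alpha."""
    if alpha == '':
        if isinstance(t, tuple) and isinstance(t[1], tuple):
            t1, (t2, t3) = t
            return ((t1, t2), (t1, t3))
        return None
    if not isinstance(t, tuple):
        return None
    s = expand(t[0] if alpha[0] == '0' else t[1], alpha[1:])
    if s is None:
        return None
    return (s, t[1]) if alpha[0] == '0' else (t[0], s)

t = (1, (2, 3))                        # x1 * (x2 * x3)
assert expand(t, '') == ((1, 2), (1, 3))
assert subterm(t, '10') == 2 and subterm(t, '00') is None
```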
By construction, every atomic LD-expansion is a D α -expansion for a unique α. The idea is to use the letters D α as labels for LD-expansions. As arbitrary LD-expansions are compositions of finitely many atomic LD-expansions, hence of D α -expansions, it is natural to use finite sequences of D α to label LD-expansions. In other words, we extend the (partial) action of D α on terms into a (partial) action of finite sequences of D α 's. Thus, for instance, we write t ′ = t • D α D β D γ to indicate that t ′ is the LD-expansion of t obtained by successively applying the LD-law (in the expanding direction) at the positions α, then β, then γ.
If S is a nonempty set, we denote by S * the free monoid generated by S, i.e., the family of all words on the alphabet S (finite sequences of elements of S) equipped with concatenation.
("critical case") Sketch of proof. The commutation relation of the parallel case is clear, as the transformations involve disjoint subterms. The nested cases are commutation relations as well, but, because one of the involved subterms is nested in the other, it may be moved, and even possibly duplicated, when the main expansion is performed, so that the nested expansion(s) have different names before and after the main expansion. Finally, the critical case is specific to the LD-law, and there is no way to predict it except a direct verification, as shown in Figure 7.
Definition 5.4. Let R LD be the family of all relations of Lemma 5.3. We define LD + to be the monoid ⟨{D α | α ∈ A} | R LD ⟩ + .
Lemma 5.3 immediately implies
Proposition 5.5. The partial action of the free monoid {D α | α ∈ A} * on terms induces a well defined partial action of the monoid LD + .
For t a term and a in LD + , we shall naturally denote by t • a the common value of t • w for all sequences w of letters D α that represent a.
Remark. In this way, each LD-expansion receives a label that is an element of LD + , thus becoming a labelled LD-expansion. However, we do not claim that a labelled LD-expansion is the same as an LD-expansion. Indeed, we do not claim that the relations of Lemma 5.3 exhaust all possible relations between the action of the D α 's on terms. A priori, it might be that different elements of LD + induce the same action on terms, so that one pair (t, t ′ ) might correspond to several labelled expansions with different labels. As we shall see below, the uniqueness of the labelling is another form of the above mentioned Embedding Conjecture.

5.3.
The category LD + . We are now ready to introduce our main subject of interest, namely the category LD + of labelled LD-expansions. The starting point is the same as for LD + 0 , but the difference is that, now, we explicitly take into account the way the source is expanded into the target.
Definition 5.6. We denote by LD + the category whose objects are terms, and whose morphisms are triples (t, a, t ′ ) with a in LD + and t • a = t ′ .
In other words, LD + is the category associated with the partial action of LD + on terms, in the sense of Section 1.8. We recall our convention that, when the morphisms of a category are triples, the source is the first entry, and the target is the last entry. So, for instance, a typical morphism in LD + is the triple (t, D 1 , t • D 1 ) with t the term x ⋆ (x ⋆ (x ⋆ x)) (we adopt the default convention that specifying no variable means using some fixed variable x): its source is t, and its target is the term x ⋆ ((x ⋆ x) ⋆ (x ⋆ x)).
5.4.
The element ∆ t . We aim at proving that the category LD + is a left-Garside category. To this end, we need to define the ∆-morphisms. As planned in Section 4.3, the latter will be constructed using the LD-expansions (t, φ(t)). Defining a labelled version of this expansion means fixing some canonical way of expanding a term t into the corresponding term φ(t). A natural solution then exists, namely following the recursive definition of the operations ⊙ and φ.
For w a word in the letters D α , we denote by sh 0 (w) the word obtained by replacing each letter D α of w with the corresponding letter D 0α , i.e., by prepending 0 to every address. Similarly, we denote by sh γ (w) the word obtained by prepending γ to each address in w. The LD-relations of Lemma 5.3 are invariant under shifting: if w and w ′ represent the same element a of LD + , then, for each γ, the words sh γ (w) and sh γ (w ′ ) represent the same element, naturally denoted sh γ (a), of LD + . For each a in LD + , the action of sh γ (a) on a term t corresponds to the action of a on the γ-subterm of t: so, for instance, if t ′ = t • a holds, then t ′ ⋆ t 1 = (t ⋆ t 1 ) • sh 0 (a) holds as well, since the 0-subterm of t ⋆ t 1 is t, whereas that of t ′ ⋆ t 1 is t ′ .
Definition 5.7. For each term t, the elements δ t and ∆ t of LD + are defined by the recursive rules
(5.2) δ t = ∆ t = 1 for t a variable, and, for t = t 0 ⋆ t 1 ,
δ t = D ǫ sh 0 (δ t0 ) sh 1 (δ t1 ), ∆ t = sh 0 (∆ t0 ) sh 1 (∆ t1 ) δ φ(t1) .

Example 5.8. Let t be x ⋆ (x ⋆ (x ⋆ x)). Then t /0 is x, and, therefore, ∆ t/0 is 1. Next, (5.2) gives ∆ t/1 = δ x⋆x = D ǫ . On the other hand, using (5.2) again, we find δ φ(t/1) = δ (x⋆x)⋆(x⋆x) = D ǫ D 0 D 1 , and, finally, we obtain ∆ t = sh 1 (∆ t/1 ) δ φ(t/1) = D 1 D ǫ D 0 D 1 . According to the defining relations of the monoid LD + , this element is also D ǫ D 1 D ǫ . Note the compatibility of the result with the examples of Figures 5 and 7.
Lemma 5.9. For all terms t 0 , t, we have (t 0 ⋆ t) • δ t = t 0 ⊙ t and t • ∆ t = φ(t). The proof is an easy inductive verification.
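The recursion (5.2) is short enough to be checked mechanically. The sketch below (illustration only, reusing `expand` and `phi` from the earlier sketches) represents elements of LD + by words, i.e., lists of addresses; it reproduces Example 5.8 and the second identity of Lemma 5.9 on the example term.

```python
def sh(gamma, word):
    """sh_gamma: prepend the address gamma to every letter of the word."""
    return [gamma + alpha for alpha in word]

def delta(t):
    """delta_t as a word of addresses, following the rules (5.2)."""
    if not isinstance(t, tuple):
        return []                                   # delta_x = 1
    return [''] + sh('0', delta(t[0])) + sh('1', delta(t[1]))

def Delta(t):
    """Delta_t, a word labelling the expansion of t into phi(t)."""
    if not isinstance(t, tuple):
        return []                                   # Delta_x = 1
    return sh('0', Delta(t[0])) + sh('1', Delta(t[1])) + delta(phi(t[1]))

def act(t, word):
    """t . w: apply the operators D_alpha from left to right."""
    for alpha in word:
        t = expand(t, alpha)
    return t

t = (1, (1, (1, 1)))                       # x*(x*(x*x))
assert Delta(t) == ['1', '', '0', '1']     # D_1 D_eps D_0 D_1, cf. Example 5.8
assert act(t, Delta(t)) == phi(t)          # t . Delta_t = phi(t), cf. Lemma 5.9
```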
5.5.
Connection with braids.
Before investigating the category LD + more precisely, we describe the simple connection existing between LD + and the positive braid category B + of Example 1.9.
Lemma 5.10. For each address α, put
(5.5) π(D α ) = σ i if α = 1 i−1 for some i ≥ 1, and π(D α ) = 1 if α contains at least one 0.
Then π induces a surjective monoid homomorphism of LD + onto B + ∞ .

Proof. The point is that each LD-relation of Lemma 5.3 projects under π onto a braid relation. All relations involving addresses that contain at least one 0 collapse to mere equalities. The remaining relations are the nested relations involving two addresses 1 i−2 and 1 j−2 with j − i ≥ 2, which project to the valid braid relations σ i−1 σ j−1 = σ j−1 σ i−1 , and the critical relations at the addresses 1 i−2 , which project, for j = i + 1, to the not less valid braid relations σ i−1 σ j−1 σ i−1 = σ j−1 σ i−1 σ j−1 .
We introduced a category C(M, X) for each monoid M partially acting on X in Definition 1.8. The braid category B + and our current category LD + are of this type. For such categories, natural functors arise from morphisms between the involved monoids, and we fix the following notation.
Definition 5.11. Assume that M, M ′ are monoids acting on sets X and X ′ , respectively. A morphism ϕ : M → M ′ and a map ψ : X → X ′ are called compatible if
(5.6) ψ(x • a) = ψ(x) • ϕ(a)
holds whenever x • a is defined. Then, we denote by [ϕ, ψ] the functor of C(M, X) to C(M ′ , X ′ ) that coincides with ψ on objects and maps (x, a, y) to (ψ(x), ϕ(a), ψ(y)).
Proposition 5.12. Define the right-height ht(t) of a term t by ht(x i ) = 0 and ht(t 0 ⋆ t 1 ) = ht(t 1 ) + 1. Then the morphism π of (5.5) is compatible with ht, and [π, ht] is a surjective functor of LD + onto B + .
The parameter ht(t) is the length of the rightmost branch in t viewed as a tree or, equivalently, the number of final )'s in t viewed as a bracketed expression.
Proof. Assume that (t, a, t ′ ) belongs to Hom(LD + ). Put n = ht(t). The LD-law preserves the right-height of terms, so we have ht(t ′ ) = n as well. The hypothesis that t • a exists implies that the factors D 1 i that occur in some (hence in every) expression of a satisfy i < n − 1. Hence π(a) is a braid of B + n , and n • π(a) is defined. Then the compatibility condition (5.6) is clear, and [π, ht] is a functor of LD + to B + .
Surjectivity is clear, as each braid σ i belongs to the image of π.
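The projection π and the right-height ht are also immediate to compute; the following sketch (an added illustration, with braid words represented as lists of generator indices) continues the code above.

```python
def pi(word):
    """Projection of Lemma 5.10: D_{1^{i-1}} -> sigma_i (the integer i);
    every address containing a 0 is sent to the empty braid word."""
    return [len(alpha) + 1 for alpha in word if '0' not in alpha]

def ht(t):
    """Right-height: the length of the rightmost branch of t."""
    return 0 if not isinstance(t, tuple) else 1 + ht(t[1])

t = (1, (1, (1, 1)))
assert ht(t) == 3
assert pi(Delta(t)) == [2, 1, 2]   # sigma_2 sigma_1 sigma_2, a word for Delta_3
```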
Moreover, a simple relation connects the elements ∆ t of LD + and the braids ∆ n : for every term t, we have π(∆ t ) = ∆ ht(t) .
Main results
We can now state the two main results of this paper.

Theorem 6.1. For each term t, put ∆(t) = (t, ∆ t , φ(t)). Then LD + is a left-Garside category with left-Garside map ∆, and [π, ht] is a surjective right-lcm preserving functor of LD + onto the positive braid category B + .

Theorem 6.2. If the category LD + is regular, then the Embedding Conjecture of [18, Chapter IX] is true.
6.1.
Recognizing left-preGarside monoids. Owing to Proposition 1.11 and to the construction of LD + from the partial action of the monoid LD + on terms, the first part of Theorem 6.1 is a direct consequence of

Proposition 6.3. The monoid LD + equipped with its partial action on terms via self-distributivity is a locally left-Garside monoid with associated left-Garside sequence (∆ t ) t∈T .

This is the result we shall prove now. The first step is to prove that LD + is left-preGarside. To do it, we appeal to general tools that we now describe. As for (LG 0 ), we have an easy sufficient condition when the action turns out to be monotone in the following sense.

Proposition 6.4. Assume that M has a partial action on X and there exists a map µ : X → N such that a ≠ 1 implies µ(x • a) > µ(x). Then M satisfies (LG 0 ).
Proof. Assume that (a 1 , ..., a ℓ ) is a ≺-increasing sequence in Div(a). By definition of a partial action, there exists x in X such that x • a is defined, and this implies that x • a i is defined for each i. Next, the hypothesis that (a 1 , ..., a ℓ ) is ≺-increasing implies that there exist b 2 , ..., b ℓ ≠ 1 satisfying a i = a i−1 b i for each i. We find x • a i = (x • a i−1 ) • b i , whence µ(x • a i ) > µ(x • a i−1 ), and the sequence (µ(x • a 1 ), ..., µ(x • a ℓ )) is increasing. As µ(x • a 1 ) ≥ µ(x) holds, we deduce ℓ ≤ µ(x • a) − µ(x) + 1 and, therefore, M satisfies (LG 0 ).
As for conditions (LG 1 ) and (LG 2 ), we appeal to the subword reversing method of [21]. We recall that S * denotes the free monoid generated by S. We use ǫ for the empty word.

Definition 6.5. Let S be any set. A map C : S × S → S * is called a complement on S. Then, we denote by R C the family of all relations aC(a, b) = bC(b, a) with a ≠ b in S, and by C the unique (possibly partial) extension of C to a map of S * × S * to S * that obeys the recursive rules
(6.1) C(a, a) = ǫ, C(u, ǫ) = ǫ, C(ǫ, v) = v,
C(u, v 1 v 2 ) = C(u, v 1 ) C(C(v 1 , u), v 2 ), C(u 1 u 2 , v) = C(u 2 , C(u 1 , v)).

Proposition 6.6 ([21] or [18, Prop. II.2.5]). Assume that M is a monoid satisfying (LG 0 ) and admitting the presentation ⟨S, R C ⟩ + , where C is a complement on S. Then the following are equivalent: (i) The monoid M is left-preGarside; (ii) For all a, b, c in S, we have
(6.2) C(C(C(a, b), C(a, c)), C(C(b, a), C(b, c))) = ǫ.
6.2. Proof of Theorem 6.1. We shall now prove that the monoid LD + equipped with its partial action on terms via left self-distributivity satisfies the criteria of Section 6.1. Here, and in most subsequent developments, we heavily appeal to the results of [18], some of which have quite intricate proofs.
Proof of Theorem 6.1. First, each term t has a natural size µ(t), namely the number of inner nodes in the associated binary tree. Then the hypothesis of Proposition 6.4 clearly holds: if t ′ is a nontrivial LD-expansion of t, then the size of t ′ is larger than that of t. Then, by Proposition 6.4, LD + satisfies (LG 0 ). Next, we observe that the presentation of LD + in Definition 5.4 is associated with a complement on the set {D α | α ∈ A}. Indeed, for each pair of distinct addresses α, β, there exists in the list R LD exactly one relation of the type D α ... = D β .... Hence, in view of Proposition 6.6, and because we know that LD + satisfies (LG 0 ), it suffices to check that (6.2) holds in LD + for each triple D α , D β , D γ . This is Proposition VIII.1.9 of [18]. Hence LD + satisfies (LG 1 ) and (LG 2 ), and it is a left-preGarside monoid.
Let us now consider the elements ∆ t of Definition 5.7. First, by Lemma 5.9, t • ∆ t is defined for each term t, and it is equal to φ(t). Next, assume that t • D α is defined. Then Lemma VII.3.16 of [18] states that D α is a left-divisor of ∆ t in LD + , whereas Lemma VII.3.17 of [18] states that ∆ t is a left-divisor of D α ∆ t•Dα . Hence Condition (LG ℓoc 3 ) of Definition 1.10 is satisfied, and the sequence (∆ t ) t∈T is a left-Garside sequence in LD + . Thus LD + is a locally left-Garside monoid, which completes the proof of Proposition 6.3.
By Proposition 1.11, we deduce that LD + , which is C(LD + , T ) by definition, is a left-Garside category with left-Garside map ∆ as defined in Theorem 6.1.
As for the connection with the braid category B + , we saw in Proposition 5.12 that [π, ht] is a surjective functor of LD + onto B + , and it just remains to prove that it preserves right-lcm's. This follows from the fact that the homomorphism π of LD + to B + ∞ preserves right-lcm's, which in turn follows from the fact that LD + and B + ∞ are associated with complements C and C̄ satisfying, for each pair of addresses α, β,
(6.3) C̄(π(D α ), π(D β )) = π(C(D α , D β )).
Indeed, let a, b be any two elements of LD + . Let u, v be words on the alphabet {D α | α ∈ A} that represent a and b, respectively. By Proposition II.2.16 of [18], the word C(u, v) exists, and uC(u, v) represents lcm(a, b). Then π(uC(u, v)) represents a common right-multiple of the braids π(a) and π(b), and, by (6.3), we have π(uC(u, v)) = π(u)C̄(π(u), π(v)).
This shows that the braid represented by π(u C(u, v)), which is π (lcm(a, b)) by definition, is the right-lcm of the braids π(a) and π(b). So the morphism π preserves right-lcm's, and the proof of Theorem 6.1 is complete.
6.3. The Embedding Conjecture. From the viewpoint of self-distributive algebra, the main benefit of the current approach might be that it leads to a natural program for possibly establishing the so-called Embedding Conjecture. This conjecture, at the moment the most puzzling open question involving free LD-systems, can be stated in several equivalent forms.
Proposition 6.7. [18, Section IX.6] The following are equivalent: (i) The monoid LD + embeds in a group; (ii) The monoid LD + admits right-cancellation; (iii) The categories LD + 0 and LD + are isomorphic; (iv) The functor φ associated with the category LD + is injective; (v) For each term t, the LD-expansions of t make an upper-semilattice; (vi) The relations of Lemma 5.3 generate all relations that connect the action of D α 's by self-distributivity.
Each of the above properties is conjectured to be true: this is the Embedding Conjecture.
We turn to the proof of Theorem 6.2. So our aim is to show that the Embedding Conjecture is true whenever the category LD + is regular. To this end, we shall use some technical results from [18], plus the following criterion, which enables one to prove right-cancellability by only using simple morphisms. Proposition 6.8. Assume that C is a left-Garside category and the associated functor φ is injective on Obj(C). Then the following are equivalent: (i) Hom(C) admits right-cancellation; (ii) The functor φ is injective on Hom(C). Moreover, if C is regular, (i) and (ii) are equivalent to (iii) The functor φ is injective on simple morphisms of C.
So, in order to prove Theorem 6.2, it suffices to show that the category LD + satisfies the hypotheses of Proposition 6.8, and this is what we do now. Lemma 6.9. The functor φ of LD + is injective on objects, i.e., on terms.
Proof. We show using induction on the size of t that φ(t) determines t. The result is obvious if t has size 0, i.e., when t is a single variable x i . Assume t = t 0 ⋆ t 1 . By construction, the term φ(t) is obtained by substituting every variable x i occurring in the term φ(t 1 ) with the term φ(t 0 ) ⋆ x i . Hence φ(t 0 ) is the 1 n−1 0-subterm of φ(t), where n is the common right-height of t and φ(t). From there, φ(t 1 ) can be recovered by replacing the subterms φ(t 0 ) ⋆ x i of φ(t) by x i . Then, by induction hypothesis, t 0 and t 1 , hence t, can be recovered from φ(t 1 ) and φ(t 0 ). Lemma 6.10. The functor φ of LD + is injective on simple morphisms.
We can now complete the argument.
Proof of Theorem 6.2. The category LD + is left-Garside, with an associated functor φ that is injective both on objects and on simple morphisms. By Proposition 6.8, if LD + is regular, then Hom(LD + ) admits right-cancellation, which is one of the forms of the Embedding Conjecture, namely (ii) in Proposition 6.7.
6.4. A program for proving the regularity of LD + . At this point, we are left with the question of proving (or disproving) the following.

Conjecture 6.11. The left-Garside category LD + is regular.
The regularity criteria of Section 3.5 lead to a natural program for possibly proving Conjecture 6.11 and, therefore, the Embedding Conjecture.
We begin with a preliminary observation.
Lemma 6.12. The left-Garside sequence (∆ t ) t∈T on LD + is coherent (in the sense of Definition 3.9).
Proof. The question is to prove that, if t is a term and t • a is defined and a ≼ ∆ t ′ holds for some t ′ , then we necessarily have a ≼ ∆ t . This is a direct consequence of Proposition VIII.5.1 of [18]. Indeed, the latter states that an element a is a left-divisor of some element ∆ t if and only if a can be represented by a word in the letters D α that has a certain special form. This property does not involve the term t, and it implies that, if a left-divides ∆ t , then it automatically left-divides every element ∆ t ′ such that t ′ • a is defined.
So, according to Proposition 3.10, we obtain a well defined notion of a simple element in LD + : an element a of LD + is called simple if it left-divides at least one element of the form ∆ t . Then simple elements form a seed in LD + , and are eligible for a normal form satisfying the general properties described in Section 3. In this context, applying Proposition 3.18(ii) leads to the following criterion.

Proposition 6.13. Assume that, for each term t and for all simple elements a, b of LD + such that t • a and t • b are defined, we have
(6.4) φ t (gcd(a, b)) = gcd(φ t (a), φ t (b)).
Then Conjecture 6.11 is true.
Proof. Let f, g be two simple morphisms in LD + that satisfy ∂ 0 f = ∂ 0 g = t.
By definition, f has the form (t, a, t • a) for some a satisfying a ≼ ∆ t , hence simple in LD + . Similarly, g has the form (t, b, t • b) for some simple element b, and we have gcd(f, g) = (t, gcd(a, b), t • gcd(a, b)). On the other hand, Lemma 2.7 gives gcd(φ(f ), φ(g)) = (φ(t), gcd(φ t (a), φ t (b)), φ(t) • gcd(φ t (a), φ t (b))). If (6.4) holds, we deduce φ(gcd(f, g)) = gcd(φ(f ), φ(g)). Moreover, if t • a is defined, then a ≠ 1 implies µ(φ(t • a)) > µ(φ(t)), whence φ t (a) ≠ 1. Then, Proposition 3.18(ii) implies that LD + is regular.
The reader may similarly check that (6.4) holds for t = (x⋆(x⋆x))⋆(x⋆(x⋆x)) with a = D 0 and b = D 1 ; the values are φ t (D 0 ) = D 000 D 010 D 100 D 110 and φ t (D 1 ) = D ǫ . Proposition 6.13 leads to a realistic program that would reduce the proof of the Embedding Conjecture to a (long) sequence of verifications. Indeed, it is shown in Proposition VIII.5.15 of [18] that every simple element a of LD + admits a unique expression of the form D α1 [e 1 ] ... D αp [e p ] with α 1 > ... > α p , where D α [e] denotes D α1 e−1 ...D α1 D α and > refers to the unique linear ordering of A satisfying α > α0β > α1γ for all α, β, γ. In this way, we associate with every simple element a of LD + a sequence of nonnegative integers (e α ) α∈A that plays the role of a sequence of coordinates for a. Then it should be possible to
– express the coordinates of φ t (a) in terms of those of a,
– express the coordinates of gcd(a, b) in terms of those of a and b.
If this were done, proving (or disproving) the equalities (6.4) should be easy.
Remark. Contrary to the braid relations, the LD-relations of Lemma 5.3 are not symmetric. However, it turns out that the presentation of LD + is also associated with what can naturally be called a left-complement, namely a counterpart of a (right) complement involving left-multiples. But the latter fails to satisfy the counterpart of (6.2), and it is extremely unlikely that one can prove that the monoid LD + is right-cancellative (which would imply the Embedding Conjecture) using some version of Proposition 6.6.
Reproving braid properties
Proposition 5.12 and Theorem 6.1 connect the Garside structures associated with self-distributivity and with braids, both being previously known to exist. In this section, we show how the Garside structure of braids can be (re)proved to exist assuming only the existence of the Garside structure of LD + . So, for a while, we pretend that we do not know that the braid monoid B + n has a Garside structure, and we only know about the Garside structure of LD + .

7.1.
Projections. We begin with a general criterion guaranteeing that the projection of a locally left-Garside monoid is again a locally left-Garside monoid.
If S, S̄ are two alphabets and π is a map of S to S̄ * (the free monoid on S̄), we still denote by π the alphabetical homomorphism of S * to S̄ * that extends π, defined by π(s 1 ...s ℓ ) = π(s 1 )...π(s ℓ ).
Lemma 7.1. Assume that
• M is a locally left-Garside monoid associated with a complement C on S;
• M̄ is a monoid associated with a complement C̄ on S̄ and satisfying (LG 0 );
• π : S → S̄ ∪ {ǫ} satisfies π(S) ⊇ S̄ and
(7.1) C̄(π(a), π(b)) = π(C(a, b)) for all a, b in S.
Then M̄ is left-preGarside, and π induces a surjective right-lcm preserving homomorphism of M onto M̄.
Proof. An easy induction shows that (7.1) extends to arbitrary words: whenever C(u, v) is defined, we have
(7.2) C̄(π(u), π(v)) = π(C(u, v)).
Let ā, b̄, c̄ be elements of S̄. By hypothesis, there exist a, b, c in S satisfying π(a) = ā, π(b) = b̄, π(c) = c̄. As M is left-preGarside, by the direct implication of Proposition 6.6, the relation (6.2) involving a, b, c is true in S * . Applying π and using (7.2), we deduce that the relation (6.2) involving ā, b̄, c̄ is true in S̄ * . Then, as M̄ satisfies (LG 0 ) by hypothesis, the converse implication of Proposition 6.6 implies that M̄ is left-preGarside. Then, by definition, the relations āC̄(ā, b̄) = b̄C̄(b̄, ā) with ā, b̄ ∈ S̄ make a presentation of M̄. Now, for a, b in S, we find π(a)C̄(π(a), π(b)) = π(aC(a, b)) = π(bC(b, a)) = π(b)C̄(π(b), π(a)) in M̄, which shows that the homomorphism of S * to M̄ that extends π induces a well defined homomorphism of M to M̄. This homomorphism, still denoted π, is surjective since, by hypothesis, its image includes S̄.
Finally, we claim that π preserves right-lcm's. The argument is almost the same as in the proof of Theorem 6.1, with the difference that, here, we do not assume that common multiples necessarily exist. Let a, b be two elements of M that admit a common right-multiple. Let u, v be words on the alphabet S that represent a and b, respectively. By Proposition II.2.16 of [18], the word C(u, v) exists, and uC(u, v) represents lcm(a, b). Then the word π(uC(u, v)) represents a common right-multiple of π(a) and π(b) in M̄, and, by (7.2), we have π(uC(u, v)) = π(u)C̄(π(u), π(v)), which shows that the element represented by π(uC(u, v)), which is π(lcm(a, b)) by definition, is the right-lcm of π(a) and π(b) in M̄.
We turn to locally left-Garside monoids, i.e., we add partial actions in the picture. Although lengthy, the following result is easy. It just says that, if M is a locally left-Garside monoid, then its image under a projection that is compatible with the various ingredients of the Garside structure is again locally left-Garside.
Proposition 7.2. Assume that
• M is a locally left-Garside monoid associated with a complement C on S and (∆ x ) x∈X is a left-Garside sequence for the involved action of M on X;
• M̄ is a monoid associated with a complement C̄ on S̄ that has a partial action on X̄ and satisfies (LG 0 );
• π : S → S̄ ∪ {ǫ} satisfies (7.2), θ : S̄ → S is a section for π, ̟ : X → X̄ is a surjection, and
(7.3) ̟(x • a) = ̟(x) • π(a) whenever x • a is defined,
(7.4) x • θ(ā) is defined if and only if ̟(x) • ā is defined, for x in X and ā in S̄,
(7.5) π(∆ x ) depends only on ̟(x).
For x̄ in X̄, let ∆ x̄ be the common value of π(∆ x ) for ̟(x) = x̄. Then M̄ is locally left-Garside, with associated left-Garside sequence (∆ x̄ ) x̄∈X̄ , and π induces a surjective right-lcm preserving homomorphism of M onto M̄.
Proof. First, the hypotheses of Lemma 7.1 are satisfied, hence M̄ is left-preGarside and π induces a surjective lcm-preserving homomorphism of M onto M̄.
Next, by (7.5), the definition of the elements ∆ x̄ for x̄ in X̄ is unambiguous. It remains to check that (∆ x̄ ) x̄∈X̄ is a left-Garside sequence with respect to the action of M̄ on X̄. So, assume x̄ ∈ X̄, and let x be any element of X satisfying ̟(x) = x̄. First, x̄ • ∆ x̄ is defined, since x • ∆ x is defined and (7.3) gives ̟(x • ∆ x ) = x̄ • π(∆ x ) = x̄ • ∆ x̄ . Assume now a ≠ 1 and x̄ • a is defined. As S̄ generates M̄, we can assume a ∈ S̄ without loss of generality. By (7.4), the existence of x̄ • a implies that of x • θ(a). As (∆ x ) x∈X is a left-Garside sequence for the action of M on X, we have a ′ ≼ ∆ x for some a ′ ≠ 1 left-dividing θ(a). By construction, θ(a) lies in S, and it is an atom in M . So the only possibility is a ′ = θ(a), i.e., we have θ(a) ≼ ∆ x . Applying π, we deduce a ≼ ∆ x̄ in M̄.
Finally, under the same hypotheses, we have ∆ x ≼ θ(a) ∆ x•θ(a) in M . Applying π and using (7.3) and (7.5), we deduce ∆ x̄ ≼ a ∆ x̄•a in M̄, always under the hypothesis a ∈ S̄. The case of an arbitrary element a for which x̄ • a exists then follows from an easy induction on the length of an expression of a as a product of elements of S̄.
It should then be clear that, under the hypotheses of Proposition 7.2, [π, ̟] is a surjective, right-lcm preserving functor of C(M, X) to C(M̄, X̄).
7.2.
The case of LD + and B + . Applying the criterion of Section 7.1 to the categories LD + and B + is easy.
Proposition 7.3. The monoid B + ∞ is a locally left-Garside monoid with respect to its action on N, and (∆ n ) n∈N is a left-Garside sequence in B + ∞ .

Proof. Hereafter, we denote by C the complement on {D α | α ∈ A} associated with the LD-relations of Lemma 5.3, and by C̄ the complement on {σ i | i ≥ 1} associated with the braid relations of (1.2). We consider the map π of Lemma 5.10, and the right-height ht from terms to nonnegative integers. Finally, we define θ by θ(σ i ) = D 1 i−1 . We claim that these data satisfy all hypotheses of Proposition 7.2. The verifications are easy. That the complements C and C̄ satisfy (7.1) follows from a direct inspection. For instance, we find π(C(D 1 , D ǫ )) = π(D ǫ D 1 D 0 ) = σ 1 σ 2 = C̄(σ 2 , σ 1 ) = C̄(π(D 1 ), π(D ǫ )), and similar relations hold for all pairs of generators D α , D β .
Then, the action of LD + on terms preserves the right-height, whereas the action of braids on N is trivial, so (7.3) is clear. Next, θ is a section for π, and we observe that t • θ(σ i ) is defined if and only if the right-height of t is at least i + 1, hence if and only if ht(t) • σ i is defined, so (7.4) is satisfied. Finally, we observed in Proposition 5.12 that π(∆ t ) is equal to ∆ ht(t) , hence it depends on ht(t) only. So (7.5) is satisfied.
Therefore, Proposition 7.2 applies, and it gives the expected result.

Corollary 7.4. (i) The braid category B + is a left-Garside category. (ii) For each n, the braid monoid B + n is a Garside monoid.

Proof. Point (i) follows from Proposition 1.11 once we know that B + ∞ is locally left-Garside. Point (ii) follows from Proposition 1.12 since, for each n, the submonoid B + n of B + ∞ is (B + ∞ ) n in the sense of Definition 1.7. Thus, as announced, the Garside structure of braids can be recovered from the left-Garside structure of LD + .
Intermediate categories
We conclude with a different topic. The projection of the self-distributivity category LD + to the braid category B + described above is rather trivial in that terms are involved through their right-height only and the corresponding action of braids on integers is just constant. Actually, one can consider alternative projections corresponding to less trivial braid actions and leading to two-step projections LD + −→ C(B + ∞ , X) −→ B + . We shall describe two such examples.
Figure 9. Compatibility of the action of D 1 i−1 on sequences of right variables and of the action of σ i on sequences of integers: the sequence (..., p, q, ...) is mapped to (..., q, p, ...).

The formula
(8.1) (p 1 , ..., p n ) • σ i = (p 1 , ..., p i−1 , p i+1 , p i , p i+2 , ..., p n )
defines an action of B + n on N n , whence a partial action of B + ∞ on N * , where N * denotes the set of all finite sequences in N. In this way, we obtain a new category C(B + ∞ , N * ), hereafter denoted B̃ + , which clearly projects to B + . We shall now describe an explicit projection of LD + onto this category. We recall that terms have been defined to be bracketed expressions constructed from a fixed sequence of variables x 1 , x 2 , ... (or as binary trees with leaves labelled with variables x p ), and that, for t a term and α a binary address, t /α denotes the subterm of t whose root, when t is viewed as a binary tree, has address α.

Proposition 8.1. Define π̄(t), for ht(t) = n, by π̄(t) = (var R (t /0 ), var R (t /10 ), ..., var R (t /1 n−1 0 )), var R (t) denoting the index of the rightmost variable occurring in t. Then [π, π̄] is a surjective functor of LD + onto B̃ + .
So, a typical morphism of B̃ + is ((1, 2, 2), σ 1 , (2, 1, 2)), and the projection of terms to sequences of integers maps, for instance, the term x p1 ⋆ (x p2 ⋆ (... ⋆ (x pn ⋆ x pn+1 )...)) to (p 1 , p 2 , ..., p n ).

Sketch of proof. The point is to check that the action of the LD-law on the indices of the right variables of the subterms with addresses 1 i 0 is compatible with the action of braids on sequences of integers. It suffices to consider the basic case of D 1 i−1 , and the expected relation is shown in Figure 9. Details are easy. Note that, for symmetry reasons, the category B̃ + is not only left-Garside, but even Garside.
8.2.
Action of braids on LD-systems. The action of positive braids on sequences of integers defined in (8.1) is just one example of a much more general situation, namely the action of positive braids on sequences of elements of any LD-system. It is well known (see, for instance, [18, Chapter I]) that, if (S, ⋆) is an LD-system, i.e., ⋆ is a binary operation on S that obeys the LD-law, then
(8.2) (x 1 , ..., x n ) • σ i = (x 1 , ..., x i−1 , x i ⋆ x i+1 , x i , x i+2 , ..., x n )
induces a well defined action of the monoid B + n on the set S n , and, from there, a partial action of B + ∞ on the set S * of all finite sequences of elements of S.

Proposition 8.2. Assume that (S, ⋆) is an LD-system, and let B + S be the category associated with the partial action (8.2) of B + ∞ on S * . Fix a sequence s = (s 1 , s 2 , ...) of elements of S, and define π s , for ht(t) = n, by π s (t) = (eval s (t /0 ), ..., eval s (t /1 n−1 0 )), eval s (t) being the evaluation of t in (S, ⋆) when x p is given the value s p for each p. Then [π, π s ] is a functor of LD + to B + S .
We skip the proof, which is an easy verification similar to that of Proposition 8.1. When (S, ⋆) is N equipped with x ⋆ y = y and we map x p to p for each p, we obtain the category B̃ + of Proposition 8.1. In this case, the (partial) action of braids is not constant as in the case of B + , but it factors through an action of the associated permutations, and it is therefore far from being free. By contrast, if we take for S the braid group B ∞ with the operation ⋆ defined by x ⋆ y = x sh(y) σ 1 sh(x) −1 , where we recall sh is the shift endomorphism of B ∞ that maps σ i to σ i+1 for each i, and if we send x p to 1 (or to any other fixed braid) for each p, then the corresponding action (8.2) of B + ∞ on (B ∞ ) * is free, in the sense that a = a ′ holds whenever s • a = s • a ′ holds for at least one sequence s in (B ∞ ) * : this follows from Lemma III.1.10 of [23]. This suggests that the associated category C(B + ∞ , (B ∞ ) * ) has a very rich structure.
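The braid action (8.2) is one line of code. The following sketch (an added illustration; the function name and the list-based braid words are choices made here) implements it for an arbitrary LD-operation passed as a parameter.

```python
def act_braid(seq, braid, star):
    """Right action (8.2) of a positive braid word (a list of 1-based indices i,
    one per generator sigma_i) on a tuple over an LD-system with operation star."""
    s = list(seq)
    for i in braid:
        s[i - 1], s[i] = star(s[i - 1], s[i]), s[i - 1]
    return tuple(s)

# With the trivial LD-operation x * y = y, the action merely permutes entries,
# recovering the action (8.1) on sequences of integers:
assert act_braid((1, 2, 2), [1], lambda x, y: y) == (2, 1, 2)
```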
Appendix: Other algebraic laws
The above approach of self-distributivity can be developed for other algebraic laws as well. However, at least from the viewpoint of Garside structures, the case of self-distributivity seems quite particular.
The case of associativity. Associativity is the law x(yz) = (xy)z. It is syntactically close to self-distributivity, the only difference being that the variable x is not duplicated in the right hand side. Let us say that a term t ′ is an A-expansion of another term t if t ′ can be obtained from t by applying the associativity law in the left-to-right direction only, i.e., by iteratively replacing subterms of the form t 1 ⋆ (t 2 ⋆ t 3 ) by the corresponding term (t 1 ⋆ t 2 ) ⋆ t 3 . Then the counterpart of Proposition 4.5 is true, i.e., two terms t, t ′ are equivalent up to associativity if and only if they admit a common A-expansion, a trivial result since every size n term t admits as an A-expansion the term φ(t) obtained from t by pushing all brackets to the left.
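The maximal A-expansion described above (pushing all brackets to the left) is easy to compute; here is a short illustrative sketch in the same tuple encoding, added for concreteness.

```python
def leaves(t):
    """The left-to-right sequence of variables of t."""
    return [t] if not isinstance(t, tuple) else leaves(t[0]) + leaves(t[1])

def comb_left(t):
    """Push all brackets to the left: the term phi(t) for associativity."""
    xs = leaves(t)
    out = xs[0]
    for leaf in xs[1:]:
        out = (out, leaf)
    return out

assert comb_left((1, (2, (3, 4)))) == (((1, 2), 3), 4)
```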
As in Sections 4.3 and 5.2, we can introduce the category A + 0 whose objects are terms, and whose morphisms are pairs (t, t ′ ) with t ′ an A-expansion of t. As in Section 5.1, we can take positions into account, using A α when associativity is applied at address α, and introduce a monoid A + that describes the connections between the generators A α [22]. Here the relations of Lemma 5.3 are to be replaced by analogous new relations, among which the MacLane-Stasheff Pentagon relations A α ² = A α1 A α A α0 . The monoid A + turns out to be a well known object: indeed, it is (isomorphic to) the submonoid F + of R. Thompson's group F generated by the standard generators x 1 , x 2 , ... [11]. Also, the orbits of the partial action of the monoid A + on terms are well known: these are the (type A) associahedra, equipped with the structure known as Tamari lattice. Now, as in Section 5, we can introduce the category A + , whose objects are terms, and whose morphisms are triples (t, a, t ′ ) with a in A + and t • a = t ′ . Using ψ(t) for the term obtained from t by pushing all brackets to the right, we have:

Proposition. The categories A + 0 and A + are isomorphic; A + 0 is left-Garside with Garside map t → (t, φ(t)), and right-Garside with Garside map t → (ψ(t), t).
This result might appear promising. It is not! Indeed, the involved Garside structure(s) is trivial: the maps φ and ψ are constant on each orbit of the action of A + on terms, and it easily follows that every morphism in A + 0 and A + is left-simple and right-simple so that, for instance, the greedy normal form of any morphism always has length one. The only observation worth noting is that A + provides an example where the left- and the right-Garside structures are not compatible, and, therefore, we have no Garside structure in the sense of Definition 1.6.
Central duplication. We conclude with still another example, namely the exotic central duplication law x(yz) = (xy)(yz) of [20]. The situation there turns out to be similar to that of self-distributivity, and a nontrivial left-Garside structure appears. As there is no known connection between this law and other widely investigated objects like braids, it is probably not necessary to go into details.
"year": 2008,
"sha1": "1eda9e393bd4b3ad68c25ad7739428be8c76d7fa",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5802/ambp.263",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "0f4242a34d75c28713a8a0975461a15b7eb2af24",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Variational approaches to data assimilation, and weakly constrained four dimensional variation (WC-4DVar) in particular, are important in the geosciences but also in other communities (often under different names). The cost functions and the resulting optimal trajectories may have a probabilistic interpretation, for instance by linking data assimilation with Maximum Aposteriori (MAP) estimation. This is possible in particular if the unknown trajectory is modelled as the solution of a stochastic differential equation (SDE), as is increasingly the case in weather forecasting and climate modelling. In this case, the MAP estimator (or"most probable path"of the SDE) is obtained by minimising the Onsager--Machlup functional. Although this fact is well known, there seems to be some confusion in the literature, with the energy (or"least squares") functional sometimes been claimed to yield the most probable path. The first aim of this paper is to address this confusion and show that the energy functional does not, in general, provide the most probable path. The second aim is to discuss the implications in practice. Although the mentioned results pertain to stochastic models in continuous time, they do have consequences in practice where SDE's are approximated by discrete time schemes. It turns out that using an approximation to the SDE and calculating its most probable path does not necessarily yield a good approximation to the most probable path of the SDE proper. This suggest that even in discrete time, a version of the Onsager--Machlup functional should be used, rather than the energy functional, at least if the solution is to be interpreted as a MAP estimator.
Introduction
In the geosciences, the term data assimilation refers to a variety of mathematical and numerical techniques whereby time series of observations are employed to estimate states or trajectories of relevant dynamical models. In other words, plausible states or orbits are determined which, on the one hand, are consistent with a given dynamical model and, on the other hand, are consistent with a given set of observations. Many different approaches to data assimilation exist, based on very different philosophies and premises, see for instance Ide et al. (1997); Kalnay (2001); Evensen (2007), but this list is by no means complete.
Both within the atmospheric sciences, but also in other branches of physics and engineering, variational approaches have gained widespread attention (although the nomenclature may differ considerably). A particular instance of this idea is known as weakly constrained four dimensional variational assimilation in the atmospheric sciences; basically, a series of model states is found that minimises a cost functional which quantifies both the deviations from the observed data as well as the misfit with the given model. An early paper on discrete time WC-4DVar in atmospheric sciences is Derber (1989), see also Kalnay (2001). The cost function is almost invariably some form of quadratic error, and for this reason, the technique is known as the minimum energy estimator in the engineering community, see for instance Jazwinski (1970) or Mortensen (1968) (the latter publication goes further and derives an incremental version).
In the atmospheric sciences and in particular in climate modelling, stochastic models are becoming ever more important, despite having a long and distinguished history already (see for instance Imkeller and von Storch, 2001; Franzke et al., 2015, and references therein). Mathematically speaking, climate models increasingly take the form of stochastic differential equations (SDE's). Consequently, data assimilation into such models needs well understood foundations. In particular, if variational data assimilation into SDE's is envisaged, the question arises as to what cost function should be used, and in particular whether the cost functions and the resulting optimal trajectories have any probabilistic interpretation. A possible avenue is to link variational data assimilation with Maximum Aposteriori (MAP) estimation. The MAP estimator of a random variable given some observations is essentially the maximiser of the posterior, that is, of the conditional density of the unknown random variable given the observations. In some sense, the MAP estimator can be interpreted as the "most probable value" of the unknown random variable given the observation. The concept of density generalises to situations where the unknown random variable is an entire function, given by the solution of a stochastic differential equation (SDE), and the MAP estimator becomes the "most probable path" of the SDE (see e.g. Zeitouni and Dembo (1987), Zeitouni and Dembo (1988); for MAP estimation in classical inverse problems but with random observations see Cotter et al. (2009); see also Apte et al. (2007); Stuart (2010) for applications to Bayesian estimation in stochastic dynamical systems). Contrary to what is sometimes claimed in the literature, the most probable path of an SDE is not a minimiser of the energy functional but rather of the Onsager-Machlup functional, which differs from the energy functional by an additional term. In other words, to find MAP estimators or most probable paths for SDE's, the Onsager-Machlup functional has to be minimised, rather than the energy functional.
The first aim of this paper is to illustrate this well known fact. The reader is referred to Zeitouni and Dembo (1987), Zeitouni and Dembo (1988) for a rigorous derivation of the Onsager-Machlup functional and discussion of the MAP estimator in the context of SDE's. The second aim is to show that although this is a result pertaining to stochastic models in continuous time, it does have consequences in discrete time. In practice, SDE's are approximated by discrete time schemes, for instance the Euler scheme, which results in a discrete time stochastic dynamical system with additive Gaussian errors. The (negative logarithm of the) density of solutions to this discrete time system is given by the energy functional. But we will argue that the appropriate functional in this situation should still be the Onsager-Machlup functional or a discrete time version thereof, at least if the solution is to be interpreted as a MAP estimator. The reason is that the MAP estimator (or most probable path) of an approximation to the SDE is not necessarily a good approximation to the most probable path of the SDE proper, as we will see. It is worth noting that this point involves the dynamics only and is entirely independent of whether observations are considered discrete or continuous in time.
In Section 2, we revisit the concepts of densities for random variables and the MAP estimator. In Section 3, we specialise to the situation were the unknown random variable is a trajectory of a stochastic differential equation, and demonstrate that the energy functional cannot be the correct functional to determine the MAP estimator. An expression for the Onsager-Machlup functional will also be provided. The findings will be supported by numerical simulations in Section 4. Further, these simulations illustrate that the Onsager-Machlup functional essentially provides the correct density for paths of SDE's even though the simulations are not truly continuous in time but rather use an approximation scheme that is discrete in time. Section 5 provides the Onsager-Machlup functional for more general SDE's that are not used in the present paper but which are relevant for the climate sciences, namely SDE's with multiplicative noise (see e.g. Franzke et al., 2015). Section 6 concludes with a discussion as to how our findings bear on discrete time simulations of SDE's. An informal derivation of the Onsager-Machlup functional is provided in Appendix A.
Definition of the Maximum Aposteriori (MAP) estimator
A fundamental concept in statistics in general and data assimilation in particular is the Maximum Aposteriori or MAP estimator. Let X, Y be random variables, where we interpret X as the unknown quantity (to be estimated) and Y as the observation. Let p(x|y) denote the conditional probability density function of X given that Y assumes the value y. A MAP estimator of X given Y is a maximiser over x of the density p(x|y). That is, the MAP estimator is a function x̂(y) so that for any y we have p(x̂(y)|y) = sup x p(x|y).
MAP estimators need not exist in general, nor are they unique. Since the observations Y play the role of parameters in this problem, they will mostly be suppressed in the notation for the sake of simplicity. That is, if X is a random variable with density p X , we understand that p X might in fact be the conditional density of X given some observations or parameters.
The presented definition of the MAP estimator will be referred to as the de facto definition (following Dutra et al. (2014)); there is an alternative definition which not only provides an intuitive interpretation but is more generally applicable. Roughly speaking, the MAP estimator of a random variable X is the center of a small ball positioned so as to have greatest possible probability of containing X, in the limit of the diameter of that ball going to zero. More formally, suppose that X is a random variable with values in some vector space V with norm ‖·‖. Then the MAP estimator is a point x̂ so that for any other point x
(1) lim ǫ→0 P(‖X − x̂‖ ≤ ǫ) / P(‖X − x‖ ≤ ǫ) ≥ 1.
If observations are present, then these probabilities are conditional probabilities given those observations.
If a random variable X with values in R d has a density p which is everywhere positive, then a MAP estimator according to the alternative definition (1) is a MAP estimator according to the de facto definition and vice versa. Indeed, if X has a positive density p, then for all x ∈ R d the relation
(2) p(x) = lim ǫ→0 P(‖X − x‖ ≤ ǫ) / vol(B ǫ )
holds, with B ǫ the ball of radius ǫ (except perhaps if x is in some exceptional set which has however volume zero; we will ignore this technical point). Here, vol denotes the standard volume on R d . Hence, if y is so that p(y) > 0, then for any x we have
(3) lim ǫ→0 P(‖X − x‖ ≤ ǫ) / P(‖X − y‖ ≤ ǫ) = p(x) / p(y).
The relation (3) shows that any point x̂ ∈ R d which satisfies the de facto definition of a MAP estimator will also satisfy the alternative definition and vice versa. A strong point of the de facto definition is that it provides a means to find a MAP estimator through an optimisation problem. An important insight from the alternative definition though is that it is not quite necessary to have a probability density function as in Equation (2) in order to define the MAP estimator. In particular the normalisation in Equation (2) need not be the standard volume; normalising in a different way would give a different density, but as long as the normalisation is the same for all reference points x and the resulting density is still everywhere positive, we would obtain the same MAP estimators, since the relation (3) would still be valid. For instance, if W is another random variable, we could normalise as follows
(4) p (W ) (x) = lim ǫ→0 P(‖X − x‖ ≤ ǫ) / P(‖W ‖ ≤ ǫ),
if the limit exists for every x; if p (W ) is everywhere positive, p (W ) can be used to calculate the MAP just as well. It turns out that generalised densities as in Equation (4) might still be well defined even if X has values in some infinite dimensional space with norm ‖·‖ for which there exists no generalisation of the standard volume.² This is precisely the situation when trying to find MAP estimators for trajectories of continuous time stochastic dynamical models; such a trajectory is a function (of time) and hence an infinite dimensional object. Hence the Definition (2) of a density does not apply in this situation but Definition (4) does, provided we find a suitable random variable W to normalise with.
² The problem is the translation invariance of the standard volume. In an infinite dimensional normed space, a ball of unit radius may contain infinitely many disjoint balls of sufficiently small but nonzero radius. By translation invariance, these balls must have the same volume. But this means that either the volume of the unit ball is infinity or the volume of a sufficiently small ball is zero.
MAP estimators for stochastic difference and differential equations
The link between MAP estimators and data assimilation in discrete time can be described as follows. The dynamics underlying the observations is modelled as a stochastic difference equation of the form
(5) X n = F (X n−1 ) + R n , n = 1, 2, . . . ,
where F is some mapping on a vector space E (called the state space), and the R n , n = 1, 2, . . . are taken as independent and identically distributed random variables with values in E. For simplicity's sake, we assume throughout that E is one dimensional (see however Sec. 5). Further, the R n , n = 1, 2, . . . are assumed to be normal with mean zero and variance γ. We further set X 0 = ξ, where ξ ∈ E is known.
The observations are assumed to be functions of the X 1 , . . . , X N further corrupted by noise. But as said earlier, they will enter the densities as parameters in some way which is not relevant for our purposes. It is then a simple matter to show that
(6) p(x 1 , . . . , x N ) = (2πγ) −N/2 exp( −(1/2γ) Σ N n=1 (x n − F (x n−1 )) 2 ),
where we understand that x 0 = ξ. Since (X 1 , . . . , X N ) is a random variable in E N , we can interpret the right hand side of Equation (6) as a density of (X 1 , . . . , X N ) according to Definition (4) with V = E N and norm ‖(x 1 , . . . , x N )‖ = max n |x n |.
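The negative logarithm of the density (6), up to an additive constant, is the discrete-time energy functional. A minimal Python sketch follows (added here for illustration; the function name and the use of NumPy are choices made for this sketch).

```python
import numpy as np

def neg_log_density(x, F, gamma, xi):
    """-log p(x_1, ..., x_N) for X_n = F(X_{n-1}) + R_n with X_0 = xi and
    R_n ~ N(0, gamma) i.i.d., dropping the constant (N/2) log(2 pi gamma)."""
    x = np.asarray(x, dtype=float)
    prev = np.concatenate(([xi], x[:-1]))     # x_0, x_1, ..., x_{N-1}
    return np.sum((x - F(prev)) ** 2) / (2.0 * gamma)
```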
Atmospheric and ocean dynamics are, however, continuous in time, as are many other processes in science and engineering where data assimilation is relevant. Considering data assimilation in discrete time is merely a concession to practical constraints. Indeed, there are several different processes that introduce time stepping in operational practice, for instance the integration of the model or the batch processing of the observations, but the relevant time steps can be very different. Accounting for "model error" with additive noise after discretising models in time will result in the solutions for different time stepping having different statistical properties. Although this is to some extent inevitable, we still ought to have a formalism for comparing these different solutions, as they ultimately represent the same thing.
A convenient way to enable comparison of different discretisations (with noise added) is to formulate a stochastic model in continuous time, that is, a stochastic differential equation (SDE), and consider any discretisation as an approximation of that model. The question then arising is what is the MAP estimator, or more generally the density, for trajectories of an SDE? To put this question more precisely, let I = [0, T ] be an interval of the real line, and consider the SDE
(7) Ẋ t = f (X t ) + ρ r t , t ∈ I,
where f is a vector field on E, ρ > 0, and r t , t ∈ I is white noise with zero mean and unit intensity (i.e. the correlation function is δ(t − s) with δ the Dirac delta function). Again, we set X 0 = ξ, where ξ ∈ E is known. Whatever the precise interpretation of the SDE (7), the solution is a random continuous function {X t , t ∈ I}, and the density of it at some given reference trajectory {z t , t ∈ I} is defined as
(8) p({z t }) = lim ǫ→0 P(sup t∈I |X t − z t | ≤ ǫ) / P(sup t∈I |W t | ≤ ǫ/ρ),
where {W t , t ∈ I} is the Wiener process, which can be seen as the time integral of white noise, that is W t = ∫ 0 t r s ds. We will learn more about the Wiener process later. Normalisation with the Wiener process in the Definition (8) of the density will turn out to be convenient. It is worth stressing that the density in Definition (8) is a special case of the Definition (4) if we use the norm ‖z‖ := sup t∈I |z t | for trajectories over I. We also note that the density is zero for trajectories which do not start at the initial condition z 0 = ξ.

Figure 1. A solution of the SDE (7) (thin solid line) falls entirely into a small strip of width ǫ around the reference trajectory {z t , t ∈ I} (thick solid line). The strip is indicated with dashed lines. (Note that this is a schematic sketch rather than an actual simulation.)

For later use, we introduce the ǫ-weight of a trajectory {z t , t ∈ I}. The ǫ-weight is the probability that the solution {X t , t ∈ I} of the SDE (7) falls entirely into a small strip or "sausage" of width ǫ around {z t , t ∈ I}, relative to the probability that the Wiener process {W t } falls entirely into a "sausage" of width ǫ/ρ around zero, that is,
α(ǫ, {z t }) = P(sup t∈I |X t − z t | ≤ ǫ) / P(sup t∈I |W t | ≤ ǫ/ρ).
Figure 1 illustrates the situation. The density p according to Definition (8) is given by p({z t }) = lim ǫ→0 α(ǫ, {z t }). The density p can be written in the form
(9) p({z t }) = exp(−A({z t })),
and several publications seem to imply that A({z t }) should be equal to the energy functional
(10) A E ({z t }) = (1/2ρ 2 ) ∫ I (ż t − f (z t )) 2 dt,
or at least that the MAP estimator should be a minimiser of A E (sometimes without clear reference to the concept of densities). In case observations are present, the energy estimator would carry another term pertaining to the observations. As mentioned in the introduction already, the correct expression for the functional A in Equation (9) is not the energy functional but the Onsager-Machlup functional
(11) A OM ({z t }) = (1/2ρ 2 ) ∫ I (ż t − f (z t )) 2 dt + (1/2) ∫ I f ′ (z t ) dt.
An informal derivation of this expression will be given in Appendix A. Note however that for very small noise amplitudes, the energy functional A E becomes the dominant term in the Onsager-Machlup functional, as this term scales inversely proportional to the noise, while the additional term does not depend on the noise at all. This suggests that data assimilation employing the energy functional does have a rigorous interpretation in the small noise limit. This is indeed the case, as discussed for instance in Vanden-Eijnden and Weare (2013), where the energy functional emerges from a large deviation principle. Furthermore, there are clearly other cases where the additional term in Equation (11) does not matter for the purposes of data assimilation, for instance if the dynamics is linear, as then the second term in Equation (11) is constant.
In higher dimensions, the additional term is the integral over div f (z t ) (see Section 5), so that for systems with constant divergence, minimising the energy functional gives the same results as minimising the Onsager-Machlup functional.
In the remainder of this section, we will provide evidence that the expression (9) with the energy functional is not the correct density, and discuss possible reasons for this misconception. We write the SDE (7), somewhat more rigorously, as an integral equation
(12) X t = ξ + ∫ 0 t f (X s ) ds + ρ W t , t ∈ I,
where W t , t ∈ I is the standard Wiener process, which as we have seen can heuristically be interpreted as the integral of the white noise process r t . In fact, from these heuristics, one can derive that the Wiener process ought to have the following properties: (1) W 0 = 0, (2) for 0 ≤ t 1 < t 2 the increment W t2 − W t1 is a normally distributed random variable with mean zero and covariance t 2 − t 1 , (3) increments for nonoverlapping intervals are independent. It is well known (see for instance Breiman (1973), Mörters and Peres (2010)) that a process {W t , t ∈ I} with the properties listed above exists and can be realised as a random continuous function of time. In view of this, the Equation (12) is a classical integral equation perturbed by a randomly selected function that is continuous in time.
Discretisation schemes for Equation (7) can be derived by observing that
X tn = X tn−1 + ∫ tn−1 tn f (X s ) ds + ρ (W tn − W tn−1 )
and approximating the integral in an appropriate way. For instance, using the approximation ∫ tn−1 tn f (X s ) ds ∼= f (X tn−1 )(t n − t n−1 ) and assuming for simplicity a constant time step (t n − t n−1 ) = ∆ results in the Euler scheme (also known as the Euler-Maruyama scheme, Milstein, 1995)
(13) X (∆) tn = X (∆) tn−1 + f (X (∆) tn−1 )∆ + ρ · (W tn − W tn−1 ).
(The superscript ∆ indicates that this solution is obtained with the Euler scheme and time discretisation ∆.) If we set R n = ρ · (W tn − W tn−1 ), then Equation (13) is precisely in the form of Equation (5) with F (x) = x + f (x)∆ and γ = ∆ρ 2 . Hence the density (6) for the solution (X (∆) t1 , . . . , X (∆) tN ) reads
(14) p (∆) (x 1 , . . . , x N ) = (2π∆ρ 2 ) −N/2 exp( −(1/2∆ρ 2 ) Σ N n=1 (x n − x n−1 − f (x n−1 )∆) 2 ).
It now seems tempting to take the "limit" ∆ → 0 here. In fact, assuming that the x 1 , . . . , x N in Equation (14) are the values of some reference trajectory {z t , t ∈ I} at the points t 1 , . . . , t N , we would by formally taking this limit indeed obtain Equation (9) for the density with the energy functional as in Equation (10).
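For concreteness, a minimal implementation of scheme (13) follows (not part of the original text; the interface, in particular passing a NumPy random generator, is a choice made here).

```python
import numpy as np

def euler_maruyama(f, xi, rho, dt, n_steps, rng):
    """Simulate scheme (13) on the grid t_n = n*dt, starting at X_0 = xi."""
    x = np.empty(n_steps + 1)
    x[0] = xi
    dW = rng.normal(0.0, np.sqrt(dt), size=n_steps)   # Wiener increments
    for n in range(n_steps):
        x[n + 1] = x[n] + f(x[n]) * dt + rho * dW[n]
    return x

path = euler_maruyama(lambda x: -x, xi=0.0, rho=0.3, dt=1e-3,
                      n_steps=1000, rng=np.random.default_rng(0))
```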
If we retrace the steps in our calculation though, we realise that we have not quite taken them in the order we should according to Definition (8) of the density. To discuss this, we introduce the ǫ-weight of a trajectory {z t , t ∈ I}, but now with respect to the Euler approximation:
α (∆) (ǫ, {z t }) = P(max n |X (∆) tn − z tn | ≤ ǫ) / P(max n |W tn | ≤ ǫ/ρ).
What we have done to arrive at the Equations (9, 10) for the density is to take the limit ǫ → 0, then use Equation (6) in the special case of the Euler system (13), and finally take the limit ∆ → 0. That is, we have proved

$$\lim_{\Delta \to 0} \lim_{\epsilon \to 0} \alpha^{\Delta}(\epsilon, \{z_t\}) = \exp(-A_E(\{z_t\})). \qquad (15)$$

However, Definition (8) basically requires taking these limits the other way round:

$$p(\{z_t\}) = \lim_{\epsilon \to 0} \lim_{\Delta \to 0} \alpha^{\Delta}(\epsilon, \{z_t\}). \qquad (16)$$

A simple example (following Dutra et al. (2014)) will show that interchanging these two limits will, in general, give different results. It is evident that the density should be independent of what scheme we use to approximate solutions of SDEs, and the Euler scheme is not the only scheme. To arrive at another scheme for numerically solving SDEs, we consider other approximations of the integral in Equation (12), for instance

$$\int_{t_{n-1}}^{t_n} f(X_s)\, \mathrm{d}s \cong \left(\lambda f(X_{t_{n-1}}) + (1-\lambda) f(X_{t_n})\right)\Delta$$

for some λ ∈ [0, 1], leading to the implicit scheme

$$X^{(\Delta)}_{t_n} = X^{(\Delta)}_{t_{n-1}} + \left(\lambda f(X^{(\Delta)}_{t_{n-1}}) + (1-\lambda) f(X^{(\Delta)}_{t_n})\right)\Delta + \rho\,(W_{t_n} - W_{t_{n-1}}). \qquad (17)$$

This is an equally valid approximation scheme for SDEs; see for instance Kloeden and Platen (1992), Chapter 12. Note however that X^{(∆)}_{t_n} is now a nonlinear function of the noise (W_{t_n} − W_{t_{n-1}}). Using the same logic as before (see Appendix B), one arrives at the conclusion that the functional A in Equation (9) of the density should be

$$A_\lambda(\{z_t\}) = A_E(\{z_t\}) + (1-\lambda) \int_I f'(z_t)\, \mathrm{d}t. \qquad (18)$$

So not only does another term −(1 − λ) ∫_I f′(z_t) dt appear in the exponent, but we can generate an entire spectrum of candidate functionals by varying λ. This result evidently draws the entire methodology into question. We note that λ = 1/2 gives the Onsager-Machlup functional, that is, A_{1/2} = A_{OM}. This however does not prove that A_{OM} is indeed the correct functional. So far, we do not have any reason to believe that λ = 1/2 is in any way special.
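To make the λ-family concrete, here is a sketch evaluating a discretised version of A_λ from Equation (18) for a given path. The finite-difference approximation of ż and the left-endpoint Riemann sums are illustrative choices of ours, not prescribed by the text.

```python
import numpy as np

def lambda_functional(z, f, f_prime, rho, dt, lam):
    """Discretised A_lambda = A_E + (1 - lam) * int f'(z_t) dt  (Eq. 18).

    lam = 1   recovers the energy functional A_E,
    lam = 1/2 recovers the Onsager-Machlup functional A_OM.
    """
    zdot = np.diff(z) / dt                       # finite-difference dz/dt
    energy = 0.5 / rho**2 * np.sum((zdot - f(z[:-1]))**2) * dt
    correction = (1.0 - lam) * np.sum(f_prime(z[:-1])) * dt
    return energy + correction

# Example with the arctan drift of the numerical experiment below.
a, rho, dt = 6.0, 0.3, 1e-3
f = lambda x: (2.0 / np.pi) * np.arctan(a * x)
f_prime = lambda x: (2.0 / np.pi) * a / (1.0 + (a * x)**2)
z = np.zeros(1001)                               # the reference path z_t = 0
A_E = lambda_functional(z, f, f_prime, rho, dt, lam=1.0)
A_OM = lambda_functional(z, f, f_prime, rho, dt, lam=0.5)
```

For the zero path, the energy part vanishes while the correction does not, which is exactly the discrepancy between the two functionals discussed in the text.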
Numerical experiment
It was already mentioned in the last section (and will be discussed further in the Appendix) that A_OM is the appropriate density functional for paths of a stochastic differential equation. In particular, this implies that the minimiser of A_OM can be interpreted as the MAP estimator or "most probable" path of the stochastic differential equation. We have also considered discrete time approximations to stochastic differential equations, for instance the Euler scheme, and it emerged that the densities derived from these approximations do not, in general, agree with the Onsager-Machlup functional even approximately. This raises questions as to what the right functional should be in practice, since apart from the rare situations where explicit solutions are available, stochastic differential equations inevitably have to be approximated by numerical schemes which are discrete in time. But suppose we approximate a stochastic differential equation of the form (7) with the Euler scheme (13). We know that in this situation, Equation (14) is the correct density of solutions, so what is the link between solutions of the Euler scheme and the functional A_OM, and why should we care about it?
We will examine the situation with a numerical example. We consider a stochastic differential equation of the form (7), approximated by the Euler scheme (Eq. 13). Here f(x) = (2/π) arctan(ax), with a = 6 and ρ = 0.3. All solutions start from the fixed initial condition ξ = 0. Figure 2 shows 20 independent approximate solutions of Equation (7); "approximate" because these are solutions of the Euler scheme (13). The density of these solutions is given by Equation (14), and according to this expression the most probable solution is equal to zero for all times. The picture we see in Figure 2, though, seems to contradict this. It is evident that very few solutions concentrate around the abscissa. This is easy to understand qualitatively. For small times, the variability of the solution grows exponentially, as the origin is an unstable fixed point of this dynamics. Sooner or later, the solution will enter regions where the arctan is flat and the drift is essentially either +1 or −1. The solution might from time to time transit between these two regimes, but these transits become progressively rarer until it behaves essentially like a random walk with constant drift.

Figure 2. Solutions of Equation (7) with f(x) = (2/π) arctan(ax), a = 6 and ρ = 0.3 are shown in grey, obtained with an Euler scheme with ∆ = 1.14·10⁻⁴. The two solid lines represent the most probable trajectories according to the Onsager-Machlup functional, and the dashed line represents the most probable trajectory according to the energy functional. It is evident that simulations are more likely to accumulate around the former.
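The ensemble of Figure 2 can be reproduced schematically as follows; the time horizon T = 1 is an assumption of ours, as the text does not state it.

```python
import numpy as np

# Twenty Euler paths of Equation (7) with f(x) = (2/pi) arctan(a x),
# a = 6, rho = 0.3, starting from xi = 0, as in Figure 2.
rng = np.random.default_rng(1)
a, rho, T, dt = 6.0, 0.3, 1.0, 1.14e-4
n = int(round(T / dt))
f = lambda x: (2.0 / np.pi) * np.arctan(a * x)

paths = np.zeros((20, n + 1))
for k in range(20):
    for i in range(n):
        paths[k, i + 1] = (paths[k, i] + f(paths[k, i]) * dt
                           + rho * rng.normal(0.0, np.sqrt(dt)))

# Very few paths remain near zero: the origin is an unstable fixed point.
print(np.mean(np.abs(paths[:, -1]) < 0.1))
```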
The solid lines in Figure 2 represent the optimal paths of the Onsager-Machlup functional A_OM. These have been calculated numerically by solving the Euler-Lagrange equations associated with the Onsager-Machlup functional A_OM (the functional displays a symmetry, whence there are two solutions, symmetric about the abscissa). These solutions seem to capture the "big picture" much better, indicating where the solutions of our simulations tend to be. So it seems that the Onsager-Machlup functional provides a better description of the density, even though the solutions have been obtained with a discrete time system and thus, strictly speaking, Equation (14) provides the correct density.
To resolve this apparent paradox, we remember that the density at some reference path {z_t} is the probability that the solution of our dynamics lies in a thin sausage of width ǫ around that reference path, relative to the probability that the driving Wiener process lies in a thin sausage of width ǫ around zero. These probabilities, or rather the ǫ-weight α^∆(ǫ, {z_t}), can be estimated using a Monte Carlo approach in order to study the dependence on ǫ and ∆. For simplicity, the reference path was taken to be zero. Note that this is the most probable path according to A_E. The results are shown in Figure 3, which exhibits an interesting crossover behaviour. With ǫ decreasing, α first approaches the value given by the Onsager-Machlup functional. If ǫ reaches a sufficiently small value though (depending on ∆), the curves start to diverge from this value and approach one, consistent with the energy functional. The smaller ∆, the longer α stays close to the Onsager-Machlup value for decreasing ǫ; in other words, for smaller ∆ a smaller ǫ has to be chosen in order for exp(−A_E) to become a relevant approximation for α.
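A naive Monte Carlo estimator of the ǫ-weight for the zero reference path might look as follows. Note that for small ǫ the sausage events become rare, so an estimator of this simple form needs very many samples; the parameter values are purely illustrative.

```python
import numpy as np

def epsilon_weight(f, rho, T, dt, eps, n_samples, rng):
    """Monte Carlo estimate of alpha^Delta(eps, {z_t = 0}):
    P(max_n |X_tn| <= eps) / P(max_n |rho W_tn| <= eps)."""
    n = int(round(T / dt))
    hits_x = hits_w = 0
    for _ in range(n_samples):
        dw = rng.normal(0.0, np.sqrt(dt), size=n)
        w = np.cumsum(dw)                        # Wiener path on the grid
        x = np.zeros(n + 1)
        for i in range(n):
            x[i + 1] = x[i] + f(x[i]) * dt + rho * dw[i]
        hits_x += np.max(np.abs(x)) <= eps
        hits_w += rho * np.max(np.abs(w)) <= eps
    return hits_x / max(hits_w, 1)

rng = np.random.default_rng(2)
f = lambda x: (2.0 / np.pi) * np.arctan(6.0 * x)
alpha = epsilon_weight(f, rho=0.3, T=1.0, dt=1e-3, eps=0.05,
                       n_samples=10000, rng=rng)
```

Repeating this for a grid of (ǫ, ∆) pairs traces out the crossover behaviour described above.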
For a rough estimate of how small ǫ has to be in order for the crossover to take place, we observe that for fixed ∆, the increments of X^{(∆)}_{t_n} − z_{t_n} in Equation (19) have a characteristic size of order ρ√∆ at time t_n. It seems plausible that α starts to approach the energy functional as soon as ǫ becomes smaller than this typical increment of X, which is just ǫ ≅ ρ√∆ in our case, or log((ǫ/ρ)²) = log(∆). For the experiments shown in Figure 3, we used the following values of log(∆): −13.7 (▽), −10.5 (△), −9.1 (♦), −5.9, and −4.5 (the last two marked with the remaining symbols in Figure 3). This appears to be roughly consistent with the values of log((ǫ/ρ)²) at which the crossover takes place.
The Onsager-Machlup functional in higher dimensions and for multiplicative noise
In this section we will provide additional (and well known) results regarding the Onsager-Machlup functional in higher dimensions and with multiplicative noise. We will see that in the case of multiplicative noise, further terms appear in the Onsager-Machlup functional; the effect of these terms in data assimilation applications remains to be investigated. We consider a general SDE

$$\dot{X}_t = f(X_t) + \rho(X_t)\, r_t, \qquad t \in I, \qquad (20)$$

where the state space E is the d-dimensional Euclidean space, f is a vector field on E, and ρ is a state dependent d-by-d matrix. For SDEs with multiplicative noise as in Equation (20), different mathematical interpretations are possible, most prominently the Itô and the Stratonovič interpretation (see e.g. Øksendal, 1998; Ikeda and Watanabe, 1989). We will interpret the SDE (20) in the sense of Stratonovič; an Itô equation can always be converted to a Stratonovič equation.
The expression for the Onsager-Machlup functional given in Equation (22) below is valid if the noise is nondegenerate, that is, ρ(x)ρ^T(x) ≥ α𝟙 for some α > 0. In this situation, the matrix g(x) = (ρ(x)ρ^T(x))^{−1} defines a Riemannian metric. For any vector field f, the divergence div f will be understood with respect to this metric, that is,

$$\operatorname{div} f(x) = \frac{1}{\sqrt{\det g(x)}} \sum_{i=1}^{d} \partial_i \left( \sqrt{\det g(x)}\, f_i(x) \right).$$

Further, let R(x) be the scalar (Ricci) curvature and m(x, y) the (geodesic) distance between points x, y ∈ E. These concepts are defined with respect to the metric g as well (see Gallot et al., 2004, for an introduction to Riemannian geometry). Then the Onsager-Machlup functional is defined as

$$\exp(-A_{OM}(\{z_t\})) = \lim_{\epsilon \to 0} \frac{\mathbb{P}\left(\sup_{t \in I} m(X_t, z_t) \le \epsilon\right)}{\mathbb{P}\left(\sup_{t \in I} |W_t| \le \epsilon\right)}, \qquad (21)$$

and as is proved for instance in Ikeda and Watanabe (1989) and Zeitouni and Dembo (1987), it has the expression

$$A_{OM}(\{z_t\}) = \frac{1}{2} \int_I \left( |\dot{z}_t - f(z_t)|^2_{g(z_t)} + \operatorname{div} f(z_t) - \frac{1}{6}\, R(z_t) \right) \mathrm{d}t. \qquad (22)$$

As was already discussed in Section 4 (in the context of a one-dimensional example), the effect of the second term (containing div f) is to discourage the most probable path from staying in regions where the dynamics is unstable, as this causes strong amplification of the noise, and thus typical solutions of the SDE quickly escape from such regions. The effect of the third term, involving the Ricci curvature, is not so clear at this point and is subject to future investigation.
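For the special case of constant, scalar noise (flat metric, so the curvature term vanishes and div f reduces to the ordinary divergence), a discretised version of Equation (22) can be sketched as follows; the finite-difference divergence is an illustrative choice of ours.

```python
import numpy as np

def om_functional_nd(z, f, rho, dt, h=1e-5):
    """Discretised Onsager-Machlup functional for constant scalar rho:
    A_OM = 1/(2 rho^2) int |zdot - f(z)|^2 dt + 1/2 int div f(z) dt.
    z has shape (n_steps + 1, d); f maps vectors (or batches) to vectors."""
    d = z.shape[1]
    zdot = np.diff(z, axis=0) / dt
    energy = 0.5 / rho**2 * np.sum((zdot - f(z[:-1]))**2) * dt

    def div_f(x):
        # central finite-difference estimate of div f = sum_i d f_i / d x_i
        return sum((f(x + h * e)[i] - f(x - h * e)[i]) / (2 * h)
                   for i, e in enumerate(np.eye(d)))

    divergence = 0.5 * sum(div_f(x) for x in z[:-1]) * dt
    return energy + divergence

# Example: linear drift f(x) = A x in d = 2, where div f = trace(A) is
# constant, so the divergence term only shifts A_OM by a constant.
A = np.array([[0.5, 0.0], [0.0, -1.0]])
f = lambda x: x @ A.T
z = np.zeros((1001, 2))
val = om_functional_nd(z, f, rho=0.3, dt=1e-3)
```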
In the remainder of this section we discuss what terms need adding to the Onsager-Machlup functional if observations are present. Let the observations be a discrete time series {Y_n, n = 1, …, N}. The Onsager-Machlup functional with observations is now defined in analogy with Equation (21), but with the probability of the ǫ-sausage around {z_t} replaced by the corresponding conditional probability given the observations Y_1, …, Y_N.
(We will use the notation F_OM to designate the Onsager-Machlup functional with observations; A_OM is still defined as in Eq. 21.) A commonly made assumption is that the observations are conditionally independent given the underlying trajectory {X_t, t ∈ [0, T]}, and that the distribution of Y_n depends on X_{t_n} only, for n = 1, …, N and a series of sampling times t_1, …, t_N. Let q_n(y, x) be the density of Y_n given X_{t_n} = x. In this case, the full Onsager-Machlup functional reads as

$$F_{OM}(\{z_t\}) = A_{OM}(\{z_t\}) - \sum_{n=1}^{N} \log q_n(Y_n, z_{t_n}).$$

If for instance Y_n given X_{t_n} is Gaussian with mean h(X_{t_n}) and covariance γ (where h and γ are often called the observation function and observation error covariance, respectively), then the additional term in the Onsager-Machlup functional reads (up to an additive constant) as

$$-\sum_{n=1}^{N} \log q_n(Y_n, z_{t_n}) = \frac{1}{2} \sum_{n=1}^{N} \left(Y_n - h(z_{t_n})\right)^T \gamma^{-1} \left(Y_n - h(z_{t_n})\right).$$
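A sketch of the full functional F_OM for scalar states and scalar Gaussian observations is given below, under the assumption (ours) that the sampling times lie exactly on the discretisation grid.

```python
import numpy as np

def full_om_functional(z, t, f, f_prime, rho, obs_t, obs_y, h, gamma):
    """F_OM = A_OM - sum_n log q_n(Y_n, z_{t_n}) for scalar Gaussian
    observations: the data term is (1 / 2 gamma) sum_n (Y_n - h(z_{t_n}))^2,
    up to an additive constant."""
    dt = t[1] - t[0]
    zdot = np.diff(z) / dt
    a_om = (0.5 / rho**2 * np.sum((zdot - f(z[:-1]))**2) * dt
            + 0.5 * np.sum(f_prime(z[:-1])) * dt)
    idx = np.searchsorted(t, obs_t)      # sampling times assumed on the grid
    obs_term = 0.5 * np.sum((obs_y - h(z[idx]))**2) / gamma
    return a_om + obs_term

# Illustrative usage with a linear drift and identity observation function.
t = np.linspace(0.0, 1.0, 1001)
z = np.zeros_like(t)
f = lambda x: -x
f_prime = lambda x: -np.ones_like(x)
F = full_om_functional(z, t, f, f_prime, rho=0.3,
                       obs_t=np.array([0.25, 0.5, 0.75]),
                       obs_y=np.array([0.1, -0.2, 0.0]),
                       h=lambda x: x, gamma=0.1)
```

Minimising F over discretised paths z then yields a MAP estimate in the sense discussed in the conclusions below.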
Conclusions for discrete time simulations and data assimilation
When modelling a dynamical process with a stochastic differential equation, any practical implementation will use a discrete time approximation of one form or another. If (as part of a data assimilation experiment, for instance) one is interested in a most probable path of that dynamical process, then our considerations imply that the appropriate functional is the Onsager-Machlup functional (or a discrete time approximation thereof), even though the density of discrete time approximations might differ from the Onsager-Machlup functional. The Onsager-Machlup functional provides results which are robust with respect to the particular approximation scheme, and in particular with respect to the chosen time discretisation, which does not have any intrinsic meaning in terms of the problem specification. More specifically, the Onsager-Machlup functional gives approximately the ǫ-weight of a reference path, that is, the probability that the solutions of the stochastic differential equation stay in an ǫ-sausage around the reference path; and a discrete time approximation of the SDE will assign approximately the same ǫ-weight to that path, unless ǫ reaches the scale of the typical increments in that approximation. In other words, the Onsager-Machlup functional provides an approximation to the ǫ-weight of a path with respect to the stochastic differential equation and approximations thereof, save approximations that employ increments which are typically larger than ǫ. Such approximations do not appropriately represent the fast fluctuations of the Wiener process that are still relevant for the dynamics, even when the amplitude of the Wiener process is constrained to be small.
For these reasons, most probable paths should be determined using the Onsager-Machlup functional, since such paths carry the largest possible ǫ-weight, no matter whether this weight is calculated from the stochastic differential equation or from any reasonable approximation, as long as that approximation uses increments which are smaller than ǫ. Paths which are minimisers of the energy functional or of any other functional do not possess this universality property. The implication for data assimilation is that minimising paths of the Onsager-Machlup functional are more typical for the dynamics and in fact carry a rigorous interpretation as MAP estimators, unlike minimum energy paths, which do not.
These arguments do not apply though if the process under consideration is intrinsically discrete in time. In this situation, it does not make sense to consider the limit ∆ → 0 which brings about the extra term in the Onsager-Machlup functional. Systems like this might appear in the context of seasonal or diurnal cycles, or more generally systems with an internal clocking mechanism.
where H is the Heaviside function. We might use Equation (23) in (24), where (X_{t_1}, …, X_{t_N}) is a solution to the Euler approximation (13). Note that (X_{t_1} − z_{t_1}, …, X_{t_N} − z_{t_N}) is then a solution of the system (19). We therefore obtain a decomposition of the exponent into three terms A, B and C (Equations 25, 26). In terms of the limits ∆ → 0 and ǫ → 0, the first two terms A and B will converge to A_E({z_t}) and zero, respectively, no matter in which order the limits are taken. The third term however shows different behaviour depending on whether ∆ → 0 or ǫ → 0 first. If we take ∆ → 0 first, it can be shown that a well defined random variable obtains³, which can be written as an Itô integral,

$$C = \frac{1}{\rho} \int_I f(z_t + \rho W_t)\, \mathrm{d}W_t. \qquad (27)$$

We do not expect the reader to be familiar with the theory of Itô integrals; relevant here is that the limit of this expression for ǫ → 0 will not be zero but

$$\lim_{\epsilon \to 0} C = -\frac{1}{2} \int_I f'(z_t)\, \mathrm{d}t. \qquad (28)$$

A demonstration of this fact⁴ for the case where f is linear is given here for illustration. If f(x) = ax for some a ∈ ℝ, then

$$C = \frac{a}{\rho} \sum_{n=1}^{N} (z_{t_{n-1}} + \rho W_{t_{n-1}})(W_{t_n} - W_{t_{n-1}}) = \frac{a}{\rho} \sum_{n=1}^{N} z_{t_{n-1}} (W_{t_n} - W_{t_{n-1}}) + a \sum_{n=1}^{N} W_{t_{n-1}} (W_{t_n} - W_{t_{n-1}}) = \frac{a}{\rho}\, C_1 + a\, C_2.$$
It is easy to see that C_1 → 0 if ∆ → 0 and ǫ → 0, no matter in which order these limits are taken. After some algebra, we can write C_2 as

$$C_2 = \frac{1}{2} W_{t_N}^2 - \frac{1}{2} \sum_{n=1}^{N} (W_{t_n} - W_{t_{n-1}})^2. \qquad (29)$$

Considering the mean and the variance of the second term, we obtain ½T and ½T∆, respectively, implying that (at least in a mean square sense) the second term converges to its mean ½T if ∆ → 0. Hence

$$C_2 \to \frac{1}{2} W_T^2 - \frac{1}{2} T \qquad (\Delta \to 0), \qquad (30)$$

and the first term vanishes as ǫ → 0, since the Wiener process is then constrained to a sausage around zero. Therefore, taking the limits ∆ → 0 and then ǫ → 0 in Equation (29) and using Equation (30), we obtain lim_{ǫ→0} lim_{∆→0} C = −(a/2) T, which is the same as Equation (28) for this special case. Using Equation (28) and the expression in Equation (27) in (25), we obtain that for small ǫ,

$$\mathbb{P}\left(\sup_{t \in I} |X_t - z_t| \le \epsilon\right) \cong \exp(-A_{OM}(\{z_t\}))\, \mathbb{P}\left(\sup_{t \in I} |W_t| \le \epsilon/\rho\right),$$

so that we can conclude p({z_t}) = exp(−A_OM({z_t})).
4 Strictly speaking, this "fact" is only correct in a much weaker sense, but still sufficient to derive the Onsager-Machlup functional; the correct statement is a conditional convergence (given the ǫ-sausage event) as ǫ → 0, see Ikeda and Watanabe (1989).
Note that if we used Equations (25, 26) as a starting point but subsequently took the limits in the wrong order, that is, first ǫ → 0 and then ∆ → 0, we would have B, C → 0, so we would obtain the energy estimator A_E. As a final remark, by looking back at the calculations the reader will see that the only term that does not permit interchange of the limits is a second order or "quadratic" term

$$\sum_{n=1}^{N} (W_{t_n} - W_{t_{n-1}})^2,$$

which would vanish with ∆ → 0 if W were a differentiable function, but converges to T in the case of the Wiener process. Roughly speaking, this is because W_{t_n} − W_{t_{n-1}} is of order √∆, which more generally gives rise to the extra terms of the Itô calculus.
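This convergence of the quadratic variation is easy to check numerically:

```python
import numpy as np

# The quadratic variation sum_n (W_tn - W_tn-1)^2 converges to T as dt -> 0,
# even though each increment is only of order sqrt(dt).
rng = np.random.default_rng(3)
T = 1.0
for dt in [1e-1, 1e-2, 1e-3, 1e-4]:
    n = int(round(T / dt))
    dw = rng.normal(0.0, np.sqrt(dt), size=n)
    print(dt, np.sum(dw**2))   # approaches T = 1, fluctuations ~ sqrt(2 T dt)
```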
We evaluate this expression with x_k = z_{t_k} for k = 1, …, N, where {z_t} is some trajectory on the interval I = [0, T] and N = T/∆. Since log(1 + w) ≅ w for small w, we can write the exponent approximately as | 2018-04-17T11:45:00.000Z | 2018-04-17T00:00:00.000 | {
"year": 2018,
"sha1": "3bf3228a48bb14f95e6e07a186a3a6c50fb29196",
"oa_license": null,
"oa_url": "http://centaur.reading.ac.uk/76304/1/main.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a9c9e2fade9a7ef9b505ef5edef8719d23f062cc",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics"
]
} |
238941276 | pes2o/s2orc | v3-fos-license | A LINGUISTIC LANDSCAPE IN HEALTH CONTEXT: A STUDY CASE IN TWO CAMPUSES IN A TOWN
This article focuses on the linguistic landscape that appears in the Faculty of Health of Universitas Muhammadiyah Purwokerto and in Health Polytechnic Semarang. The aim of this study is to compare the linguistic landscapes of these two health campuses. The writers use the theory of Spolsky and Cooper to analyze the linguistic landscape according to the language used in the signs and according to the function of the signs. The writers also used an observation instrument: they visited both places and took pictures of the signs that appeared. There are two dominant languages found among the selected sites: English and Indonesian. Six types are provided to categorize the linguistic landscape: direction signs, advertising signs, warning notices and prohibitions, building names, informative signs, and instructions. Furthermore, the functions of the linguistic landscape for students are as motivation, as an expression of the campus's hopes for its students, and as a medium for communication, information, and creating a culture.
concerned language in use and also how it is represented in public areas. LL can be used as a device for increasing school students' awareness of the linguistic diversity of the country, so as to improve their perceptions of distinction. Bradshaw (2014) claims that LL as an educational tool engages students in authentic literacy activities that extend beyond the classroom and school walls, and thereby links learners' life in school to their community existence.
The research questions for this study are the following:
B. Literature Review
In these public signs, some little texts can be found that show instructions or offers about services in simple form. The verbal form of the public signs is represented by little texts because they are expressions used in a full or right context of situation. The context of the situation is I Gusti Ngurah Rai International Airport, related to flight services for passengers of certain categories. Although these public signs are only phrases or texts, they communicate all the messages. Because of this, all the phrases or texts in public signs are called "little texts" (Halliday, 1985, pp. 373-377). The characteristic of a little text is that it contains incomplete forms in a text (Teguh Baladewa, 2016, pp. 57-58).
A language is a structured system of communication used by humans, based on speech and gesture (spoken language), sign, or often writing. The structure of a language is its grammar, and its free components are its vocabulary. Many languages, including the most widely spoken ones, have writing systems that enable sounds or signs to be recorded for later reactivation (Agha, 2006). The landscape is everything you can see when you look across an area of land, including hills, rivers, buildings, trees, and plants (Collins, 2020). A landscape is all the features that are important in a particular situation.
A landscape is a painting which shows a scene in the countryside (Collins, 2020).
C. Research Methodology
The signs were photographed at two health campuses: the Faculty of Health, Universitas Muhammadiyah Purwokerto, and Health Polytechnic Semarang. A camera phone was used to take pictures of the signs. There were 66 pictures in total, with 29 taken in the Faculty of Health UMP and 37 taken in Health Polytechnic Semarang.
The signs were analyzed using mixed methods. Mixed methods research combines quantitative and qualitative research methods in various ways, with each approach adding something to the understanding of the phenomenon (Ary et al., 2006, p. 559). This approach enables the two distinctive research types to complement each other, so that quantitative data can be given detailed explanations to clarify the meaning of any figures collected (Dornyei, 2007, p. 45).
The quantitative part of the study provided the numerical statistics about the languages found in the LL of the Faculty of Health Universitas Muhammadiyah Purwokerto and Health Polytechnic Semarang.
The results were classified according to the language used in the signs and according to the function and use of the signs.
On the other hand, the qualitative aspect of the study focused on analyzing the transcripts of information recorded from interviewing the headmaster of each school about the function of LL.
D. Findings
The findings of this paper are divided into two points based on the research sites: a) Faculty of Health Universitas Muhammadiyah Purwokerto and b) Health Polytechnic Semarang. Each point contains two sub-points to answer the research questions. The first sub-point addresses the question about the presence of languages in the LL of the targeted sites. This question is answered with the number of languages found in the signs of both campuses. The writers also categorize the languages used into monolingual and bilingual: monolingual for signs that contain only one language, and bilingual for signs that contain two languages.
The second sub-point answers the question about the types of LL in the targeted sites. There are six categories provided by the writers to classify the types of signs: direction signs, advertising signs, warning notices and prohibitions, building names, informative signs, and instructions. The writers limit the number of divisions by looking for similarities between the signs so that more signs can fit in one group. For example: direction and push/pull door labels are included in direction signs; events, buying and selling, job vacancies, services, and promotion are included in advertising signs; warnings and prohibitions are included in warning notices and prohibitions; every room name is included in building names; schedules, labels, announcements, information, achievements, and commemorations are included in informative signs; and procedures, instructions, and mottos/slogans are included in instructions.
Signs in Faculty of Health Universitas Muhammadiyah Purwokerto
The Faculty of Health is the second campus of Universitas Muhammadiyah Purwokerto, located at Jl. Letjen. Soepardjo Roestam Km. 7, Purwokerto. It has five study programs, such as the Nursing D III study program, the Nursing S1 study program, the Midwifery D III study program, and the nursing profession program. Below are their signs.
a. Languages on Sign
In this part, the quantitative aspect of the study, regarding the number and variety of visible languages in the linguistic landscape of the Faculty of Health Universitas Muhammadiyah Purwokerto, is examined. The researcher tabulates the total number of signs and categorizes them into monolingual, bilingual, and multilingual signs. The results are shown in the table below. There is a total of 29 signs found in the Faculty of Health Universitas Muhammadiyah Purwokerto.
According to the table above (see Table 1), Indonesian (17,24%) is the dominant language for monolingual signs. Indonesian is found in 5 of the total 9 monolingual signs.
Most of the monolingual Indonesian signs convey campus information, such as non-smoking areas and special parking areas for lecturers/employees. Monolingual signs written in English were also found (13,79%): only 4 signs are written in English, most of them about food selling, directions, and other information.
b. Language use in the campus LL
The language use in the campus LL is described and explained based on the emerging patterns. Following is the language use in the monolingual and bilingual signs.
c. The language use in the monolingual signs
The use of English, like that of Bahasa Indonesia, was also found in the faculty of health.
Following are examples of signs in English. The sign in figure 3 is a direction sign, which is very useful in helping people find the building they are going to. The sign in figure 4 is above the canteen door, welcoming students or buyers as they enter the room and informing every buyer that they should try to speak English. Table 2 shows that the most common signs used in the Faculty of Health are building names (41,37%), as seen in figures 5 and 6. The function of this kind of sign is to show the name of the building in Indonesian and English.
The second highest percentage is for signs displaying information, or informative signs (37,93%). Various types of signs can be included in this category, such as schedules, labels, announcements, and other information. The type of informative sign most frequently found is the poster sign. For example, figure 7 was on the bulletin board; it announces that a sports event will be held, for which the faculty sent several sports teams to join the competitions.
Figure 7 Informative signs in the form of poster signs
Third, warning notices and prohibitions consist of 2 signs (6,89%). As the name suggests, this category includes any kind of warning and prohibition sign, as seen in figures 8 and 9. Both carry a clear message prohibiting students, as well as lecturers and employees, from smoking in those areas. Figure 8 also serves as a warning notice to everyone not to bring food or drinks into the room.
Table 3 Variety of languages displayed in the LL of Health Polytechnic Semarang
To show the findings of the study in Health Polytechnic Semarang, the researcher tabulates the total number of signs found and categorizes them into monolingual, bilingual, and multilingual. The use of LL in Health Polytechnic Semarang is greater than in the Faculty of Health Universitas Muhammadiyah Purwokerto; this is clearly shown by the total number of signs found in Health Polytechnic Semarang, which is 35. English takes first place among the monolingual signs, with a total of 19 signs (54,24%) found. The uses of English are mostly motivational quotes. This is followed by Indonesian, with a total of only 8 signs (22,85%), mostly informative signs, and then by Latin, with a total of only 4 signs (11,42%), mostly building names.
a. Language use in the campus LL
The language use in the school LL is described and explained based on the emerging patterns. Following is the language use in the monolingual, bilingual and multilingual signs.
b. The language use in the monolingual signs
The languages found in the monolingual signs in the Health Polytechnic's LL are Bahasa Indonesia, English, and Latin. English is dominant there because it can improve students' English. The monolingual signs are mostly framed motivational signs displayed on the classroom walls, for example the motivational quote about nurses seen in figure 13, which describes the work of a nurse as full of tenderness, patience, and tolerance towards patients.
Meanwhile, figure 14 shows a direction sign which people who want to go to the emergency room can follow.
Figure 19 Examples of bilingual signs written in Indonesian-English

d. Types of Sign
According to the function and use of the signs, six categories are presented here: direction signs, advertising signs, warning notices and prohibitions, building names, informative signs, and instructions. Instruction signs have the third highest percentage. The instruction sign can be seen as an umbrella category for procedure signs, mottos or slogans, and any other kind of instruction. For example, the sign in figure 20 is located on the dormitory wall and means that every student must put their shoes in that place.
Meanwhile, the sign in figure 21 instructs students to close the door when entering or leaving the room. Direction is the last type of sign found in Health Polytechnic Semarang: 4 direction signs (11,42%) are displayed, giving directions to various rooms in the campus (see figure 16).
E. DISCUSSION
This study has contributed to the study of the linguistic landscape in two sites: the Faculty of Health Universitas Muhammadiyah Purwokerto and Health Polytechnic Semarang. Bahasa Indonesia and English are found in both places, but differences are also found in many aspects, such as in the use of monolingual signs. In the former, Bahasa Indonesia is very dominant, in the form of instruction, informative, and prohibition signs. This is not surprising, because Bahasa Indonesia is the official language and everyone uses it in daily life. In the latter, most monolingual signs use English: there are many motivational quotes in English, and the use of the language is important for improving students' English. The use of bilingual signs, however, does not differ: both campuses display Indonesian-English signs. Health Polytechnic also uses Latin words. | 2021-09-04T17:59:03.366Z | 2020-11-30T00:00:00.000 | {
"year": 2020,
"sha1": "36d50a0e4fe43e5973feaa5156afa2155b27b049",
"oa_license": "CCBYSA",
"oa_url": "https://ojs.unsiq.ac.id/index.php/cllient/article/download/1955/1179",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "965d8cbdd5c623060c22d0e9a2dd13c622dabba0",
"s2fieldsofstudy": [
"Education",
"Linguistics"
],
"extfieldsofstudy": [
"Sociology"
]
} |
202480548 | pes2o/s2orc | v3-fos-license | Analysis of Characteristics of Industrial Power Load in a Certain Region
Cluster analysis was performed on industrial loads within the same industry class, based on industrial daily load data collected in a certain region. The results indicate that even within the same industry there are still large differences in load characteristics, and that there is a certain similarity between loads in different industries. Compared with the traditional approach of cluster analysis on the overall industrial load, cluster analysis by industry will help power companies carry out more precise load management.
Introduction
The industrial load accounts for a large proportion of the existing power load, so it is important to study its load characteristics. Current industrial load analysis mostly clusters the daily load curves of all industries collected in a certain area [1]-[5]. This analysis method can cluster the industrial daily load curves of a region into several typical characteristic load curves, such as continuous production, peak production, midnight production, and reverse-peak production [6]. Basically, one characteristic load type can represent the typical load of an industry. However, with the upgrade of power companies' metering equipment, the number of users whose data can be collected in each industry has increased exponentially. At this point, performing cluster analysis only on the overall load of an area is not precise enough. Characteristic analysis should also be carried out for industry loads that account for a large proportion of the load in the region. In this way, grid managers are provided with more detailed load information that can help them make decisions, for example, in development planning, about which types of load to introduce into the area so as to have a positive impact on peak shaving and valley filling and on the area's load management capabilities.
Analysis
Taking the industrial load of a certain area as an example, and based on the industrial user load data for a certain working day in 2018 provided by the regional power grid company, cluster analysis is performed on the loads of several industries with abundant collected data. First, according to the national economic industry classification, the industrial load in the region can be divided into categories such as casting and other metal products manufacturing, communication equipment manufacturing, and electronic device manufacturing. For the collected raw data, the number of users in each category is first counted, and then the characteristics of each category's power load are analyzed.
Detection and processing of data outliers
The data of each unit are collected locally by smart meter equipment and sent to the power company via the public communication network. Due to factors such as network conditions, the acquired values usually contain missing values and abnormal points. Data lost at a certain moment can be replaced by the average of the simultaneous loads on other similar days. Outliers are detected using the Grubbs test. The process of discrimination is as follows (a code sketch is given after the list).
• Calculate the mean $\bar{x}$ and standard deviation $s$ of a set of load data.
• Calculate the Grubbs statistic $G_i = |x_i - \bar{x}| / s$.
• Determine the detection level α. Refer to GB4883 to obtain the corresponding Grubbs critical value; if $G_i$ exceeds this critical value, $x_i$ is identified as a high outlier and the entire set of data is eliminated.
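A sketch of this screening procedure is shown below, assuming scipy is available; the critical values are computed here from the t-distribution rather than looked up in the GB4883 tables.

```python
import numpy as np
from scipy import stats

def grubbs_check(x, alpha=0.05):
    """Return True if the most extreme point of x fails the Grubbs test,
    in which case the whole set is discarded, as described in the text."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    g = np.max(np.abs(x - x.mean())) / x.std(ddof=1)
    # Critical value via the t-distribution (equivalent to table lookup).
    t2 = stats.t.ppf(1 - alpha / (2 * n), n - 2) ** 2
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t2 / (n - 2 + t2))
    return g > g_crit

# Example: keep only daily curves (96 points each) without outliers.
curves = np.random.default_rng(0).normal(50, 5, size=(100, 96))
clean = np.array([c for c in curves if not grubbs_check(c)])
```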
Normalization of data
Because of the different nature of the power users, the magnitudes of their electricity consumption data may vary greatly. In the clustering process, data with large load values may then dominate data with small load values, or even drown them out, so that the latter's role is not reflected. To ensure that each individual has the same weight within the entire data set in the analysis, the data are standardized. In this paper, the maximum normalization method is used, that is, each set of data is divided by the maximum value of that set, converting it into numbers between 0 and 1.
Cluster analysis
After the above data processing, the daily load curve analysis of each industry is carried out. The most widely used clustering algorithm is k-means. First, k objects are selected from all data objects as the initial cluster centers. Each of the remaining objects is assigned to the cluster closest to it, based on its distance from these cluster centers. Then the average of each new cluster is recalculated as the new cluster center. This process is repeated until the criterion function converges. Usually the sum of the squared deviations of all data points from their cluster centers is used as the criterion function.
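A minimal sketch of the normalization and clustering steps is given below, using scikit-learn's KMeans; the synthetic data and the choice k = 4 are illustrative only.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_daily_loads(loads, k):
    """Cluster daily load curves (one row per user, 96 points at 15-minute
    resolution) after maximum normalization, returning the cluster labels
    and the typical (centroid) curves."""
    norm = loads / loads.max(axis=1, keepdims=True)  # scale each curve to [0, 1]
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(norm)
    return km.labels_, km.cluster_centers_

def load_rate(curve):
    """Load rate = average load / maximum load of a daily curve."""
    return curve.mean() / curve.max()

# Example with synthetic data: 1440 users, 96 samples per day.
rng = np.random.default_rng(0)
loads = rng.uniform(10, 100, size=(1440, 96))
labels, centers = cluster_daily_loads(loads, k=4)
rates = [load_rate(c) for c in centers]
```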
In this paper, a total of 1,440 power user data sets were screened; the data collection density was one point every 15 minutes, for a total of 96 points a day. Each data set represents the working-day load curve of one power user, across a total of 188 industrial sector categories. Among them, the number of loads is larger in the manufacture of foundry and other metal products, structural metal products manufacturing, cotton textile and printing and dyeing finishing, metal processing machinery manufacturing, cement manufacturing, manufacture of general-purpose components, manufacture of electronic components and electronic materials, manufacture of electronic components, manufacture of ships and related equipment, and manufacture of transportation equipment and other transportation equipment. The load clustering results for each of the above industries are shown in Figure 1.

Figure 1. Industrial load analysis in different industries

As can be seen from Figure 1, although all of these are industrial loads, the load curves differ between industries. But there are some similar load curves, such as cluster 2 of graph (a), cluster 1 of graph (b), cluster 2 of graph (d), cluster 1 of graph (f), and cluster 1 of graph (j). The shapes of these load curves are similar, but they still differ slightly. Therefore, it is necessary to use a quantitative indicator to distinguish them. In this paper, the load rate is used as the quantitative indicator, calculated as the ratio of the average daily load to the maximum daily load. The calculation results for the load rates of the typical load curves in the various industries are shown in Table 1.

In Figure (a), at 9:00-21:00, the load of cluster 1 is very low, suggesting that the main productive load is not running. At 0:00-7:00 and 22:00-24:00, the load is at a high level, which is a typical peak production load. Cluster 2 maintains a consistently high load level, with a dip around 12:00, which is consistent with the general work schedule of workers. Cluster 3 has the same trend as cluster 1, but the average load and load rate are different. Figures (b), (d), (f), and (j) are similar to the case of Figure (a). In Figure (c), cluster 4 is a typical reverse-peak production type, but a high load level occurs between 12:00 and 18:00. The load types of Figure (e) are quite different from those of other industries, being mainly of the reverse-peak production type. The load curves of figures (g), (h), and (i) are similar and can be divided into continuous production and daytime production types, in which the load rates of the daytime production type differ considerably. | 2019-09-11T14:13:11.286Z | 2019-08-09T00:00:00.000 | {
"year": 2019,
"sha1": "1bd9207a0bc955ee5a8a22fdffce1ecc7edec86b",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/563/5/052109",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "8075fb73ba30abfae0e051307fcc3868111109ce",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Engineering"
]
} |
15362353 | pes2o/s2orc | v3-fos-license | Preparation and Photocatalytic Activity of Potassium-Incorporated Titanium Oxide Nanostructures Produced by the Wet Corrosion Process Using Various Titanium Alloys
Nanostructured potassium-incorporated Ti-based oxides have attracted much attention because the incorporated potassium can influence their structural and physico-chemical properties. With the aim of tuning the structural and physical properties, we have demonstrated the wet corrosion process (WCP) as a simple method for nanostructure fabrication using various Ti-based materials, namely Ti–6Al–4V alloy (TAV), Ti–Ni (TN) alloy and pure Ti, which have 90%, 50% and 100% initial Ti content, respectively. We have systematically investigated the relationship between the Ti content in the initial metal and the precise condition of WCP to control the structural and physical properties of the resulting nanostructures. The WCP treatment involved various concentrations of KOH solutions. The precise conditions for producing K-incorporated nanostructured titanium oxide films (nTOFs) were strongly dependent on the Ti content of the initial metal. Ti and TAV yielded one-dimensional nanowires of K-incorporated nTOFs after treatment with 10 mol/L-KOH solution, whereas TN required a higher concentration (20 mol/L-KOH solution) to produce comparable nanostructures. The obtained nanostructures revealed a blue-shift in UV absorption spectra due to the quantum confinement effects. A significant enhancement of the photocatalytic activity was observed via the chromomeric change and the intermediate formation of methylene blue molecules under UV irradiation. This study demonstrates the WCP as a simple, versatile and scalable method for the production of nanostructured K-incorporated nTOFs to be used as high-performance photocatalysts for environmental and energy applications.
In particular, nanostructured alkali-metal-incorporated titanates containing A-Ti-O (A = alkali element and/or H) bonds have been produced as nanotubes, nanorods, nanofibers, and nanosheets. They have attracted considerable interest because of their unique layered structure [18][19][20][21][22]. Since the presence of the alkali metal in these titanates can significantly influence the physical properties, they have been studied extensively in order to tune their physical properties beyond the limitations of conventional materials, for instance in applications such as electrical devices, photocatalysts, energy storage, gas sensors, and photovoltaics [23][24][25][26]. Among them, potassium (K)-incorporated titanates have been of particular interest due to their specific photochemical properties and their artificial cage-type structure [27][28][29][30]. Although these K-incorporated nanostructured titanium oxide films (nTOFs) have been among the leaders in this new class of materials, their synthesis still has limitations to tackle. Although several synthesis methods, such as the sol-gel process, template-mediated processes, and hydrothermal routes, have been developed to obtain the desired structural, chemical, and physical properties, nanostructure fabrication has generally involved complicated processes, low reproducibility, and high costs for well-controlled chemical modification [21][22][23][24][25][26][27][28][29][30][31][32][33][34]. Since the physical properties can be greatly affected by the size and morphology of the nanostructures as well as by the precise amount of incorporated K, several processes had to be combined, exacerbating these drawbacks. Despite great expectations for these materials, there is still a lack of systematic research and strategy for nanostructure fabrication with simultaneous control of the physical properties. This bottleneck strongly limits their fast implementation in potential applications [35][36][37]. Hence, elaborating a process that allows for easy synthesis and simple tuning of the desired morphology and properties via a single-step process is currently one of the key issues in this research field.
As previously reported, K-incorporated nTOFs could be produced via the wet corrosion process (WCP) using pure Ti metal [23,38,39]. WCP is a simple one-step process that uses alkali solutions. Moreover, this process produces nanostructures as a thin film directly on a metal surface, which is particularly advantageous for diverse applications. Since the nanostructures are immobilized on the metal surface, the problem of homogeneous dispersion, which very often occurs with powders, can be avoided. Previous work on the WCP demonstrated that it could control the morphology as well as improve the electrical properties of K-incorporated nTOFs by using various concentrations of KOH aqueous solutions without any additional processes. Moreover, a direct correlation between the structural and physical properties and the incorporated K amount was illustrated. In the present work, we extended our studies to different Ti-based metal substrates and, for the first time, systematically investigated the characteristics of the resulting nTOFs depending on the initial Ti content prior to the WCP. The results clearly indicate a direct dependence of the nanostructure fabrication on the Ti content of the initial material. Moreover, the optical properties of the nTOFs could be correlated with the structural properties. Understanding the relationship between the Ti content of the initial metal and the WCP conditions needed to control the structural and physical properties makes it possible to produce nTOFs on demand, for various applications.
Recently, Ti-based materials have attracted significant attention as multifunctional semiconductor photocatalysts for eco-friendly alternative technologies, in particular because of their chemical and biological inertness, photostability, and low production cost [40,41]. So far, a number of effective approaches have been developed to enhance the photocatalytic activity by tuning the structural, chemical, and physical properties of modified Ti-based materials [42][43][44]. However, the relationship between nanostructure fabrication and the resulting photocatalytic properties is still unclear. In this study we demonstrate that the WCP yields highly reproducible nanostructures of K-incorporated nTOFs with high potential for treating water containing organics, removing metals from water, and splitting water.

Results and Discussion

Figure 1 shows the scanning electron microscope (SEM) images of the surfaces of the Ti, Ti-Al-V (TAV) and Ti-Ni (TN) substrates treated with 1, 10 and 20 mol/L KOH solution at room temperature for 24 h. In our previous works, we have already described the morphology obtained from pure Ti plates after KOH treatment at various concentrations [21,36,37]. Consistent with these previous reports, 3D network structures consisting of elongated nanowires were formed on the surface of the Ti plates (Figure 1a). The resulting network structures were strongly determined by the precise concentration of the KOH solution; only within the window of 10-20 mol/L KOH treatment could 3D network structures of nanowires be obtained. The diameter of these nanowires ranged from 10 to 100 nm, and their length was several tens of micrometers. Interestingly, for TAV substrates, a very comparable morphology evolution was obtained under the same KOH conditions (Figure 1b). In contrast, the morphology obtained after treatment of TN substrates indicated that the conditions for establishing nanowire structures differed from those of the Ti and TAV samples. For the TN samples, the process window for successful nanostructure synthesis was shifted to higher KOH concentrations. After treatment with 1 mol/L KOH solution, ball-like structures formed on the TN substrates, while under the same conditions, thick and short nanowires were produced on the Ti and TAV surfaces. Such thick and short nanowires could only be obtained on TN substrates when the KOH concentration was increased to 10 mol/L. The comparable 3D network structures with long and thin nanowires obtained from Ti and TAV substrates by treatment with 10 mol/L KOH solution could be produced by treating TN substrates with 20 mol/L KOH solution. Although the KOH concentrations needed to produce nanostructures differ among the initial materials, the trend in the resulting morphology was almost the same: short and thick nanowires formed first at lower concentrations, whereas at higher concentrations (>10 mol/L KOH), 3D network structures with thin and long nanowires were favored, which disappeared at much higher concentrations. It has to be emphasized that our results show for the first time that various morphologies can be synthesized from various Ti-based materials via the WCP by fine-tuning the treating KOH solutions. As already mentioned in previous reports, K becomes incorporated into the resulting titanium oxide nanostructures during the KOH treatment [23,38,39]. Thus, chemical bonds rearrange during the fabrication process and change the surface chemical composition.
In order to verify the surface chemical composition of the obtained nanostructured oxide films, X-ray photoelectron spectroscopy (XPS) analysis was performed. As already reported for Ti substrates [23], the incorporation of K into the titanium oxide films could be evidenced by the existence of the K2p peak in the XPS spectrum. The XPS data obtained from the Ti substrate were in agreement with the previous results and are therefore not shown here. Figure 2a shows a wide range XPS spectrum of the surface of the TAV substrate subjected to the 10 mol/L-KOH solution. The XPS spectrum of the untreated TAV substrate is included for comparison. The untreated TAV substrate showed the Ti2p doublet peaks of Ti2p1/2 and Ti2p3/2 at 453.8 eV and 458.9 eV, respectively, attributed to the Ti-Ti bond. However, the 10 mol/L-KOH-treated substrates showed these peaks at a higher binding energy of 458.7 eV and 464.4 eV due to the Ti-O bond formation. Note that an Al holder and an X-ray source of 600 W excitation were used to obtain the XPS data, and we slightly controlled the aperture. Because of this, the Al intensity was very low and detection was very difficult. The V2p peaks could be detected at 509.1 eV on the untreated TAV substrate, which disappeared after the KOH treatment, indicating the removal of the alloying species Al and V during the KOH treatment and the formation of potassium titanates on the surface without incorporation of Al and V [45]. Conversely, the K2p peak could clearly be detected at 293.9 eV in contrast to the untreated substrate. This peak position, however, does not match the pure K phase but rather the oxidized state K + , which can be explained by the formation of the Ti-O-K bonding [23]. The O1s peak was detected on the untreated substrate surface due to the natural passive oxide layer, but a very strong O1s peak appeared at 530.9 eV after the KOH treatment. This peak corresponds to the binding energy of the O1s peak ascribed to the Ti-O bond, owing to the potassium titanates formed on its surface [46,47]. Figure 2b shows the wide range XPS spectrum of the TN substrates after treatment with 20 mol/L-KOH solution as well as the spectrum of the untreated substrate for comparison. The untreated TN substrate also showed the Ti2p doublet peaks of Ti2p1/2 and Ti2p3/2 at 454.2 eV and 458.1 eV, respectively, which were very comparable with the TAV substrate. These peaks are attributed to the Ti-Ti bond. The 20 mol/L-KOH-treated substrates also showed a chemical shift towards higher binding energies, namely at 458.3 eV and 463.2 eV, respectively, again indicating the formation of Ti-O bonds in potassium titanates. Strongly pronounced Ni2p doublet peaks of Ni2p1/2 and Ni2p3/2 were detected in the 850-890 eV region of the spectrum of the KOH-treated substrate, compared with the untreated substrate. Similar to the chemical shift to higher binding energy, as observed for the Ti2p peak in the KOH-treated TAV substrate, the main Ni2p peak also shifted to a higher binding energy of 854.9 and 874.5 eV due to the Ni-O bond formed, leading to the oxidation state of Ni 2+ and Ni 3+ [48][49][50][51]. This means that Ni oxide components coexist in the obtained oxide film, unlike the TAV substrates where the alloying species of Al and V are released during the KOH treatment. 
Hence, although TAV and TN are both Ti-based alloys, the behavior of their alloying species after the KOH treatment is different: most of the Al and V are released, whereas a small amount may remain in the potassium titanate nanostructures. This is in agreement with the trend reported in the literature, where up to 50% of Al and V release was observed during KOH treatment [45]. In the case of TN substrates, although Ni is released from the alloy during the KOH treatment, remnants of Ni persist in the obtained oxide films due to the large amount of Ni present in the TN substrate. This phenomenon can clearly be seen in the XPS spectrum from the Ti2p and Ni2p peaks occurring at 458.3 eV and 854.9 eV in the KOH-treated TN substrates. These Ti2p and Ni2p peaks thus correspond to Ti and Ni in the Ni-Ti alloy. This result is also consistent with the X-ray fluorescence (XRF) results, which reveal no significant change in the Ni content before and after the KOH treatment (data not shown). Comparable to the TAV series, the K2p peak is clearly absent in the untreated substrate but could be detected at 295.7 eV after the KOH treatment. This result also clearly confirms the formation of Ti-O-K bonding on the surface. The evolution of the O1s peak is similar to that observed in the TAV substrates: a very strong O1s peak at 530.4 eV clearly demonstrates the pronounced formation of Ti-O bonds owing to the conversion to potassium titanates after the KOH treatment. For pure Ti substrates, the synthesis of potassium titanate nanostructures via the WCP was explained in previous work [21]. To summarize briefly, Ti metal is partially dissolved by the KOH solution and then forms Ti-O bonds. The involved chemical reactions are described as follows:
Following the mechanism reported for the alkali treatment of Ti [21], these reactions can be written as

TiO₂ + OH⁻ → HTiO₃⁻

Ti + 3OH⁻ → Ti(OH)₃⁺ + 4e⁻

Ti(OH)₃⁺ + e⁻ → TiO₂·H₂O + ½H₂↑

Ti(OH)₃⁺ + OH⁻ ⇌ Ti(OH)₄

TiO₂·nH₂O + OH⁻ ⇌ HTiO₃⁻·nH₂O
Since this process forms negatively charged HTiO₃⁻·nH₂O, which attracts positively charged K⁺ ions, parts of the Ti-O bonds turn into Ti-O-K due to the incorporation of K ions. In order to study the generated chemical bonds in more detail, a Raman analysis was performed. The Raman spectra of the KOH-treated Ti, TAV and TN series are presented in Figure 3. For comparison, untreated Ti was used as a reference, and 10 mol/L-KOH-treated Ti and TAV and 20 mol/L-KOH-treated TN substrates were selected in order to correlate the same nanostructure morphology with the Raman signals. As can be seen in Figure 3, the gradual change of the Raman spectra is remarkably similar for the samples with comparable morphology originating from different substrates and different WCP conditions, when compared with the untreated sample. In the present studies, a Raman spectrometer in the back-scattering configuration was used. Generally, Raman shifts are affected by the vibration of the electronic polarization of the constituents in the films, which depends on bonding characteristics such as the atomic distance and the bonding angle [52]. Hence, the Raman results presented in Figure 3 suggest that the chemical changes occurring during the WCP synthesis are the same for all starting materials, in agreement with the XPS results shown in Figure 2. Together with the SEM and XPS results (Figures 1 and 2, respectively), we can conclude that the nanostructures formed from different starting metal substrates and under different WCP conditions are not only of the same morphology but also contain the same chemical bonds. As we previously reported, nanostructures originating from Ti plates via the WCP consist of two components, namely a TiO2 phase containing Ti-O bonds and K-incorporated Ti-O-K bonds, both generated by the rearrangement of atoms in the Ti metal during the WCP [23]. The K-incorporated Ti-O-K bonds give rise to a new peak in the Raman spectrum close to 280 cm⁻¹, which is present in the Raman spectra of all samples (Figure 3).
For both Ti-O and Ti-O-K components, the integrated intensity follows the same trend and increases with the KOH concentration, indicating that the potassium titanates and TiO2 phase are synthesized simultaneously during the WCP and the amount of incorporated potassium is directly correlated with the amount of the Ti-O-K component. This result is in agreement with our previous observation on pure Ti metal plates, where Ti partly dissolved in alkaline solution, leading to synthesis of TiO2 substances. However, the new results illustrate that the general trend for volume changes of the Ti-O and Ti-O-K components is remarkably similar for different starting materials with various Ti content within the process window, leading to nanostructures of the same morphology. This result indicates that the various Ti-containing starting materials follow the same chemical reaction of WCP and that the Ti content in the initial metal determines the amount of Ti-O-K component formed. Figure 5 shows the results of the XRF measurements for the Ti-, TAV-, and TN-samples as a function of the K content. It is noteworthy that the evolution of the K content in the Ti and TAV series is almost the same, while again the TN series show a different trend. As already mentioned, the KOH-treated Ti and TAV series behave similarly at the same KOH concentration conditions, leading to very comparable morphology and structure of nanostructures. The fact that the evolution of the K content in the Ti and TAV series is also similar within the same KOH condition indicates that the Ti and TAV samples can be considered as very similar with respect to the WCP process. This observation also supports the Raman and XPS results showing Al and V mostly released from the alloy during the KOH treatment and leaving Ti behind to undergo the chemical reactions mentioned above. In contrast, the TN substrates behave differently. The morphology and structural properties of 20 mol/L-KOH-treated TN substrates were close to the results obtained from the 10 mol/L-KOH-treated Ti and TAV substrates. Hence, the chemical process window shifted to a higher concentration region. The XRF results clearly demonstrate that this shift also goes together with the K content: In the 20 mol/L-KOH-treated TN substrates, the K content is close to the value obtained from the 10 mol/L-KOH-treated Ti and TAV substrates. Taking these results into account, we can conclude that (i) the resulting morphology is strongly correlated with the K content in the nTOFs; and (ii) the K content is an essential parameter for the formation of 3D network structures with long and thin nanowires in the Ti, TAV, and TN samples. The WCP reaction commonly occurs between the initial material and the solution, but the process window for producing 3D network structures greatly depends on materials and the process condition. This implies that there is a limitation in the reaction. The Ti content in the initial material in each series can be considered the key factor in fabricating the nanostructured morphology via WCP. Therefore, the similar Ti content in Ti and TAV substrates leads to very similar results for the WCP, unlike the TN substrates with significantly lower Ti content. In order to form titanium oxide, Ti has to dissolve first in alkaline solution to establish Ti-O bonds, which represent the main bonds of the nanostructures. It was reported that the structure of the potassium titanates consist of layered sheets containing Ti-O bonds linked to K atoms [59]. 
Consequently, we expect K to be partly incorporated into the oxide structures and to form the final structure on the surface of the treated substrates. Hence, the resulting morphology is governed by the amount of Ti-O bonds formed during the WCP. Indeed, TEM studies confirmed nanowires containing a layered structure (Figure S1). In agreement with the XRF data, electron energy loss spectroscopy and energy-dispersive X-ray analysis revealed the presence of K in individual nanowires. Comparing the Ti and TAV series with the TN series, the TN substrates can only form a significantly smaller number of Ti-O bonds because of their reduced Ti content. This shortage of Ti can be compensated for by using higher concentrations of KOH solution, which strongly promotes Ti-O bond formation. Consequently, the 3D network structures with nanowires form at the higher KOH concentration. Thus, the amount of Ti in the Ti-based metals and the amount of incorporated K are the key factors in producing the nanostructured morphology.
To investigate the electronic structure of the obtained nanostructured Ti, TAV, and TN series, UV-vis measurements were carried out. Figure 6 presents UV-vis absorption spectra of the Ti, TAV, and TN series with the same morphology. Enhanced absorption peaks were clearly seen after the KOH treatment in all series, which indicates that a change of the optical band gap occurred after the KOH treatment of the substrates. From the absorption spectra, this change can be estimated from the direct correlation E = hc/λ between the optical band gap energy E and the measured wavelength λ [60], where the parameters h and c correspond to the Planck constant and the velocity of light, respectively. In the Ti series, the absorption peak was observed at approximately 360 nm, corresponding to a band gap of 3.44 eV in the untreated substrates, while the absorption peak shifted to approximately 317 nm in the 10 mol/L-KOH-treated substrates. This corresponds to a modified band gap of 3.91 eV. In the TAV series, the absorption peak was observed at approximately 348 nm, which is consistent with a band gap of 3.56 eV in the untreated substrates. In the 10 mol/L-KOH-treated substrates, the absorption peak was observed at approximately 318 nm, corresponding to a band gap of 3.89 eV. In contrast, the absorption peaks of the untreated and 20 mol/L-KOH-treated TN substrates appeared at approximately 356 nm and 267 nm, corresponding to band gaps of 3.48 eV and 4.64 eV, respectively. Compared to the optical band gap reported in the literature for TiO2 (3.20 eV) [61][62][63], it has to be emphasized that a blue shift of the absorption peak was observed in all series after the KOH treatment. It is well known that a shift of the absorption peak depends on the atomic composition of metallic films and is related to quantum size effects in nanomaterials [64][65][66]. Since the atomic rearrangement during the WCP results in nanostructures with Ti-O and Ti-O-K components, we think that the energy levels change due to this atomic rearrangement, leading to a shift in the absorption peak. Consequently, the role of the K atoms in the WCP can be considered as follows: a first-principles study of KNbO3 reported that the K electron states were located in the upper part of the conduction band (>8 eV) and in the lower part of the valence band (<−10 eV) [67]. Thus, the bottom of the conduction band and the top of the valence band were determined by the Nb4d and O2p states and were not influenced by K. Consequently, we can conclude that the blue shift observed in the vicinity of the TiO2 phase (around 3.2 eV) is determined by the Ti and O states. A more detailed study is needed in order to fully understand the origin of this blue shift. Nevertheless, this variation of the absorption properties of the obtained nanostructured Ti-based oxide films holds great promise for tuning the electronic and optical properties for a wide range of future technological applications. In order to examine the photocatalytic activity of the K-incorporated nanostructures obtained from the Ti, TAV, and TN series, the photodegradation of methylene blue (MB) was evaluated under UV light irradiation. The UV-vis absorption spectra presented in Figure 7 demonstrate the decomposition of the MB dye leading to decolorization. For comparison, the absorption spectra of the pristine MB aqueous solution as well as that of an untreated sample are also presented. The major absorption peaks appear at about 612 and 664 nm, which are characteristic of MB dye [68]. Their decrease in intensity can be used as a direct indication of photocatalytically activated dye degradation. Indeed, all nanostructures produced from the Ti, TAV, and TN alloys show a higher photocatalytic activity than the untreated samples.
This observation is in agreement with the remarkable photocatalytic properties reported for nanostructured titanium oxide materials with an enhanced surface area and increased electron transfer ability [69,70]. In Figure 7, the results of the 10 mol/L-KOH-treated Ti and TAV samples are presented together with those of the 20 mol/L-KOH-treated TN sample. As presented in Figure 1, all three samples yield comparable nanostructures under the respective WCP conditions. Interestingly, the 10 mol/L-KOH-treated Ti and TAV samples show a higher photocatalytic activity than the 20 mol/L-KOH-treated TN sample despite their similar morphology. The intensity of the MB peaks decreased to the blank level for the Ti and TAV samples, whereas it was reduced to about half for the TN sample after 2 h of UV exposure. This difference between the Ti (or TAV) samples and the TN samples can be explained as follows: generally, the photocatalytic process is based on the photogeneration of electron-hole pairs, which initiate redox reactions with the species adsorbed on the surface of the catalyst. In the photocatalytic process, OH radicals originating from the oxidation of OH− or H2O by the photogenerated electron-hole pairs in the presence of oxygen are considered the major reactant responsible for the photocatalytic oxidation of organic materials and the degradation of pollutants [71][72][73]. Consequently, improved electronic properties can be linked to enhanced photocatalytic activity. In our work, the electronic properties are governed by the amount of Ti-O-K components. Therefore, the 20 mol/L-KOH-treated TN sample, which has fewer Ti-O-K components, shows a less pronounced photocatalytic activity. From these results, we can conclude that an efficient charge/energy transfer occurs in the KOH-treated samples under photo-irradiation and leads to the improved photocatalytic activity, as evidenced by the drastic diminishment of the MB absorption peaks. These results clearly demonstrate that the K-incorporated nTOFs can serve as a cost-effective, highly efficient, and environmentally friendly photocatalyst.
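Both UV-vis evaluations used in this work reduce to short numerical steps: converting an absorption-peak wavelength into an optical band gap via E = hc/λ, and estimating the MB degradation from the drop of the 664 nm absorbance (by the Beer-Lambert law, absorbance is proportional to the dye concentration). The sketch below is illustrative; the absorbance values passed to the degradation helper are made up, not measured data.

```python
H = 6.62607015e-34    # Planck constant (J*s)
C = 2.99792458e8      # speed of light (m/s)
EV = 1.602176634e-19  # joules per electronvolt

def band_gap_eV(wavelength_nm):
    """Optical band gap from an absorption-peak wavelength, E = hc/lambda."""
    return H * C / (wavelength_nm * 1e-9) / EV

# Matches the values quoted above to within rounding:
# 360 nm -> 3.44 eV, 317 nm -> 3.91 eV, 348 nm -> 3.56 eV,
# 318 nm -> 3.90 eV, 356 nm -> 3.48 eV, 267 nm -> 4.64 eV.

def mb_degradation(a0_664, at_664):
    """Fraction of MB decomposed, from the 664 nm absorbance before/after UV."""
    return (a0_664 - at_664) / a0_664

print(mb_degradation(1.2, 0.6))  # 0.5: roughly the TN-sample behaviour above
```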
Preparation of Nanostructured Titanium Oxide Films with Ti, Ti-Al-V Alloy, and TiNi Alloy
Commercially available pure Ti metal, Ti-Al-V alloy, and TiNi alloy substrates were used in the present study. Although the substrates used in this work were all titanium-based metals, their total Ti content differed. The Ti and Ti-Al-V plates were purchased from ChemPUR with the compositions given in Table 1. The TiNi substrate was a hot-rolled plate with a Ti:Ni ratio close to 50:50 but with some traces of Cu. Ti and TAV contained 100% and 90% Ti, respectively, whereas TN consisted of 50% Ti.
The detailed compositions are listed in Table 1. All metal substrates, 10 × 10 × 1.0 mm³ in size, were polished with #400-#2000 SiC papers and washed with pure acetone and distilled water in an ultrasonic cleaner. Then, alkali treatment was performed by soaking the substrates in 5 mL of KOH aqueous solution with concentrations of 1, 5, 10, 15, 20, 25, and 30 mol/L at room temperature for 24 h. In this paper, we have mainly shown the results of the 1, 10, and 20 mol/L-treated samples because they were the most representative. For instance, the 10 mol/L-KOH-treated Ti and TAV samples and the 20 mol/L-KOH-treated TN samples showed significantly different morphology compared to the 5 mol/L treatment, which yielded a morphology rather similar to that of the 1 mol/L treatment. After the alkali treatment, all metal substrates were gently washed with distilled water.
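As a worked example of the solution preparation (illustrative arithmetic, not a protocol detail stated in the paper), the KOH mass needed for 5 mL at each concentration follows from m = c · V · M with M(KOH) ≈ 56.11 g/mol:

```python
M_KOH = 56.11     # molar mass of KOH, g/mol
VOLUME_L = 0.005  # 5 mL of solution per substrate

for conc in (1, 5, 10, 15, 20, 25, 30):  # mol/L, as used above
    print(f"{conc:2d} mol/L -> {conc * VOLUME_L * M_KOH:5.2f} g KOH")
# e.g. 10 mol/L -> 2.81 g and 20 mol/L -> 5.61 g, dissolved and made up
# to the final 5 mL volume.
```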
Characterizations
Changes in the surface structure, shape, and size of the Ti, Ti-Al-V, and TiNi specimens were observed using a scanning electron microscope-focused ion beam system (Dual Beam FEI Nova SEM: FEI Company, Model Nova 600 NanoLab, Hillsboro, OR, USA), which was operated at 15 kV with top-view and 45°-tilted samples. To investigate the structural properties of the obtained products, Raman spectroscopy (Renishaw inVia Raman microscope) was performed using 514.5 nm Ar-laser radiation as the excitation source. For analyzing the elements in all fabricated specimens, X-ray fluorescence (XRF: Philips, Model XRF Spectrometer Philips PW 2400, Amsterdam, The Netherlands) measurements were carried out using a powerful X-ray source (50 kV, 40 mA). The elemental composition and chemical state of the elements present in all specimens were determined by X-ray photoelectron spectroscopy (XPS: JEOL, Model JPS-9010 MC Photoelectron Spectrometer, Tokyo, Japan) with a focused monochromatic Al Kα X-ray source (600 W) for excitation. The selected-region spectra were recorded covering the Ti2p, O1s, K2p, Ni2p and C1s photoelectron peaks. The acquisition conditions for these high-resolution spectra were a pass energy of 20 eV with a step of 0.1 eV. In order to evaluate the optical properties and electronic states of the obtained nanostructured Ti-based oxide films, UV-vis spectroscopy (UV: JASCO Inc., Model JASCO V-570, Tokyo, Japan) was carried out. The optical density was measured in absorption mode.
Photocatalytic Activity Testing of the K-Incorporated nTOFs
Methylene blue (MB) at a concentration of 10 mg·L−1 was used as the testing solution. For the photocatalytic assessments, a UV exposure unit (LV202-E, Farnell, Leeds, UK) was used, with a light-source working area of 159 mm × 229 mm and a distance between the light source and the sample of about 10 cm. The UV output was 2 × 8 W with wavelengths in the range 350-450 nm. To be precise, 350 µL of the MB solution was placed onto the surface of the untreated Ti and the K-incorporated nTOFs. The UV radiation was applied to the reaction sample plates for 2 h, and the photocatalytic activities were evaluated by collecting the UV-vis (UV: Tecan Model Infinite M2000 PRO, Männedorf, Switzerland) absorption spectra of the solution.
Conclusions
We fabricated nanostructured K-incorporated titanium oxide structures via the WCP using Ti, TAV, and TN substrates. The relation between the Ti content of the initial metal and the WCP conditions was studied systematically, with the conclusion that these parameters strongly control the morphology and physical properties of the generated nanostructures. One-dimensional nanostructures could be obtained within a specific window of process conditions: Ti and TAV metals, which contain 90% Ti or more, yielded elongated nanostructures of K-incorporated titanium oxide films after treatment with 10-20 mol/L KOH solution. In contrast, TN metal with about 50% Ti content required KOH treatment with >20 mol/L KOH solution, indicating that the process window shifted to a higher concentration. For all samples the morphology evolution with the WCP conditions was similar, but the precise Ti content of the initial material determined the amount of incorporated potassium as well as the KOH concentration at which elongated nanostructures were produced. This phenomenon can be explained as follows: a TiO2 phase consisting of Ti-O bonds and a K-incorporated titanium oxide phase consisting of Ti-O-K bonds are generated after treatment with KOH solution. During this process, atoms in the Ti metal rearrange, forming Ti-O and Ti-O-K bonds. Furthermore, as demonstrated, the physical properties of the resulting material varied with the WCP conditions. In particular, the blue shift of the UV-vis spectra indicated a quantum confinement effect as well as an increase of the optical band gap. The K-incorporated nTOFs also exhibited high photocatalytic activity, which can particularly be attributed to the enhanced charge/energy transfer owing to the present Ti-O-K components. These results demonstrate that the WCP is a simple and highly controllable technique for the fabrication of K-incorporated titanium oxide nanostructures. The possibility to precisely tune the morphology as well as the physical properties by controlling the KOH concentration and by selecting the starting Ti-based material makes this technique highly versatile and interesting, especially for diverse large-scale applications. Especially for the production of nanostructures to be used in photocatalysis, water treatment, and potentially in biomedical applications, such as antibacterial control systems, the WCP holds great promise. | 2015-09-18T23:22:04.000Z | 2015-08-21T00:00:00.000 | {
"year": 2015,
"sha1": "bd94c2c43c5a6bf977381e59d18a4aab9033118d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-4991/5/3/1397/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bd94c2c43c5a6bf977381e59d18a4aab9033118d",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
237501017 | pes2o/s2orc | v3-fos-license | The impact of COVID-19 on the male reproductive tract and fertility: A systematic review
ABSTRACT Objective The COVID-19 pandemic, caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), remains an ongoing public health challenge that affects males slightly more than females. Evidence suggests that the male reproductive system is susceptible to this viral infection, yet several pertinent questions regarding testicular SARS-CoV-2 dynamics and the exact mode of its actions remain to be fully explained. This systematic review therefore aims to provide a concise update on the effects of COVID-19 on male reproductive health, including the presence of viral RNA in semen, and the impact on semen quality, testicular histology, testicular pain and male reproductive hormones. Methods A systematic review was conducted according to Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines searching the PubMed database. Eligible for inclusion were original human studies evaluating the impact of COVID-19 on male reproductive health. Specific outcomes required for inclusion were at least one of the following: i) seminal detection of viral RNA, or evaluation of ii) semen analysis, iii) testicular histology or ultrasonography, iv) testicular clinical symptoms and/or v) male reproductive hormones in COVID-19-positive patients. Results Of 553 retrieved articles, 25 met the inclusion criteria. These included studies primarily investigating the presence of viral RNA in semen (n = 12), semen quality (n = 2), testicular histology (n = 5), testicular pain (n = 2) and male reproductive hormones (n = 4). Results show little evidence for the presence of viral RNA in semen, although COVID-19 seems to affect seminal parameters, induce orchitis, and cause hypogonadism. Mortality cases suggest severe histological disruption of testicular architecture, probably due to a systemic and local reproductive tract inflammatory response and oxidative stress-induced damage. Conclusions Clinical evaluation of the male reproductive tract, seminal parameters and reproductive hormones is recommended in patients with current or a history of COVID-19, particularly in males undergoing fertility treatment. Any long-term negative impact on male reproduction remains unexplored and an important future consideration.
Introduction
Coronavirus disease 2019 (COVID-19) is caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) [1,2], a highly transmissible coronavirus (CoV) first identified in 2019. COVID-19 has subsequently emerged as a global pandemic and is a significant threat to public health and safety [2]. As of 17 March 2021, >120 million confirmed cases had been reported, along with >2.5 million deaths globally [3].
The SARS-CoV-2 virus gains access to human cells through binding to cellular angiotensin-converting enzyme 2 (ACE2) receptors via the receptor-binding domain of the spike (S) protein. This further requires priming by the cellular transmembrane protease, serine 2 (TMPRSS2) [7,8]. Following infection through exposure to respiratory droplets, the incubation period is usually 3-7 days [8]. The most frequent symptoms reported include fever (>75%), cough (>60%), fatigue (>25%), dyspnoea (>20%) and sputum production (>18%). Other less frequent but common symptoms include anorexia, myalgia, pharyngitis, rhinitis and diarrhoea [9][10][11]. COVID-19 pathophysiology is considered a two-phase disease. The first phase is characterised by increased viral transmission rates and replication, particularly through the respiratory tract, due to the high expression of ACE2 and TMPRSS2. Phase 2 is a host-specific (including gender- and age-specific) phase characterised by a hyperinflammatory response, hypercytokinaemia and collateral tissue damage that may result in acute respiratory distress and systemic failure [7,8]. Increased risk for phase 2 complications includes older age, obesity, hypertension, diabetes and chronic respiratory disease [12].
Although epidemiological data suggests that men may have an increased risk of COVID-19-related morbidity and mortality [13][14][15][16][17][18][19], the potential impact on male reproduction and any mechanisms remain poorly understood. Numerous review articles have postulated the potential of SARS-CoV-2 to infect male reproductive tissues due to the presence of ACE2 receptors. This has led to the hypothesis that SARS-CoV-2 may gain access to the reproductive tract, may be sexually transmitted, and that COVID-19 may directly or indirectly impair reproductive function and/or disrupt the hypothalamus-pituitary-testis axis [20][21][22][23][24][25].
As COVID-19 remains an ongoing public health challenge, an increasing number of studies are investigating its impact on testicular function and reproductive hormones. Therefore, the present systematic review aims to provide a concise update on the effects of COVID-19 on male reproductive health, focusing on the presence of viral RNA in semen, semen quality, testicular histology, testicular pain and male reproductive hormones.
Methods
The literature search was conducted based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [26]. The PubMed database was searched (last updated search on 7 February 2021) using the following keyword string adapted from Khalili et al. (2020) [27]: ('severe acute respiratory syndrome-coronavirus 2' OR 'severe acute respiratory syndrome coronavirus 2' OR 'COVID-19' OR 'SARS-CoV-2' OR 'SARS CoV2' OR 'SARS CoV 2') AND ('semen' OR 'sperm*' OR 'seminal' OR 'testes' OR 'testicular' OR 'male fertil*' OR 'male infertil*' OR 'epididymis' OR 'prostate' OR 'testosterone' OR 'LH' OR 'FSH' OR 'cryopreservation'). Inclusion criteria were original human studies evaluating the impact of COVID-19 on male reproductive health. Specific outcomes required for inclusion were at least one of the following: i) seminal detection of viral RNA, ii) semen analysis, iii) testicular histology or ultrasonography (US), iv) testicular clinical symptoms, and/or v) male reproductive hormones (testosterone, LH and/or FSH). Exclusion criteria were narrative or systematic reviews and meta-analyses, pre-clinical studies, and non-English studies. Retrieved articles were manually screened based on titles and abstracts, and subsequently verified for eligibility by two authors (P.S. and K.L.); any disputes were settled by a third author (A.A.).
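For reproducibility, such a search can be scripted. The sketch below is an illustration using Biopython's Entrez utilities, which the authors do not state they used; the e-mail address is a placeholder required by NCBI, and the term string follows the keyword string above (PubMed phrase syntax uses double quotes).

```python
from Bio import Entrez

Entrez.email = "[email protected]"  # placeholder contact required by NCBI

term = (
    '("severe acute respiratory syndrome-coronavirus 2" OR "severe acute '
    'respiratory syndrome coronavirus 2" OR "COVID-19" OR "SARS-CoV-2" OR '
    '"SARS CoV2" OR "SARS CoV 2") AND ("semen" OR "sperm*" OR "seminal" OR '
    '"testes" OR "testicular" OR "male fertil*" OR "male infertil*" OR '
    '"epididymis" OR "prostate" OR "testosterone" OR "LH" OR "FSH" OR '
    '"cryopreservation")'
)

handle = Entrez.esearch(db="pubmed", term=term, retmax=600)
record = Entrez.read(handle)
print(record["Count"])        # number of hits (553 at the time of the search)
print(record["IdList"][:5])   # first few PubMed IDs for manual screening
```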
Results
A total of 553 articles were retrieved following the PubMed database extraction. Of these, 523 articles were removed following screening of titles and abstracts. A total of 30 full-text articles were assessed for eligibility, with five articles removed as they were review articles (three) or not a COVID-19 cohort (two). Therefore, a total of 25 articles were included in this review (Figure 1). These were further classified based on the primary objective of the study: presence of viral RNA in semen (12; Table 1), semen quality (two; Table 2), testicular histology (five; Table 3), testicular pain (two; Table 4), and male reproductive hormones (four; Table 5).
SARS-CoV-2 in human semen
This review retrieved 12 studies that primarily investigated the presence of SARS-CoV-2 RNA in human seminal fluid, with only two studies detecting the virus in semen (Table 1). Among 38 male patients (aged >15 years) who tested positive for COVID-19 at Shangqiu Municipal Hospital, SARS-CoV-2 RNA was detected in the semen samples of four acute-stage patients and two recovery-phase patients [28]. Gacci et al. [29] (2021) reported that only one patient in a cohort of 43 sexually active men recovered from COVID-19 had evidence of SARS-CoV-2 RNA in seminal fluid, and he subsequently tested negative on a nasopharyngeal test the next day. In an additional study that included a sub-analysis investigating the impact of COVID-19 on semen analysis in 12 SARS-CoV-2-positive patients, Ma et al. [30] (2021) reported the presence of viral RNA in one case of mild infection, with the remaining 11 cases (moderate infection) being negative. Achua et al. [31] (2021) reported testicular viral spike protein particles on microscopy in one of four COVID-19-positive case autopsies and in a sample obtained from one living COVID-19-positive patient.
The remaining studies did not detect the virus in seminal fluid. Kayaaslan et al. [32] (2020) investigated the semen of 16 hospitalised patients (median age 33.5 years) with SARS-CoV-2 infection confirmed on nasopharyngeal swabs in a cross-sectional cohort study, with 11 semen samples collected within 1 day of a positive diagnosis and all samples within 7 days. In these acutely ill patients, no viral RNA was detected in the semen. Similarly, no viral RNA was detected in seminal fluid in a case-controlled study of 74 men (median age 34 years) who had been hospitalised, confirmed positive for COVID-19 and were subsequently recovering. The authors reported that 14.9% were asymptomatic (mild), 41.9% were classified as moderate, and 43.2% had severe pneumonia [33]. In 34 Chinese male patients (median age 37 years) recovering from COVID-19, the virus was not detected at a median of 31 days after a confirmed nasopharyngeal diagnosis [34]. Another case-controlled cross-sectional study that investigated confirmed and probable COVID-19 cases before (10) and after treatment (10) compared to controls (10) (mean [SD] age for the entire cohort 37.2 [8.6] years) also found no evidence of SARS-CoV-2 RNA in seminal fluid [35]. Similarly, Song et al. [36] (2020) reported no viral RNA in the semen of 12 men (aged 22-38 years) recovering from COVID-19 (two negative tests following a positive diagnosis) in Wuhan, China, where 11 patients had mild-to-moderate symptoms and one was asymptomatic. In a case-controlled cohort study that included 16 patients recovering from COVID-19 (mean [SD] age 42.2 [9.9] years), two patients with acute COVID-19 and 14 control patients (mean [SD] age 33.4 [13.1] years) who were negative for SARS-CoV-2, no viral RNA was found in the seminal fluid of any participant [37]. A small cohort of six men (aged 28-45 years) who were diagnosed with COVID-19 based on symptoms and a positive nasopharyngeal swab were negative for seminal viral RNA, while still positive for saliva and nasal swab viral RNA [38]. In a cohort of nine Italian men (median age 42 years) positive on nasopharyngeal swab with a diagnosis of mild (eight) or asymptomatic (one) COVID-19 that did not require hospitalisation, none of the semen samples showed the presence of viral RNA [39]. Similarly, no viral RNA was identified in the seminal fluid of 23 men (aged 20-62 years) within the acute phase or convalescence period of COVID-19. The median time from diagnosis to the semen sample was 32 days. Among these 23 men, the virus had been cleared in 11 at the time of semen analysis, with 12 men having positive viral RNA in sputum or faecal specimens [40]. This is supported by a case study by Paoli et al. [41] (2020) of a 31-year-old man presenting with fever, myalgia, anosmia, and ageusia, who tested positive for SARS-CoV-2 on pharyngeal swab, yet tested negative for viral RNA in semen and urine 8 days later. Furthermore, in a study that investigated testicular samples on autopsy, Flaifel et al.
[42] (2021) reported no evidence of viral RNA in the testes of 10 patients who had succumbed to COVID-19 while remaining positive on nasopharyngeal tests. This is supported by a postmortem investigation of a 67-year-old male, which reported a negative test on the testis samples studied [36].
SARS-CoV-2 and semen quality
Two studies were found that primarily investigated the impact of SARS-CoV-2 infection on semen quality, both reporting a negative impact on sperm quality (Table 2). One of these studies [37] (2020) reported that moderate COVID-19 infection significantly reduced sperm concentration, progressive motility, and the total number of completely motile sperm compared to men recovered from a mild infection and the control group. Semen parameters were also found to be impacted in a case-controlled study of 74 men (median age 34 years) recovering from a confirmed infection, where 14.9% were asymptomatic (mild), 41.9% were classified as moderate, and 43.2% had severe pneumonia. Sperm concentration, total sperm count, and total motility were affected, but not volume or progressive motility. Only sperm concentration showed a correlation with disease severity, and men with longer recovery times (>90 days) also had significantly lower sperm concentration than those with shorter recovery (<90 days) [33]. Of 23 men (aged 20-62 years) in the acute or convalescence stage of COVID-19, all had total sperm count, total motility and morphology within normal parameters [40]. In a case-controlled cross-sectional study that compared confirmed and suspected COVID-19 cases pre-treatment (n = 10) and post-treatment (n = 10) to a healthy control group (n = 10), no significant differences in semen parameters were reported. However, normal sperm morphology was found to be reduced in the pre-treatment group compared to the control, but this was not observed post-treatment [35]. Gacci et al. [29] (2021) reported, in a cross-sectional cohort of sexually active men confirmed to have recovered from COVID-19 (n = 43), that 25% were found to be oligo-crypto-azoospermic, that this was related to the severity of COVID-19 infection, and that these rates exceeded those found in the average population. Additionally, this study reported increased seminal IL-6 in 76% of the cohort, further suggesting that reproductive tract inflammation may be associated with COVID-19 infections in males. Furthermore, Ma et al. [30] (2021) reported a cohort of 12 COVID-19 male patients, where 66.7% of the patients had normal sperm parameters and low sperm DNA fragmentation (SDF), whereas 33.3% of the patients had low sperm motility with higher SDF.
SARS-CoV-2 on testicular histology
The present review identified five studies that reported testicular histology or US outcomes in patients infected with COVID-19 (Table 3).
Flaifel et al. [42] (2021) investigated testis and epididymis specimens from 10 patients (age range 23-83 years) who had succumbed to COVID-19 following positive nasopharyngeal swabs for viral RNA at the time of admission to hospital. Each patient had one or more comorbidities, including type 2 diabetes mellitus and hypertension, and remained positive for viral RNA in the nasopharyngeal tract, but not in the testicular samples, at autopsy. The authors found that seven COVID-19 samples had morphological changes consistent with oxidative stress, including altered chromatin condensation, acidophilic cytoplasm, and DNA fragmentation. Spermatocytes were also found to be sloughed into the tubules of the epididymis. Furthermore, the Sertoli cells showed cellular swelling and vacuolisation. In cases of longer disease duration, hypertrophy of the tubular basement membrane and reduced intratubular cell mass were reported. Interestingly, two cases also showed signs of multiple microthrombi, associated with increased platelets in the testicular tissues. Only one case showed signs compatible with orchitis, namely an influx of mononuclear leucocytes, dominated by cluster of differentiation 8 (CD8)-positive cells, into the interstitial space.
Ma et al. [45] (2021) examined testicular tissue from five males who had died from COVID-19 complications (age range 51-83 years) and compared these to testicular samples from non-COVID-19 cases (age range 71-80 years). All five COVID-19 patients showed degenerated germ cells (GCs) sloughed into the seminiferous tubules, whereas the controls showed normal GC development within the seminiferous tubules. Two patients showed almost no GCs, resembling Sertoli cell-only syndrome. The patients with COVID-19 also showed reduced numbers of DDX4-positive GCs; DDX4 is needed for the regulation of GC proliferation and differentiation. However, no difference in Sertoli cells was found between patients and controls. SARS-CoV-2 may therefore impair GC development without affecting the Sertoli cells.
Achua et al. [31] (2021) analysed testicular specimens on autopsy from six COVID-19-positive males (age range 20-87 years) compared to three COVID-19-negative controls (age range 28-77 years), with all patients having relevant comorbidities. Three of the COVID-19 cases showed impaired spermatogenesis. Only one case showed infiltration of macrophages and lymphocytes within the testicular tissues; all other cases did not show any signs of inflammation. Interestingly, in the three patients with COVID-19 and normal spermatogenesis, ACE2 receptor expression was significantly lower than in the patients with COVID-19 and impaired spermatogenesis. The patients with impaired spermatogenesis and increased ACE2 receptor expression also showed Sertoli cell-only syndrome, early maturation arrest, and sclerosis of the seminiferous tubules.
Yang et al. [46] (2020) analysed the testes of 11 patients on autopsy who had died from COVID-19 (age range 42-87 years), where 10 of these patients had a history of fever and 10 received low-dose steroidal therapy. These were compared to five uninfected control autopsy cases negative for COVID-19. Sertoli cells were most notably affected by 'ballooning' morphological changes, with swelling and hydrolysation alongside detachment from the basement membrane. These findings were classified as mild, moderate, and severe injury in 18.2%, 45.5%, and 36.4% of the 11 cases, respectively, compared to controls, which showed no injuries or only mild tubular derangements. In addition, the Leydig cell count in COVID-19 cases was significantly reduced compared to the controls. Furthermore, there was oedema and a mild inflammatory cellular influx.
Chen et al. [47] (2020), in a retrospective study of a cohort of 142 confirmed COVID-19-positive patients (age range 24-93 years), examined US reports made at the bedside. These cases were classified as mild or moderate (41.9%) and severe or critical (58.5%), with the latter group significantly older and more likely to have coronary heart disease or chronic obstructive pulmonary disease than the mild/moderate group. Acute orchitis was present in 10% of all patients, with 7% presenting with acute epididymitis and 15% presenting with acute epididymo-orchitis. Additional manifestations observed across all patients were a thickened tunica albuginea (25.4%), increased testicular vascular flow (20.4%), increased epididymal vascular flow (16.9%), heterogeneous echogenicity of the testis (9.9%), scrotal swelling (8.5%), enlargement of the epididymis (7.7%), hydrocele (7.7%), enlargement of the testis (7%), heterogeneous echogenicity of the epididymis (5.6%), and abscesses in the epididymis (2.8%). Severe and critical cases had a significantly increased percentage of patients with thickened tunica albuginea, heterogeneous echogenicity of the testis, increased testicular and epididymal vascular flow, and epididymal abscesses compared to mild and moderate cases. Importantly, there was an increased risk of scrotal infection within the older age group of the cohort.
Additional information is reported in a study categorised under semen quality, where Li et al. [43] (2020) reported testicular damage in six age-matched case-controlled autopsies of testicular tissue. This included inflammatory infiltration into testicular and epididymal tissue, and signs of oedema, congestion and red blood cell exudates. The seminiferous tubules were thinned, an increased number of apoptotic cells was present, and there were increased CD3+ and CD68+ cells in the interstitium of the testicular tissue, as well as the presence of immunoglobulin G (IgG) within the seminiferous tubules of the autopsied testes. Furthermore, Song et al. [36] (2020) reported no detection of SARS-CoV-2 viral RNA in the testis of an autopsied 67-year-old male who had succumbed to COVID-19.
SARS-CoV-2 on testicular clinical presentations
Two studies primarily reported testicular clinical symptoms in COVID-19 infections (Table 4). Patients aged between 18 and 75 years diagnosed with COVID-19 (n = 91) were evaluated for testicular pain or epididymo-orchitis. Here, 11% of patients presented with pain, and only one was clinically diagnosed with epididymo-orchitis. Subsequent comparison of patients with (n = 10) or without (n = 81) testicular pain showed no difference in neutrophil count, lymphocyte count, C-reactive protein (CRP), D-dimers or duration of COVID-19 infection between these groups; lymphopenia, but not testicular pain or swelling, was significantly more common in older patients [48]. Kim et al. [49] (2020) reported the case of a 42-year-old male presenting to the emergency room with severe abdominal and testicular pain, without respiratory symptoms, who was subsequently diagnosed with COVID-19. In a cohort of 34 male patients recovering from COVID-19, Pan et al. [34] (2020) reported a prevalence of 19% for symptoms indicative of orchitis.
SARS-CoV-2 infection and male reproductive hormones
Four studies primarily reported reproductive hormone outcomes in COVID-19 infections (Table 5). Kadihasanoglu et al. [50] (2021) compared reproductive hormones in male patients with COVID-19 to those in males with non-COVID-19 infections and in controls (Table 5). The COVID-19 cohort was described as mild (52.8%), moderate (33.7%) and severe (13.5%). Testosterone was significantly lower, and LH and prolactin were higher, in the COVID-19 cohort compared to the non-COVID-19 infection and control groups, with no differences between groups for FSH. Furthermore, patients with COVID-19 had a significantly lower white blood cell count (WBC) and lymphocyte count and a higher CRP compared to controls, whereas WBC and lymphocytes, but not CRP, were significantly increased in patients with COVID-19 compared to non-COVID-19 patients. These variables were associated with the severity of disease and were significantly worse in patients classified as severe COVID-19. Testosterone also negatively correlated with hospital duration and positively correlated with oxygen saturation in patients with COVID-19, but not in the non-COVID-19 infection group. No correlations were found between testosterone and neutrophil count, lymphocyte count or CRP. In four patients who succumbed to COVID-19, the testosterone levels were below the median for the COVID-19 cohort [50].
Ma et al. [30] (2021) investigated 119 male patients (age range 20-49 years) for reproductive hormones, where all patients had a stable clinical status during the study in the hospital setting. These were compared to 273 age-matched controls (age range 24-49 years) who were undergoing reproductive hormonal analysis for fertility prior to marriage or when planning parenthood. The COVID-19 cohort was described as mild (84.0%), moderate (11.8%) and severe (1.96%), and case management included the use of corticosterone (14.8%), arbidol (45.3%), oseltamivir (33.6%), or intravenous antibiotics (56.3%). In 39% of the cases, liver function was impaired (elevated alanine aminotransferase [ALT] and aspartate aminotransferase [AST]). The reproductive hormonal analysis showed that LH was significantly higher in the COVID-19 cohort compared to the control group, whereas no differences in testosterone, FSH and oestrogen levels were found between groups.
Okçelik et al. [51] (2020) studied 44 patients (mean [SD] age 35.5 [9.9] years) in a COVID-19 outpatient clinic. Testosterone, LH and FSH were not significantly different in patients who tested positive for COVID-19 (n = 24) compared to negative patients (n = 20). In the 42 patients who had a chest CT scan, COVID-19 pneumonia was diagnosed in 23. Testosterone was significantly lower in the pneumonia group compared to those without pneumonia.
Rastrelli et al. [52] (2020) investigated 31 patients recovering from SARS-CoV-2 pneumonia in the respiratory intensive care unit of Carlo Poma Hospital, Mantua, Italy. At the time of analysis, 67.7% had improved in clinical presentation (age range 55-66 years), 19.4% remained stable (age range 33-83 years) and 12.9% had worsened or died (age range 59-85 years). Total and free testosterone were significantly lower, and LH significantly higher, in the severe/deceased group compared to the other groups. These patients also had significantly increased CRP, procalcitonin and neutrophil counts with lower lymphocytes. The authors concluded that lower baseline levels of testosterone predict mortality outcomes in patients with SARS-CoV-2 infection in intensive care units.
Discussion
Initial fears of a negative impact of SARS-CoV-2 infection on male fertility resulted in many couples delaying pregnancy and fertility treatment, although many have now resumed their pursuit of parenthood and are accepting the uncertainty [22]. Guidelines for assisted reproductive technology (ART) management have also been issued by professional societies such as the American Society for Reproductive Medicine (ASRM), the European Society of Human Reproduction and Embryology (ESHRE) and the International Federation of Fertility Societies (IFFS) [53], with concerns raised about the impact of the virus on fertility and its possible transmission during ART [54,55]. This has included a recommendation for gonadal function assessment and semen analysis in males of reproductive age who have recovered from COVID-19 [56]. There is a further concern that SARS-CoV-2 may be transmitted sexually, or via ART [22,23,25], which would require the presence of the virus in seminal or vaginal fluids [57].
SARS-CoV-2 gains access to cells via ACE2 receptor binding with TMPRSS2 priming [7,8]. The testes are reported to express ACE2, particularly in the spermatogonia, Leydig cells and Sertoli cells, which has led to the hypothesis of SARS-CoV-2 gaining access to testicular tissues and seminal fluid, directly affecting male reproductive health [58]. TMPRSS2 has been reported to be present in prostate epithelial cells, where its expression is regulated by androgens, and to be present in semen as a component of prostasomes [59,60]. These prostasomes, or 'exosome-like structures', reportedly may incorporate TMPRSS2 into the sperm membrane [60]. Therefore, although ACE2 is expressed in male reproductive tissues, there is generally a lack of co-expression of the TMPRSS2 modulatory protein except in prostatic epithelial cells [57,59], and the co-occurrence of ACE2 and TMPRSS2 in male reproductive tissues is considered relatively low [24].
Although ACE2 receptors are expressed in male reproductive tissue, specifically the testis [58], and TMPRSS2 has been reported to be present in prostate epithelial cells and in semen [59,60], it is still not clear whether SARS-CoV-2 gains direct access to the male reproductive tract. In an early systematic review, Khalili et al. [27] (2020) reported that the virus may be present in seminal fluid, with limited evidence of reduced semen parameters and testosterone. Yao et al. [61] (2021) concluded that there is limited evidence for the virus in seminal fluid; however, fatal cases appear to show damaged testicular structures in the absence of the virus. A subsequent systematic review investigating the viral presence of SARS-CoV-2 in the male and female reproductive tracts, covering semen, testicular biopsies, prostatic fluid, vaginal fluids, and oocyte samples, concluded that the virus is unlikely to be a sexually transmitted virus. However, induction of orchitis, or an impact on spermatogenesis, semen analysis findings or testosterone levels, is still possible [57]. These conclusions are generally consistent with the results of this systematic review, where there is little evidence suggesting viral presence in the seminal fluid or reproductive tract. There is also little evidence to suggest SARS-CoV-1 is present in seminal fluid [62][63][64]. Therefore, the evidence for sexual transmission is also currently lacking, particularly as the presence of viral nucleic acids in seminal fluid does not equate to an infectious intact virus [64].
Although 27 viruses, including hepatitis B and C, adenovirus, HIV, mumps, Ebola, Zika and numerous herpes-family viruses, have been reported to gain access to semen through viraemia [62][63][64], the seemingly low risk of SARS-CoV-2 presence in seminal fluid may be due to the low levels of both ACE2 and TMPRSS2 expressed in the testis [24]. Pan et al. [34] (2020) suggested that ACE2-mediated viral entry into target cells is unlikely to occur within the human testis, as single-cell transcriptome analysis demonstrates sparse expression of ACE2 and TMPRSS2. Further supporting evidence comes from a retrospective cohort study using cryopreserved semen samples obtained from young healthy adult sperm donors at the Hunan Province Human Sperm Bank (China), in which paired blood and semen samples from 50 donors during the first wave of the pandemic and from 50 donors collected upon work resumption were analysed. This study reported that all samples from during and after the pandemic wave were free of SARS-CoV-2 and safe for external use [65].
The 27 viruses detected in human semen come from diverse families, suggesting that the mechanisms for viral access may not be unique to specific viral epitopes. Non-specific mechanisms could involve serum viral load, inflammatory mediators altering the blood-testis barrier (BTB), testicular immunosuppression that may protect viruses from testicular immune surveillance, and pyrexia [54,62,64,66]. The few accounts of the presence of SARS-CoV-2 may be explained by a BTB breach, which may also account for subsequent damage to GCs and testicular interstitial tissues [24]. However, there are still too few studies for firm conclusions in COVID-19, and the available studies have small sample sizes and are mostly confined to men with mild-to-moderate symptoms.
Reproductive tract inflammation and infections are causes of male infertility, most prominently bacterial infections, environmental causes and autoimmunity [66]. This may also include systemic autoimmune disease and the chronic asymptomatic inflammation associated with obesity, metabolic syndrome and diabetes mellitus [67,68]. It is known that viral infections, including adenovirus, herpes simplex virus, HIV, and hepatitis B and C, may negatively affect fertility [63,[69][70][71]. More recently, it was reported that the Zika virus may also alter semen parameters and reduce male fertility [63,64]. Although there remains a paucity of studies, there is current evidence for impaired sperm parameters in COVID-19, mostly in males recovering from infection. This is consistent with the review conducted by Meng et al. [72] (2020), suggesting that SARS-CoV-2 can cause spermatogenesis dysfunction and impaired sperm parameters. However, as there is little evidence suggesting the presence of the virus in reproductive tissues, this is most likely mediated by non-specific mechanisms. The virus-induced damage is proposed to occur through changes to testicular structure and/or function, immune cell infiltration into the testicular compartment, a systemic inflammatory response, pyrexia, reduced testosterone, and potential incorporation of viral genetic material into the GC genome [63,66,69].
Although inflammatory cytokines such as TNF-α, IL-1β, and IL-6 have physiological functions, the increased levels seen with inflammation can negatively impact spermatogenesis, and inflammation is also typically associated with oxidative stress [67,73]. Acute or chronic inflammation of the reproductive tract is also a common finding in the infertility assessment of males [67,73].
Acute febrile illness induced in rats resulted in impaired spermatogenesis and increased GC apoptosis that was not consistent with hormonal changes, suggesting a direct impact of the immune response [74]. GCs have also been shown to have receptors for TNF, IL-1 and IL-6, supporting a direct action of these cytokines in the induction of apoptosis [75]. Furthermore, febrile illness due to viral infection, including influenza and SARS-CoV-1, is known to negatively impact sperm parameters, including concentration, motility, and morphology [63]. Based on the limited evidence available, it is likely that the inflammation and oxidative stress associated with COVID-19 may negatively impact spermatogenesis. This is supported by histological studies suggesting inflammatory and oxidative stress-mediated damage. Therefore, clinical investigation of males recovering from COVID-19 may be warranted, as well as consideration in the clinical assessment of infertile males. Furthermore, the current evidence may warrant the consideration of cryopreservation in ART patients at high risk of SARS-CoV-2 exposure or of a severe clinical course of COVID-19, particularly as long-term reversibility is not yet established.
Numerous viruses that cause viraemia have been associated with orchitis, including influenza, Coxsackie B, SARS-CoV-1, rubella, smallpox, echovirus, and parvovirus [62][63][64]. Mumps is arguably the most well-known viral disease that can be complicated by orchitis in males, which may cause testicular atrophy and infertility [76]. Furthermore, orchitis was found to be a complication of SARS-CoV-1 viral infections [77]. The limited results of the present review suggest that orchitis is a possible complication of COVID-19. Viral orchitis is associated with infiltration of neutrophils, macrophages and T and B lymphocytes, degeneration of the germinal epithelium with few to no spermatogonia and Sertoli cells, lamina propria hypertrophy and fibrosis of the tubules; however, Leydig cells generally do not show signs of cellular injury [66]. Orchitis and histological damage to testicular tissues in COVID-19 may further be mediated by vasculitis, as hypercoagulability and the segmental vascularisation structure of the testis may promote inflammation [56]. This is consistent with evidence from testicular autopsy in patients with severe phase 2 infection who ultimately died. Similar findings were reported for SARS-CoV-1 autopsy specimens of patients (n = 6) who had died, where the testes also showed GC destruction, few spermatozoa in the seminiferous tubules, basement membrane hypertrophy, leucocyte infiltration (CD3+ T lymphocytes and CD68+ macrophages) and IgG precipitation in the seminiferous epithelium. This occurred without detection of SARS-CoV-1 genomic sequences in the samples, further supporting immune-mediated damage to testicular tissues [77].
Although some studies suggest a female predominance or no sex disparity, there is increasing evidence of a male predominance in COVID-19 morbidity and mortality [10,12]. In a meta-analysis, Peckham et al. [13] (2020) reported no difference between males and females in confirmed infections, but males had a 2.84-fold increased risk of intensive care treatment and a 1.39-fold increased risk of death. Similar sex differences in COVID-19 fatality have been reported in data from Europe, China and Korea [14][15][16][17]. Moreover, observational studies further report that COVID-19 severity, intensive care admission and mortality rates are increased in males [18,19]. Similar findings were obtained in retrospective data retrieved from patients with SARS-CoV-1 [16]. Males may also shed the SARS-CoV-2 virus for a longer duration than females [78]. This has led to the suggestion that androgens may be involved in the pathogenesis of COVID-19.
Testosterone is known to be reduced in comorbidities associated with COVID-19, including ageing, obesity, diabetes and chronic obstructive pulmonary disease [79]. Hypogonadism is further associated with increased inflammatory cytokines, and testosterone reduces IL-1β, IL-6, and TNF-α, an effect accentuated in ageing. Although only four studies investigating androgens in patients with COVID-19 are included, the results of this review suggest that testosterone may be reduced in male patients with COVID-19 compared to controls, and that this is negatively associated with CRP as a systemic marker of inflammation. Furthermore, in a non-peer-reviewed article, Schroeder et al. [80] (2020) reported that 68.6% of males with COVID-19 in intensive care units (Hamburg, Germany) had lower serum testosterone levels and 48.6% had lower dihydrotestosterone levels, which were negatively correlated with severity and death. Lower testosterone was negatively correlated with IL-2 and IFN-γ, and oestradiol positively correlated with IL-6, as did disease severity. Reduced testosterone may therefore be involved in the pathogenesis of the cytokine storm and the complications of phase 2 COVID-19 [79]. Reduced testosterone may further weaken the respiratory muscles and reduce lung function parameters such as forced expiratory volume [79]. Female patients, however, may be generally less susceptible to viral infections through a better immune response, owing to the activity of oestrogen in activating T cells, a reduced expression of inflammatory cytokines, and increased antibody formation in the B-lymphocyte humoral immune response [12]. Male patients also have higher expression of ACE2 in the renal system compared to females, which may further explain a gender disparity [12].
Conclusion
There is little evidence suggesting SARS-CoV-2 viral presence in the male reproductive tract or that sexual transmission may occur. However, COVID-19 infections may have a negative impact on spermatogenesis and male fertility. The current evidence suggests that non-specific mechanisms associated with the systemic and local reproductive tract immune response to the SARS-CoV-2 virus could explain this impact. This may also be associated with orchitis and related testicular ultrasonography changes. COVID-19 may further decrease testosterone, which in turn exacerbates the inflammatory response. Clinical evaluation of the male reproductive tract, seminal parameters and reproductive hormones is recommended in men undergoing fertility treatment. Any long-term negative impact on male reproduction remains unexplored and requires further future consideration.
Disclosure Statement
No potential conflict of interest was reported by the authors. | 2021-08-13T13:09:38.808Z | 2021-07-03T00:00:00.000 | {
"year": 2021,
"sha1": "17cc66799ce61947e544cba88eaeff21b2cac662",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/2090598X.2021.1955554?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "810a7fa51f606f67d14e26e7dd774e2c7b1d4c8d",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
60671366 | pes2o/s2orc | v3-fos-license | Three-Dimensional Digital Colour Camera
The digital colour camera has become a popular consumer device, is widely used in daily life, and is highly suited for the next generation of cellular phones, personal digital assistants and other portable communication devices. The main applications of the digital colour camera are taking digital pictures for personal use, picture editing and desktop publishing, high-quality printing, and image processing for advanced academic research. The digital picture is a two-dimensional (2D) form with three colour components (Red, Green, Blue). Most of the efforts of digital camera producers focus on improvements in image compression, image quality, image resolution, and optical/digital zooming. However, considering the wide field of computer vision, the depth of a captured object can be another very useful and helpful piece of information, for example in surface measurement, virtual reality, object modelling and animation; it is therefore valuable if depth information can be obtained while a picture is being captured. In other words, three-dimensional (3D) imaging devices promise to open a very wide variety of applications, particularly those involving a need to know the precise 3D shape of the human body, e.g. e-commerce (clothing), medicine (assessment, diagnosis), anthropometry (vehicle design), postproduction (virtual actors) and industrial design (workspace design) (Siebert & Marshall, 2000). To achieve this significant function, a novel 3D digital colour camera has been successfully developed by the Industrial Technology Research Institute, Opto Electronics & Systems Laboratories (ITRI-OES), in Taiwan. In this article, the previous works, algorithms, structure of our 3D digital colour camera, and 3D results will be briefly presented. To obtain 3D information about a given object, approaches range between passive schemes and active schemes. The widely known passive scheme is stereovision, which is useful for measuring surfaces with well-defined boundary edges and vertexes. An algorithm to recognize singular points may be used to solve the problem of correspondence between points on both image planes. However, the traditional stereoscopic system becomes rather inefficient for measuring continuous surfaces, where there are not many reference points. It also has several problems with textural surfaces or surfaces with many discontinuities. Under such conditions, the abundance of reference points can produce
matching mistakes. Thus, an active system based on a structured-light concept is useful (Siebert & Marshall, 2000; Rocchini et al., 2001; Chen & Chen, 2003). In our 3D camera system, the constraint that codifies the pattern projected on the surface has been simplified by using a random speckle pattern, and the correspondence problem can be solved by a local spatial-distance computation scheme (Chen & Chen, 2003) or a so-called compressed image correlation algorithm (Hart, 1998). In our original design, the 3D camera system includes a stereoscopic dual-camera setup, a speckle generator, and a computer capable of high-speed computation. Figure 1(a) shows the first version of our 3D camera system, including two CCD cameras requiring a distance of 10 cm between their lenses, and a video projector, where the random speckle pattern in Fig. 1(b) is sent from the computer and projected via the video projector onto the measured object. Each of the two cameras takes a snapshot from its own viewpoint and can perform simultaneous colour image capture. A local spatial-distance computation scheme or a compressed image correlation (CIC) algorithm then finds specific speckles in the two camera images. Each of the selected speckles has its position recorded twice, once in each image. After establishing the statistical correlation of the corresponding vectors in the two images, the 3D coordinates of the spots on the object surface are obtained by 3D triangulation. Neither traditional stereoscopic systems (Siebert & Marshall, 2000; Rocchini et al., 2001) nor the above-mentioned system (Chen & Chen, 2003) is easy to use in a friendly and popular manner, owing to large system scale, complicated operation, and high cost. Hence, to achieve the valuable features (portability, easy operation, inexpensiveness) possessed by a 2D digital camera, we present a novel design which can be applied to a commercial digital still camera (DSC) and makes the 2D camera able to capture 3D information (Chang et al., 2002). The proposed 3D hand-held camera (the second version of our 3D measurement system) contains three main components: a commercial DSC (Nikon D1 camera body), a patented three-hole aperture lens (Huang, 2001; Chen & Huang, 2002), and a flash. The flash projects the speckle pattern onto the object and the camera captures a single snapshot at the same time. Accordingly, our 3D hand-held camera design, integrating the speckle-generating projector and the colour digital camera, allows the system to be moved around freely when taking pictures. The rest of this article is organized as follows. Section 2 briefly reviews our previous works. Section 3 presents algorithms for improving 3D measurements. The structure of our novel 3D camera is described in Section 4. Finally, a conclusion is given in Section 5. Because the obtained 3D information should be visualized for use, all the 3D results are currently manipulated and displayed by our TriD system (TriD, 2002), a powerful and versatile modelling tool for 3D captured data developed by ITRI-OES in Taiwan.
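For a rectified parallel-camera pair, the depth recovery behind the triangulation step reduces to Z = f·B/d, where B is the baseline, f the focal length in pixels and d the disparity of a matched speckle. The fragment below is an illustrative simplification, not the system's actual calibration code: the 10 cm baseline is from the text, the focal length and disparity values are made up, and a real system additionally handles lens distortion and calibration.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m=0.10):
    """Depth of a matched speckle from its disparity between the two views."""
    return focal_px * baseline_m / disparity_px

# A speckle matched with a 40-pixel disparity under a 1200-pixel focal length
# lies 3.0 m from the camera pair:
print(depth_from_disparity(40, 1200))  # -> 3.0
```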
Previous works
Our original 3D measurement system, shown in Fig. 1(a), includes two CCD cameras and a video projector; the random speckle pattern shown in Fig. 1(b) is sent from the computer and projected via the video projector onto the object to be measured. In this system, to solve the correspondence problem of measuring a 3D surface, the random speckle pattern was adopted to simplify the constraint that codifies the pattern projected on the surface, and a spatial distance computation technique was applied to find the correspondence vector (also referred to as the correlation vector later in this article).
To perform the correspondence vector finding task effectively, binarization of the captured image is used in our 3D system developments. Our adaptive thresholding method for binarization is as follows. Let a grey block image be defined as G, of size m × m. The correspondence problem is based on local matching between two binary block images; it is therefore important to determine the threshold value TH for obtaining the binary block image B.
To overcome the uneven-brightness and out-of-focus problems arising from the lighting environment and the different CCD cameras, brightness equalization and image binarization are used. Let m² be the total number of pixels of a block image, and let cdf(z), z = 0–255 (the grey value index, each pixel being quantized to 8 bits), be the cumulative distribution function of G. A threshold controlled by the percentile p = 0–100% is then defined as

TH_p = min{ z : cdf(z) ≥ p · m² }.

Thus, for a percentile p, each grey block image G has a threshold value TH_p, giving its corresponding binary block image B:

B(x, y) = 1 if G(x, y) ≥ TH_p, and 0 otherwise,

where 1 and 0 denote the nonzero (white) pixel and the zero (black) pixel, respectively. Note that the higher the p, the smaller the number of nonzero pixels.

In our previous work, the distance computation approach for finding correspondence vectors can be described briefly as follows. For a block located at (x_0, y_0) in the left-captured image, a search starting at a location (u_0, v_0) in the right-captured image is confined to the range u_0 ∈ [x_0 − x_R, x_0 + x_R] (and similarly for v_0); if the best local match ends at (u_f, v_f), the vector from (x_0, y_0) to (u_f, v_f) is defined to be the found correspondence vector. Because the corresponding information used in a stereoscopic system is usually represented at the subpixel level, in this version of the 3D system a simple averaging over an area A of size w × w containing the found correspondence results (u_f, v_f) is used to obtain the desired subpixel coordinate (u_f*, v_f*):

u_f* = (1/|A|) Σ_{(u_f, v_f) ∈ A} u_f,   v_f* = (1/|A|) Σ_{(u_f, v_f) ∈ A} v_f.

More details of measuring a 3D surface using this distance computation scheme can be found in the literature (Chen & Chen, 2003). A result is given in Fig. 2, which shows the reconstructed 3D surface obtained with the method presented in (Chen & Chen, 2003), where p = 65%, s = 4, and a 5 × 5 support for subpixel compensation were used.
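The adaptive thresholding described above can be sketched as follows in Python, assuming 8-bit grey blocks; the function name and the NumPy-based formulation are ours, not from the original system.

```python
import numpy as np

def binarize_block(G, p=0.65):
    """Percentile-based adaptive thresholding of an m-by-m grey block.

    TH_p is the smallest grey value z whose cumulative pixel count reaches
    p * m^2; pixels >= TH_p become 1 (white), others 0 (black), so a
    higher p leaves fewer nonzero pixels.
    """
    hist = np.bincount(G.ravel(), minlength=256)
    cdf = np.cumsum(hist)
    th = int(np.searchsorted(cdf, p * G.size))  # threshold grey value TH_p
    return (G >= th).astype(np.uint8)

# Example usage on a random 8-bit block
rng = np.random.default_rng(0)
G = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
B = binarize_block(G, p=0.65)
print(B.mean())  # roughly 0.35 of the pixels remain nonzero
```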
Algorithms for improving 3D measurement
To improve the accuracy of the 3D information, we have developed another approach, different from our previous spatial distance computation. The idea comes from the analysis of particle image velocimetry using compressed image correlation (Hart, 1998). In the following, under a hierarchical search scheme, pixel level computation and subpixel level computation combined with brightness compensation are presented to improve the 3D measurement.
Pixel level computation
A hierarchical search scheme is adopted in the pixel level computation. First, the left image is divided into a set of larger fixed-size blocks; this is level 1, the top layer. Consider a block B_l^1 in the left image: if a block B_r^1 in the right image has the best correlation, then the vector V_1 from the coordinate of B_l^1 to that of B_r^1 is found. Owing to this coarse-to-fine strategy, the next search is confined to the range indicated by V_1 in the right image, and the execution time can be further reduced. Next, the block image B_l^1 of level 1 is further divided into four subblocks; this is level 2. Consider the subblock B_l^2 in B_l^1 with the same coordinate: guided by the vector V_1, the correlation process is performed only on the neighbouring subblocks centred at the coordinate of B_r^1. The best correlation gives the vector V_2 from the coordinate of B_l^2 to a subblock B_r^2. Continuing this process, if the best match is found and the search ends at level n, the final vector of best correlation may be expressed as

V = V_1 + V_2 + … + V_n.

To reduce the computation time of the correlation, a so-called correlation error function for an M × N image is used (Hart, 1998), defined as the sum of absolute differences between the two block images:

Φ(I_1, I_2) = Σ_{i=1}^{M} Σ_{j=1}^{N} | I_1(i, j) − I_2(i, j) |.    (5)
This function uses only addition and subtraction, so a reduction in computation time can be expected. Note that the processed images I are binarized by the adaptive thresholding described in Section 2, so they contain only the values 1 and 0.
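A minimal sketch of this correlation error function and a single-level exhaustive search, assuming binarized uint8 blocks and ignoring image-boundary handling (in the hierarchical scheme this search would be run once per level over a shrinking range):

```python
import numpy as np

def correlation_error(I1, I2):
    """Correlation error between two binary block images using only
    additions and subtractions: the number of mismatching pixels.
    Lower values mean better correlation."""
    return int(np.abs(I1.astype(np.int16) - I2.astype(np.int16)).sum())

def best_match(block, search_img, center, radius):
    """Exhaustive pixel-level search of `block` inside `search_img`
    within +/- radius of `center`; returns the displacement vector V.
    The caller must keep the search window inside the image."""
    m, n = block.shape
    cy, cx = center
    best_v, best_err = None, np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            cand = search_img[cy + dy:cy + dy + m, cx + dx:cx + dx + n]
            err = correlation_error(block, cand)
            if err < best_err:
                best_v, best_err = (dy, dx), err
    return best_v
```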
Subpixel level computation
To increase the accuracy of the correspondence finding, two schemes are combined. One is grey scale interpolation; the other is brightness compensation. For grey scale interpolation, a linear scheme is applied to the third layer of the right image. In our study, the block size of the third layer is 8 × 8. The processing includes the following two steps.
Step 1. Use the pixel grey levels in the vertical direction to interpolate the subpixel grey levels between two neighbouring pixels, e.g., by 3-point interpolation.
Step 2. Based on the pixel and subpixel grey levels found in Step 1, interpolate the subpixel grey levels in the horizontal direction; 3-point interpolation is again considered as the example. A comparison among pixel level, subpixel level, and after interpolation is illustrated in Fig. 3(a)-(c), respectively. Observing the image in Fig. 3(c), the smoothness is improved greatly in the middle of the image, but the randomness becomes more serious at the two sides. This results from the non-uniform brightness between the two CCD cameras; hence, a brightness compensation scheme is presented to solve the problem. Referring to the correlation error function in (5), the correlation function (CF) used here is redefined in a piecewise form (equation (6)) that operates on the grey-level, rather than binary, block images.
Consider (7): if two block images I_1 and I_2 have different brightness, the correlation from I_1 to I_2 will differ from that from I_2 to I_1; moreover, it will be dominated by the block image with the lower grey level distribution. As a result, the more uniform the distributions of the two block images, the higher the accuracy of the correlation, and vice versa. To compensate for such non-uniform brightness between two block images and reduce the error, a local compensation factor (LCF) is introduced (equation (8)), essentially a ratio that equalizes the overall grey levels of the two blocks; (6) is then modified accordingly and named the CF with brightness compensation (BC) (equation (9)).
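The brightness compensation can be illustrated as below; note that the LCF used here, the ratio of the total grey levels of the two blocks, is our interpretation of equation (8), not necessarily the authors' exact formula.

```python
import numpy as np

def cf_with_brightness_compensation(I1, I2):
    """Correlation error with a local brightness compensation.

    The local compensation factor (LCF) is assumed here to be the ratio
    of the total grey levels of the two blocks (our reading of equation
    (8)); I2 is rescaled by the LCF before the error is evaluated, so a
    block with uniformly lower brightness is not penalized.
    """
    s1, s2 = float(I1.sum()), float(I2.sum())
    if s2 == 0.0:
        return float(np.abs(I1.astype(np.float64)).sum())
    lcf = s1 / s2
    return float(np.abs(I1.astype(np.float64) - lcf * I2).sum())
```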
According to (9), the results in Fig. 3(d) and 3(e) show that good quality can be obtained. Here, BC 32 means that 32 feature points are used in the subcorrelation. In our experiments, the accuracy is increased by 0.2-0.3 mm by the scheme of interpolation with brightness compensation; the trade-off is a 4-5 times increase in computational time.
Results
Consider the two captured images shown in Fig. 2(a) and 2(b): three reconstructed results using pixel level computation, subpixel level computation, and the further improvement by interpolation with BC are shown in Fig. 4(a), 4(b), and 4(c), respectively. The last result clearly shows the best performance. To further increase the accuracy of reconstructing a 3D object, a suitable method is to use a high resolution CCD system to capture more data for an object. For example, in our system, Fig. 5(a) shows a normal resolution result at 652 × 512, whereas Fig. 5(b) shows a high resolution result at 1304 × 1024. Their specifications are listed in Table 1. For a high resolution CCD system, more data must be processed, so we present a simplified procedure to solve the time-consuming problem. For the case of 1304 × 1024, the processing procedure is as follows.
Step 1. Down sampling. The image is reduced to a 652 × 512 resolution.
Step 2. Pixel level correlation with 3 levels is performed on the 652 × 512 image. In this step, the coarse 80 × 64 correlation vectors are obtained at the lowest level.

Table 1. Comparison between a normal and a high resolution CCD system in our study.

                                Normal (652 × 512)   High (1304 × 1024)
View Range (a)                  83 × 62              180 × 150
Image Density Resolution (b)    7.7 × 7.7            7.2 × 6.8

(a) "View Range" is defined as (object width) × (object height).
(b) "Image Density Resolution" is defined as (image width/object width) × (image height/object height); the unit is (pixel/mm) × (pixel/mm).

To further demonstrate the quality of our algorithms, four objects in Fig. 6(a) and their reconstructed results in Fig. 6(b) and 6(c) are given. Note that these results are obtained from one view only, so they can be regarded as 2.5D range image data. If multiple views are adopted and manipulated by our TriD system, a fully 3D result can be generated, as illustrated in Fig. 7. As a result, a set of effective algorithms has been successfully developed for our 3D measurement system.
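Step 1 above may be realized by simple 2 × 2 block averaging, as in this sketch; the function name is ours:

```python
import numpy as np

def downsample2(img):
    """Halve the resolution by 2x2 block averaging (Step 1 above).
    The 3-level pixel correlation is then run on the reduced image."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w].astype(np.float64)
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])
```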
3-D Camera
The proposed 3D hand-held camera contains three main components: a commercial DSC (Nikon D1 camera body), a patented three-hole aperture lens (Huang, 2001; Chen & Huang, 2002), and a flash, as shown in Fig. 8(a). The flash projects the speckle pattern onto the object, and the camera captures a single snapshot at the same time. To embed the 3D information in one captured image, we devised a novel lens containing three off-axis apertures, each covered by a colour filter as depicted in Fig. 8(b), so that a captured image carries information from three different viewing directions. Since three different images can be extracted by filtering the captured image into its red, green, and blue components, the depth information can be obtained from these images using the algorithms introduced in Section 3. To illustrate the principle of our tri-aperture structure, an example of a lens with two apertures is depicted in Fig. 8(c). Three points, P1, P2, and P3, are set on the central axis, where P2 is located at the focal plane, and P1 and P3 are located at far and near points with respect to the lens. The rays reflected from P2 that pass through apertures A and B intersect at the same location on the image plane, whereas P1 or P3 are imaged at two different points. Accordingly, the depth of P1 and P3 may be computed from the disparity of their corresponding points on the image plane. To extract the depth information from the single image, the image should be separated based on the colour filters. The colour composition and decomposition in Fig. 9 and the example shown in Fig. 10 are given for illustration. In Fig. 10(a), the image shows a mixture of R, G, B colour pixels, since it merges the images from the different directions and colour filters. After the colour separation process, three distinct images based on the R, G, B components are obtained, as shown in Fig. 10(b)-10(d). Based on our depth computation algorithm embedded in the TriD system, the range data are obtained as Fig. 10(e) shows. If a grey image is applied, we can obtain a grey textured 2.5D image, as in Fig. 10(f), using a rendering process in TriD. Similarly, once a colour image is fed into our system, a colour textured 2.5D image may also be obtained, as shown in Fig. 10(g) and 10(h) with different view angles. Note that in this processing a cross-talk problem may arise, i.e., the G and B components may corrupt the R-filtered image, for example. In our study, this problem may be mitigated by increasing the image intensity while the image is being captured. The processing stages of our acquisition system using the proposed 3D camera are as follows. The camera captures two images of the target. The first snap gets the speckled image (for 3D information computation), which is split into three images by the colour decomposition described before; the correlation process is then used to compute the depth information. The second snap gets the original image (as a texture image) for further model rendering. As an example, the human face of one author (Chang, I. C.) of this article is used for modelling. The speckled and texture images are captured and shown in Fig. 11(a) and 11(b), respectively. After processing with the TriD software system, a 3D face model and its mesh model are obtained, as shown in Fig. 11(c) and 11(d), respectively. This result demonstrates the feasibility of our 3D hand-held camera system.
Fig. 11. The speckled image (a) and texture image (b) taken by our 3D hand-held camera system; the 3D face model (c) and its mesh model (d) manipulated by our TriD system.
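The colour decomposition step can be sketched as below; the dominance-based cross-talk suppression is our own simple heuristic, not the procedure used in the original system.

```python
import numpy as np

def decompose_views(rgb_image):
    """Split a single tri-aperture snapshot into the three single-channel
    views (R, G, B), one per colour-filtered aperture.  As a crude
    cross-talk reduction, each channel is kept only where it dominates
    the other two (this heuristic is ours, not the authors')."""
    r, g, b = (rgb_image[..., i].astype(np.float64) for i in range(3))
    views = []
    for ch, o1, o2 in ((r, g, b), (g, r, b), (b, r, g)):
        views.append(np.where((ch > o1) & (ch > o2), ch, 0.0))
    return views  # [R-view, G-view, B-view], ready for the Section 3 correlation
```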
As described above, using a typical digital camera with our patented three-hole aperture lens, along with high-accuracy calculation, the entire 3D image capturing process can now be done directly with a single lens in our 3D camera system. The three-hole aperture provides more 3D information than the dual-camera system because of its multi-view property; the depth resolution can therefore be increased considerably. Currently, this 3D camera system has reached sub-millimetre precision. The main system specifications are listed in Table 2; the processing software is implemented as a plug-in module in the TriD system.

Table 2. System specifications of our 3D hand-held camera design.
Conclusion
Three-dimensional information has long been an important topic, of interest for many real applications. However, it is not easy to obtain 3D information, owing to several inherent constraints of real objects and imaging devices. In this article, based on our studies in recent years, we have presented effective algorithms that use a random speckle pattern projected onto an object to obtain the useful correspondence (correlation) vectors and thus reconstruct the 3D information for the object. The original two-CCD-camera system has also evolved into a novel 3D hand-held camera containing a DSC, a patented three-hole aperture lens, and a flash projecting the random speckle pattern. Based on the manipulations of our TriD system, the captured 3D data can be visualized, edited, and modelled.
"year": 2009,
"sha1": "0918c41b281eb5e2e99e4d654850d740a6e1f61f",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.intechopen.com/citation-pdf-url/6680",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "7b4ae06f7e7485f9461aa08128acca80916ae56f",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Numerical assessment of cavitation erosion risk using incompressible simulation of cavitating flows
In this paper, a numerical method to assess the risk of cavitation erosion is proposed, which can be applied to incompressible simulation approaches. The method is based on the energy description of cavitation erosion, which considers an energy transfer between the collapsing cavities and the eroded surface. The proposed framework provides two improvements compared with other published methods. First, it is based on the kinetic energy in the surrounding liquid during the collapse instead of the potential energy of collapsing cavities, which avoids the uncertainty regarding the calculation of the collapse driving pressure in the potential energy equation. Secondly, the approach considers both micro-jets and shock-waves as the mechanisms for cavitation erosion, while previous methods have taken into account only one of these erosion mechanisms. For validation, the proposed method is applied to the cavitating axisymmetric nozzle flow of Franc et al. (2011), and the predicted risk of cavitation erosion is compared with the experimental erosion pattern. This comparison shows that the areas predicted with high erosion risk agree qualitatively well with the experimental erosion pattern. Furthermore, as the current method can be used to study the relationship between the cavity dynamics and the risk of cavitation erosion, the hydrodynamic mechanism responsible for the high risk of cavitation erosion at the inception region of the sheet cavity is investigated in detail. It is shown for the first time that the risk of cavitation erosion in this region is closely tied to the separation of the flow entering the nozzle.
Introduction
Hydrodynamic cavitation is unavoidable in high-performance hydraulic machinery, such as propellers, water turbines, pumps, and diesel injectors. This phenomenon occurs when the pressure drop in an accelerating liquid flow leads to the formation of pockets of vapor, known as cavities. The collapse of these cavities near a surface is associated with a high mechanical load in the material, which can eventually lead to material loss and cavitation erosion. This material loss significantly increases the operating costs of hydraulic machinery; therefore, it is essential to assess the risk of cavitation erosion in the design process. Traditionally, the cavitation erosion risk is assessed by applying experimental methods to the prototype of a newly designed machine. These experimental methods include visual assessment of collapsing cavities using high-speed videos [1], complemented by paint tests and/or acoustic measurements [2,3,4]. Such experimental methods are, however, expensive and mostly used in the late stage of the design process. Therefore, numerical methods capable of assessing the risk of cavitation erosion are an attractive alternative, as they can be applied in the early stage of the design.
Owing to significant progress over the past decade, current numerical simulations of cavitating flows are capable of reproducing the large-scale cavity dynamics controlling cavitation erosion. With this capability, it has recently become feasible to develop methods that can predict the cavitation erosion risk based on numerical results. Several such approaches have been proposed in the literature, and they can be categorized into two groups: methods based on compressible simulations and methods based on incompressible simulations. In the case of compressible simulation, the strength of the collapse-induced shock-waves captured by the simulation is analyzed to assess the risk of cavitation erosion. These methods have been applied mostly to high-speed cavitating flows, where the time-scale of the flow is comparable to the time-scale of the collapse-induced shock waves. Koukouvinis et al. [5] and Örley et al. [6] investigated cavitation erosion in high-speed cavitating flows in diesel injectors using compressible methods and investigated the hydrodynamic mechanisms leading to cavitation erosion. Mihatsch et al. [7] studied a cavitating flow in an axisymmetric nozzle with the aim of assessing the risk of cavitation erosion. They showed that the areas predicted with a high risk of cavitation erosion agree well with the experimental erosion pattern of Franc et al. [8]. Blume and Skoda [9] examined a high-speed cavitating flow over a hydrofoil with the aim of assessing the risk of cavitation erosion. Using the locations of aggressive collapses and pressure peaks on the foil surface, they were able to predict the areas with a high risk of cavitation erosion. It should be mentioned that compressible methods have been applied to low-speed cavitating flows around propellers and foils by Budich et al. [10,11] and Arabnejad et al. [12]. However, the simulations in these studies are inviscid, owing to the high computational cost of compressible methods for low-speed cavitating flows.
Alternatives to the above compressible approaches are methods where incompressible simulations are used to assess the risk of cavitation erosion. The aim then is to estimate the risk of cavitation erosion based on the flow properties in the simulation. Ochiai et al. [13] used a method where Lagrangian bubbles are injected in the simulation of a cavitating flow, and the risk of cavitation erosion is assessed based on the acoustic pressure emitted from these bubbles. Similarly, Peters and el Moctar [14] proposed an erosion assessment method based on Lagrangian bubbles present in a multi-scale Euler-Lagrange simulation of cavitating flows. Krumenacker et al. [15] developed a numerical erosion assessment using the acoustic energy of bubble/cavity implosion. This acoustic energy was obtained from Rayleigh-Plesset software which takes input from an Eulerian simulation of cavitating flows. Li et al. [16] developed a method to assess the risk of cavitation erosion based on the accumulation of the time derivative of pressure on the surface. The method was successfully applied to a cavitating flow over a foil; however, the cavitation erosion risk predicted by this method is highly dependent on the threshold of the method, according to Eskilsson and Bensow [17]. Alternatively, Koukouvinis et al. [18] proposed an erosion risk indicator as a function of the total derivative of pressure and vapor fraction, which was then applied to the cavitating flow in the experiment of Franc et al. [8]. Dular et al. [19] developed a method to estimate the risk of cavitation erosion based on the micro-jet hypothesis. Peters et al. [20,21] proposed a similar method considering the micro-jet mechanism of cavitation erosion and applied the method to the cavitating flows around a ship propeller in model- and full-scale. Eskilsson and Bensow [17] compared three of the above mentioned numerical erosion indicators and concluded that further research is necessary to develop reliable numerical erosion assessment methods.
A sub-category of incompressible erosion assessment methods, identified by Van et al. [22] as more suitable for numerical erosion assessment, is based on the energy description of cavitation erosion [1,23,24]. This description considers a balance between the risk of cavitation erosion and the potential energy of collapsing vapor structures, which is assumed to be proportional to the vapor content of the collapsing cavity structures and the collapse driving pressure. Using the energy description of cavitation erosion, a few methods have been proposed [24,25] and applied [26,27,28] in the literature. All of these methods, however, carry an uncertainty regarding the definition of the collapse driving pressure, as also noted by Schenke et al. [25]. According to Vogel and Lauterborn [29], the collapse driving pressure for a single collapsing bubble can be reasonably approximated by the pressure measured far from the collapse center; however, for complex unsteady cavitating flows with several cavities interacting with each other, it remains uncertain how this driving pressure should be obtained.
In this paper, a new numerical method to assess the risk of cavitation erosion is presented. Similar to Fortes-Patella et al. [24] and Schenke et al. [25], the present method uses the energy balance between the collapsing cavities and cavitation erosion. However, in order to avoid the uncertainty regarding the definition of the collapse driving pressure, the developed method is based on the kinetic energy in the liquid surrounding the collapsing cavities instead of the potential energy stored in these cavities. The method is then applied to a cavitating axisymmetric nozzle stagnation flow, and the erosion pattern obtained by the present method is compared with the experimental material removal reported by Franc et al. [8]. This paper is divided into five sections. Following this introduction, the developed method is presented, starting with the theoretical framework for estimating the energy released by a collapsing cavity, followed by some implementation details on how the cavity dynamics is traced, leading to the erosion risk estimate. Then the numerical set-up and test case used for validation of the method are described. The results are presented, including a detailed discussion of the cavitation development and the hydrodynamic mechanisms leading to erosion in this flow, as well as the comparison between the predicted risk of cavitation erosion using the developed method and the experimental erosion pattern.
Estimation of erosion risk
The energy-based description of cavitation erosion caused by a macro-scale cloud cavity containing a large number of bubbles is shown in Fig. 1. This description suggests that when a macro-scale cloud cavity is created in the low-pressure region, the surrounding liquid gains potential energy (Fig. 1a). As this cloud cavity moves into the pressure recovery region and the collapse starts, most of the potential energy converts into kinetic energy stored in the inward motion of the liquid, while the rest dissipates away from the cavity (Fig. 1b). The dissipated energy can be in the form of internal energy due to viscosity, or acoustic energy when the shock-waves from the collapse of bubbles at the boundary of the cloud cavity radiate away from the cavity. At the end of the collapse, depending on the distance between the cloud cavity and the nearby surface, the kinetic energy of the liquid is converted into acoustic energy carried by collapse-induced shock-waves, as a result of bubbles collapsing away from the surface, and/or focused into micro-jets due to the collapse of some bubbles near the surface (Fig. 1c and d). When these shock-waves or micro-jets hit the surface, a portion of their acoustic or kinetic energy is absorbed by the material (Fig. 1e). According to Fortes-Patella et al. [30], if the energy absorbed by the material exceeds a certain threshold, which is a function of the material properties, cavitation erosion can occur. It can be noted from this energy-based description that the kinetic energy in the liquid surrounding a collapsing cloud cavity is transferred to the nearby material at the bubble scale, normally considered to occur through two mechanisms, shock-waves and micro-jets. Capturing the details of this energy transfer directly in engineering simulations is not computationally feasible, considering both the very high mesh resolutions needed in the Eulerian approach and the compressibility effects and their time scale. Here, we instead present the development of a framework which models this energy cascade based on the flow quantities at the level of macro-scale cavities in the Eulerian incompressible simulation of cavitating flows. In this modeling, first the dynamics of collapsing macro-scale cavities are traced during the simulation, and the kinetic energy in the liquid surrounding these collapsing cavities is estimated. Then, a subgrid model determines the portion of this kinetic energy which is transmitted to the material surface, either through an approximation of a pressure wave or of a micro-jet. In the following subsections, first the theoretical description of estimating this kinetic energy is presented, and then the implementation of tracking the cavity dynamics and the erosion estimation is described.
Theoretical description of estimating the kinetic energy
The kinetic energy in the liquid surrounding a collapsing cavity can be obtained from

E_k = ∫_{V_l} (1/2) ρ_l U_r² dV,    (1)

where V_l is the volume of the surrounding liquid, ρ_l is the liquid density, and U_r is the collapse-induced velocity in the surrounding liquid. The discretized form of equation (1) over a finite volume mesh can be written as

E_k = Σ_i (1/2) ρ_i V_i U_{r,i}²,    (2)

where ρ_i and V_i are, respectively, the density and the volume of the cell i located in the surrounding liquid, and U_{r,i} is the collapse-induced velocity in the cell i, which can be obtained from

U_{r,i} = (U→_i − U→_c) · d→_i / |d→_i|.    (3)

In the above equation, U→_i is the velocity in the cell i, U→_c is the volume-averaged velocity of the cells containing the collapsing cavity, and d→_i is the vector connecting the center of cell i and the center of the collapsing cavity.
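Equations (2)-(3) translate directly into a short routine; the following NumPy sketch (names and array layout are our assumptions) computes the kinetic energy of the collapse-induced motion from per-cell data:

```python
import numpy as np

def surrounding_kinetic_energy(rho, vol, U, cell_centers, U_cavity, cavity_center):
    """Discrete form of equation (2): kinetic energy of the collapse-
    induced motion in the liquid cells surrounding a cavity.

    rho, vol     : (N,) density and volume of the surrounding liquid cells
    U            : (N, 3) cell velocities
    cell_centers : (N, 3) cell centres (assumed distinct from the cavity centre)
    U_cavity     : (3,) volume-averaged velocity of the cavity cells
    cavity_center: (3,) centre of the collapsing cavity
    """
    d = cell_centers - cavity_center                    # vectors d_i
    d_hat = d / np.linalg.norm(d, axis=1, keepdims=True)
    U_r = np.einsum('ij,ij->i', U - U_cavity, d_hat)    # eq. (3): radial component
    return float(0.5 * np.sum(rho * vol * U_r**2))      # eq. (2)
```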
Using equation (2) to obtain the kinetic energy of the surrounding liquid requires a loop over the cells in the entire domain at each time step for each collapsing cavity, which is computationally expensive. As a remedy for this high computational cost, the liquid surrounding a collapsing cavity is split into two volumes, a near-field volume and a far-field one. The near-field volume, V_l1 in Fig. 2b and c, includes the liquid inside the sphere or spherical cap with a radius five times larger than the radius of the sphere with the same volume as the collapsing cavity, while the far-field volume, V_l2 in Fig. 2b and c, contains the liquid outside of this sphere or spherical cap. The choice of the radius of the sphere or the spherical cap is somewhat arbitrary: choosing a larger near-field volume increases the computational cost of the method, compromising its applicability in industrial applications, while selecting a smaller near-field volume may increase the error related to approximating the kinetic energy in the far-field volume. It should be mentioned that the sensitivity of the predicted risk of cavitation erosion to the chosen value of this radius is examined in the results section. Based on the volume split, the kinetic energy in the surrounding liquid is split into the near-field kinetic energy, E_{k,1}, and the far-field kinetic energy, E_{k,2},

E_k = E_{k,1} + E_{k,2}.    (4)

The near-field kinetic energy can be obtained directly from equation (2), while the far-field kinetic energy, E_{k,2}, is approximated under the assumption that the collapse-induced velocity in the far-field volume, U_r, is only a function of the distance from the center of the cavity, r. This assumption holds for a cavity of arbitrary shape if the distance from the cavity center is significantly larger than the characteristic length of the cavity. Applying this assumption and using the volume increments, dV, in Fig. 3a, b, and c, E_{k,2} can be approximated as

E_{k,2} ≈ ∫ (1/2) ρ_l U_r(r)² dV,    (5)

with the integration taken over the far-field volume, where R_s and R_sc are, respectively, the radius of the sphere and of the spherical cap shown in Fig. 2b and c, and h is the normal distance between the center of the cavity and the nearest wall. To compute the above integral, the distribution of the collapse-induced radial velocity, U_r, must be estimated. The continuity equation for the mixture enclosed by the volume increments in Fig. 3a, b, and c gives

ρ_l U_r A = − ρ_l V̇_v,    (6)

where A is the surface area of the volume increment and V̇_v is the time derivative of the vapor content in the cloud cavity.

Fig. 3. Volume increments (red transparent sphere or spherical cap shells) used in the integration of the kinetic energy in the far-field; the solid gray sphere or spherical cap represents the interface of the volume split shown in Fig. 2. (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.)

Note that the left hand side of equation (6) is the liquid mass flux across the volume increment and the right hand side is the mass transfer rate inside the volume increment. Assuming that ρ_l ≫ ρ_v and substituting the definitions of the surface areas A in Fig. 3a, b, and c, the distribution of the induced radial velocity can be obtained as

U_r(r) = − V̇_v / A(r),    (7)

where A(r) = 4πr² for the spherical increments, and A(r) follows the cap geometry for the spherical-cap increments near a wall. Substituting the above equation into equation (5) and evaluating the integral gives the far-field kinetic energy; for the spherical case this yields E_{k,2} = ρ_l V̇_v² / (8π R_s), while the spherical-cap case introduces additional logarithmic terms in h and R_sc (equation (8)). Using equations (2), (4) and (8), the final form of the kinetic energy in the surrounding liquid is obtained (equation (9)), again involving logarithmic terms for cavities near a wall.

In order to validate the modeling presented above, a series of collapsing spherical mixture cloud cavities with different distances from the wall was simulated. The vapor fraction in these mixture clouds was initialized using a Gaussian distribution with a maximum vapor fraction of 0.8 at the center of the cloud. Using this method of initialization, the resultant clouds represent cloud cavities in the homogeneous mixture approach, obtained by volume averaging spherical clouds of bubbles on a coarse mesh. The series of simulations consists of variations of the ambient pressure, p_∞ = [1, 2, 3, 4, 10] bar; the initial radius, R_0 = [2, 4, 8] mm; and the initial vapor content, V_{v,0} = [4.23 × 10⁻⁷, 8.42 × 10⁻⁷, 1.71 × 10⁻⁶] m³. The results are consistent for all conditions, and we here limit the presentation to Fig. 4 and the mixture cloud cavity with p_∞ = 10 bar, R_0 = 8 mm, and V_{v,0} = 1.71 × 10⁻⁶ m³. For each collapsing cavity, the exact instantaneous kinetic energy is obtained by summing the kinetic energy in the cells of the computational domain, and the approximate kinetic energy is calculated by equation (9). Fig. 4 shows the evolution of the ratio between the kinetic energy in the surrounding liquid, obtained by both the exact and the approximate formulations during the collapse, and the initial potential energy, E_{p0} = p_∞ V_{v,0}. The comparison between the exact and approximate formulations indicates that the approximate kinetic energy agrees well with the exact one. It can also be seen that when the collapse starts, the collapse-induced kinetic energy in the surrounding liquid increases progressively. This kinetic energy reaches its maximum, around 65% of the initial potential energy, before the end of the collapse and suddenly decreases to almost zero at the end of the collapse. It should be mentioned that the 35% of the potential energy which is not converted to kinetic energy is dissipated by viscous effects. As the collapse proceeds, the velocity gradient in the surrounding liquid becomes very high. This high gradient then activates the viscous terms in the momentum equations, which are responsible for the dissipation of the potential energy. It is also interesting to note that, regardless of the distance between the wall and the cavity, the ratio between the volume of the cavity at maximum kinetic energy and the initial volume of the cavity is 0.07. This value will be used later to determine which mechanism of cavitation erosion, micro-jet or shock-wave, is important.
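For a cavity far from walls, the evaluated far-field integral reduces to a one-line formula; a sketch, assuming the spherical case of equation (8):

```python
import numpy as np

def farfield_kinetic_energy_sphere(rho_l, dVv_dt, R_s):
    """Far-field kinetic energy for a cavity far from walls (spherical
    volume increments): with U_r = |dVv/dt| / (4 pi r^2), the integral of
    0.5 * rho * U_r^2 over r > R_s evaluates to rho * (dVv/dt)^2 / (8 pi R_s).
    The near-wall (spherical-cap) case adds logarithmic terms, cf. eq. (8).
    """
    return rho_l * dVv_dt**2 / (8.0 * np.pi * R_s)

# Example: water, vapor volume shrinking at 1e-4 m^3/s, near-field radius 5 mm
print(farfield_kinetic_energy_sphere(998.0, -1e-4, 0.005))  # ~7.9e-5 J
```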
Implementation of the method
According to the energy approach shown in Fig. 1, the maximum kinetic energy near the end of the collapse, E_{k,max}, is focused onto the material through the shock-wave or micro-jet mechanism; therefore, E_{k,max} should be used to estimate the aggressiveness of collapse events. However, obtaining this maximum kinetic energy for each collapsing cavity requires that the cavity be tracked up to its eventual collapse, as E_{k,max} occurs before the end of the collapse (shown in Fig. 4). In the following sections, the algorithms used for detecting the collapsing cavities in each time step and tracking them between consecutive time steps are explained.
Detecting collapsing cavities
The algorithm used to identify collapsing cavities is similar to the one used by Vallier [31]. At each time step, a list is created of cells with a vapor fraction, α_v, larger than a threshold (α_v > 0.01) and a negative total time derivative of the vapor content (V̇_v < 0.0). The latter condition simply implies that the collapse has been initiated in these cells. The algorithm goes through this list and extracts the collapsing cavities from the cells in the list that are neighbours. In order to reduce the computational cost of the implemented tool, the collapsing cavities are detected close to their eventual collapse, when all of the cells containing the cavity have a negative total time derivative of vapor content. For each detected collapsing cavity, i, the kinetic energy in the surrounding liquid, E_{k,i}, is obtained from equation (9). The volume of the cavity, V_i, and the cavity center of volume, C_i, are also calculated from

V_i = Σ_{j=1}^{k} V_{cell,j},    (10)

C_i = (1/V_i) Σ_{j=1}^{k} V_{cell,j} C_{cell,j},    (11)

where k is the number of cells containing the collapsing cavity and V_{cell,j} and C_{cell,j} are, respectively, the volume and the center of cell j.
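A sketch of the cavity extraction as a flood fill over the cell-adjacency graph; the data layout (mappings keyed by cell id) is our assumption:

```python
def extract_collapsing_cavities(alpha_v, dVv_dt, neighbours):
    """Group marked cells (alpha_v > 0.01 and dVv/dt < 0) into connected
    cavities by flood fill over the cell-adjacency graph.

    alpha_v, dVv_dt : mappings cell id -> field value
    neighbours      : mapping cell id -> iterable of adjacent cell ids
    Returns a list of cavities, each a list of cell ids.
    """
    marked = {c for c in neighbours if alpha_v[c] > 0.01 and dVv_dt[c] < 0.0}
    cavities, seen = [], set()
    for seed in marked:
        if seed in seen:
            continue
        stack, cavity = [seed], []
        seen.add(seed)
        while stack:
            c = stack.pop()
            cavity.append(c)
            for nb in neighbours[c]:
                if nb in marked and nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        cavities.append(cavity)
    return cavities
```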
Cavity tracking
In order to track the collapsing cavities between consecutive time steps, the cavities extracted in the previous section are stored in a list called cavityListNew. The collapsing cavities from the previous time step are also kept, stored in a list called cavityListOld. The main task in tracking the cavities is to find the best match for a collapsing cavity in cavityListNew from cavityListOld. The methodology for finding the best match is based on overlap detection and is similar to the one presented in Silver and Wang [32]. In each time step, a table called the overlap table is created by checking the overlap between cavities in cavityListNew and cavityListOld. A schematic view of the overlap table is shown in Fig. 5. The table has m rows and n columns, where m and n are, respectively, the sizes of cavityListNew and cavityListOld. Initially, the values in the table are set to zero. After the overlap detection, the value stored in row i and column j of this table is set to 1 if the ith cavity in cavityListNew overlaps with the jth cavity in cavityListOld. Based on the non-zero values in the overlap table, the following events can be detected.
• Creation: If all of the values in row i are zero, the cavity i in cavityListNew does not overlap with any cavity in cavityListOld; therefore, it is a newly detected collapsing cavity. For this cavity, the maximum kinetic energy in the surrounding liquid, E_{k,max,i}, is equal to the kinetic energy calculated from equation (9).

• Continuation: If the value in row i and column j is one while the other values in that row and column are zero, the cavity i in cavityListNew and the cavity j in cavityListOld overlap only with each other. In this case, the cavity i is the continuation of the cavity j. The maximum kinetic energy of cavity i is then obtained from

E_{k,max,i} = max(E_{k,i}, E_{k,max,j}),    (12)

where E_{k,i} is the instantaneous kinetic energy of the cavity i and E_{k,max,j} is the maximum kinetic energy of the cavity j.

• Merge/Break-up: If there are several non-zero values in row i, it is assumed that several cavities in cavityListOld have merged together and formed the cavity i in cavityListNew. Similarly, if there exists more than one non-zero value in column j, it is assumed that the cavity j in cavityListOld has broken up into several cavities in cavityListNew. In both cases, the cavities in cavityListNew formed by break-up or merge are treated as newly detected collapsing cavities; therefore, the maximum kinetic energy in the surrounding liquid, E_{k,max,i}, is equal to the kinetic energy calculated from equation (9).

• Collapse: If all of the values in column j are zero, the cavity j in cavityListOld does not overlap with any cavity in cavityListNew. It is then assumed that the cavity j in cavityListOld has collapsed in the new time step.
For each collapsed cavity detected by the above algorithm, the collapse location, C_i, the maximum kinetic energy, E_{k,max,i}, and the volume of the cavity at maximum kinetic energy, V_{k,max,i}, are written out as the output of the cavity tracking algorithm.
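The event classification encoded by the overlap table can be sketched as follows; the cavity dicts carrying 'E_k' and 'E_k_max' entries are our simplification of the implementation:

```python
import numpy as np

def track_cavities(new_cavities, old_cavities, overlaps):
    """Classify the events encoded by the m-by-n overlap table.

    overlaps[i, j] == 1 if new cavity i overlaps old cavity j.  Each
    cavity is a dict with keys 'E_k' (instantaneous kinetic energy) and
    'E_k_max'.  Old cavities with an all-zero column are reported as
    collapsed; their E_k_max is the collapse strength.
    """
    for i, cav in enumerate(new_cavities):
        row = overlaps[i, :]
        if row.sum() == 1:
            j = int(np.argmax(row))
            if overlaps[:, j].sum() == 1:
                # one-to-one overlap: continuation, eq. (12)
                cav['E_k_max'] = max(cav['E_k'], old_cavities[j]['E_k_max'])
                continue
        # creation, merge, or break-up: treat as a new cavity
        cav['E_k_max'] = cav['E_k']
    return [old_cavities[j] for j in range(overlaps.shape[1])
            if overlaps[:, j].sum() == 0]           # collapsed cavities
```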
Indicator of cavitation erosion risk
Based on the energy approach, the kinetic energy in the liquid surrounding the collapsing cavities is transferred to the nearby material by the shock-wave or micro-jet mechanisms. These collapsing cavities are mostly clouds of bubbles which are transported to high-pressure regions by the flow. For the cloud cavities collapsing far from the surface, the collapse of the bubbles inside the cloud produces shock-waves; therefore, it can be assumed that the kinetic energy in the surrounding liquid is converted to acoustic energy carried by the collapse-induced spherical shock-waves. When these shock-waves hit the surface, a fraction of their acoustic energy is transferred to the surface. This fraction was estimated by Leclercq et al. [33] using a discrete solid angle projection on triangular surface elements. Similarly, Schenke and van Terwisga [34] introduced a continuous form of the solid angle projection, which does not require the projection on triangular surface elements. Using this continuous form, the amount of energy absorbed by a surface element j due to the collapsing cavity i can be calculated from

E_{mat,j} = (Ω_{i,j} / 4π) E_{ac,i},  with  Ω_{i,j} = (d→_{i,j} · n→_j) A_j / |d→_{i,j}|³,    (13)

where E_{mat,j} is the energy absorbed by the surface element j, d→_{i,j} is the vector connecting the center of the collapse and the center of the surface element, n→_j is the unit normal vector of the surface element, A_j is the area of the surface element, and E_{ac,i} is the acoustic energy in the shock-wave due to the collapse. Note that in equation (13), collapse-induced shock-waves are assumed to decay according to linear acoustic theory, which is in line with the work by Johnsen and Colonius [35], who investigated the collapse of gas bubbles using a compressible solver. For the collapsing cavities on the surface, the bubbles inside the cavities undergo non-spherical collapse, leading to the formation of micro-jets.
For these cavities, it can then be assumed that the kinetic energy in the surrounding liquid converts to the kinetic energy carried by the micro-jets, E_{km,i}. This kinetic energy is assumed to be uniformly transferred to the surface hit by the micro-jets. For the collapsing cavity i, this surface includes the surface elements covered by the approximate projected area of the cavity on the nearby surface, A_{proj,i}. This area can be obtained from

A_{proj,i} = π (3 V_{0,i} / 4π)^{2/3},    (14)

where V_{0,i} is the initial volume of the cavity i, which can be approximated by V_{0,i} = (1.0/0.07) V_{k,max,i} according to Fig. 4. Using the above assumptions, the amount of energy absorbed by a surface element j due to the cavity i collapsing near the surface can be calculated from

E_{mat,j} = (A_j / A_{proj,i}) E_{km,i},    (15)

for the surface elements covered by A_{proj,i}, where h_{i,j} is the normal distance between the collapse center of cavity i and the surface element j (used to identify the nearest surface), and E_{km,i} is the kinetic energy stored in the micro-jet.
To determine which mechanism is dominant for collapsing cloud cavities, simulations or experimental investigations of collapsing clouds of bubbles at different distances from the wall are needed. Such simulations or experimental investigations are not available in the literature, and performing them is out of the scope of this paper; therefore, in the present work, we follow the works by Ochiai et al. [13] and Dular et al. [36] on single bubbles collapsing near a surface. These authors concluded that the mechanism of cavitation erosion for collapsing bubbles depends on the initial stand-off ratio, γ, of the bubbles, which is defined as

γ = h / (3 V_0 / 4π)^{1/3},    (16)

where h is the distance between the bubble and the wall and V_0 is the initial volume of the bubble. The simulations by Ochiai et al. [13] have shown that for bubbles with γ ≥ 3.0, the collapse is almost spherical, and the collapse-induced high pressure on the surface is generated by spherical shock-waves. Similarly, it is assumed here that the collapse of cavities with γ ≥ 3.0 produces only shock-waves, and the kinetic energy in the surrounding liquid of these collapsing cavities is converted to the acoustic energy of the shock wave (E_{ac,i} = E_{k,max,i}). The energy absorbed by the material due to the collapse of these cavities is then obtained from equation (13). For a bubble collapsing on the wall (γ = 0.0), Dular et al. [36] showed that cavitation erosion is caused solely by the micro-jet. Here, we assume that the same is true for cavities collapsing on the surface; therefore, the kinetic energy in the surrounding liquid is assumed to be focused into the kinetic energy of the micro-jet (E_{km,i} = E_{k,max,i}), and equation (15) is used to obtain the energy absorbed by the surface. For cloud cavities with 0 < γ < 3.0, it is assumed that a number of bubbles in these clouds, n_s, collapse away from the surface, leading to the formation of shock-waves, while the rest collapse near the surface and produce micro-jets. For the bubbles collapsing away from the surface, a portion of the acoustic energy carried by the collapse-induced shock-waves is transmitted away from the cloud cavity, which can cause erosion. The rest of this acoustic energy is transferred back into the cloud cavity by acoustic interaction, leading to a higher driving pressure for the bubbles collapsing near the surface. Using these assumptions, the distribution of the kinetic energy between the shock-wave and micro-jet mechanisms for collapsing cavities with 0 < γ < 3.0 can be written as

E_{ac,i} = β (n_s/n_t) E_{k,max,i},   E_{km,i} = (1 − β n_s/n_t) E_{k,max,i},    (17)

where β is the portion of the acoustic energy transmitted away from the cloud cavity and n_t is the total number of bubbles inside the cloud. To obtain the exact distribution of the kinetic energy between the two erosion mechanisms from the above equation, n_s/n_t and β should be known as functions of the initial stand-off ratio, γ, which requires a detailed investigation of collapsing clouds of bubbles at different distances from the wall. As mentioned earlier, this detailed investigation is not available in the literature; therefore, we simply assume that β n_s/n_t changes linearly with γ, with the conditions

β n_s/n_t = 0 at γ = 0  and  β n_s/n_t = 1 at γ = 3,    (18)

i.e., β n_s/n_t = γ/3. To discuss the implication of this linear assumption, consider a cloud cavity with γ = 1.0 and assume that half of the acoustic energy produced by the bubbles collapsing away from the surface in this cloud is absorbed by the bubbles collapsing near the surface (β = 1/2). With this assumption, the assumed linear distribution indicates that 2/3 of the collapsing bubbles produce shock-waves while the rest produce micro-jets. We remark that the bubbles forming micro-jets are not restricted to bubbles which are near the wall at the beginning of the collapse. According to the simulation by Ma et al. [37], due to the non-uniform distribution of the pressure around a cloud collapsing near a wall, a jet-like motion forms toward the wall and pierces the cloud. This jet brings bubbles from locations away from the wall to the region near the wall, leading to a larger number of bubbles producing micro-jets. Substituting the linear distribution of β n_s/n_t based on the conditions in equation (18) into equation (17), the kinetic energy in the surrounding liquid is divided between the kinetic energy in the micro-jet and the acoustic energy of the shock-wave based on the stand-off distance of the cavity as

E_{ac,i} = (γ/3) E_{k,max,i},   E_{km,i} = (1 − γ/3) E_{k,max,i},    (19)

and the energy absorbed by the surface element j is obtained from the sum of the shock-wave contribution of equation (13) and the micro-jet contribution of equation (15),

E_{mat,j} = (Ω_{i,j}/4π) E_{ac,i} + (A_j/A_{proj,i}) E_{km,i},    (20)

where the micro-jet term applies only to the elements covered by A_{proj,i}. According to the experimental study by Okada et al. [38] and the numerical study by Fortes-Patella et al. [30], there is a linear relationship between the volume loss due to cavitation erosion after the incubation period and the total energy stored in the eroded surface. Using this linear relationship, an indicator of the cavitation erosion risk for the surface element j can be defined as

EI_j = (1 / (t_s A_j)) Σ_{i=1}^{n_i} E_{mat,i,j},    (21)

where t_s is the simulation time and n_i is the number of collapse events detected during the simulation. Note that in equation (21), the total absorbed energy is divided by the simulation time and the area of the surface element to make the defined erosion indicator independent of these two parameters.
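Putting equations (13)-(15), (19), and (21) together, the following sketch accumulates the per-face erosion indicator; the 4π solid-angle normalization and the projected-area test reflect our reading of the equations above, not a verbatim transcription of the implementation:

```python
import numpy as np

def accumulate_erosion_indicator(collapses, faces, t_s):
    """Per-face erosion indicator EI_j of eq. (21).

    collapses : dicts with 'center' (3,), 'E_k_max', 'V0', 'gamma'
    faces     : dicts with 'center' (3,), 'normal' (unit vector, 3,), 'area'
    """
    E_mat = np.zeros(len(faces))
    for c in collapses:
        g = np.clip(c['gamma'], 0.0, 3.0)
        E_ac = (g / 3.0) * c['E_k_max']              # shock-wave share, eq. (19)
        E_km = (1.0 - g / 3.0) * c['E_k_max']        # micro-jet share, eq. (19)
        A_proj = np.pi * (3.0 * c['V0'] / (4.0 * np.pi))**(2.0 / 3.0)   # eq. (14)
        R_proj2 = A_proj / np.pi                     # squared projected radius
        for j, f in enumerate(faces):
            d = f['center'] - c['center']
            r = np.linalg.norm(d)
            # shock-wave: solid-angle fraction of the face, eq. (13)
            E_mat[j] += E_ac * abs(np.dot(d, f['normal'])) * f['area'] \
                        / (4.0 * np.pi * r**3)
            # micro-jet: uniform share for faces inside the projected area, eq. (15)
            lateral2 = r**2 - np.dot(d, f['normal'])**2
            if lateral2 <= R_proj2:
                E_mat[j] += E_km * f['area'] / A_proj
    areas = np.array([f['area'] for f in faces])
    return E_mat / (t_s * areas)                     # eq. (21)
```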
Numerical set-up
The above method is implemented in a modified version of the interPhaseChangeFoam solver from the OpenFOAM-2.2.x framework [39]. The solver has been validated and used to study cavitating flows by Bensow and Bark [40] and Asnaghi et al. [41]. The governing equations are the incompressible Navier-Stokes equations for two-phase (liquid-vapor) isothermal flows. Using the homogeneous mixture assumption and applying an LES low-pass filter [42], the filtered equations for the liquid-vapor mixture can be written as

∂ρ/∂t + ∇·(ρ ũ) = 0,    (22)

∂(ρ ũ)/∂t + ∇·(ρ ũ ũ) = ∇·(−p̃ I + τ + τ_sgs),    (23)

where ρ, ũ, and p̃ are, respectively, the phasic filtered density, the Favre phasic filtered velocity vector, and the phasic filtered pressure, I is the identity tensor, τ is the viscous stress tensor, and τ_sgs is the sub-grid scale tensor in the mixture momentum equations. Adopting the homogeneous mixture assumption and assuming that the dynamic viscosity in each phase, μ_k, is constant, the mixture viscous stress tensor, τ, can be obtained from

τ = 2 μ (S̃ − (1/3) tr(S̃) I),    (24)

where S̃ is the mixture strain rate tensor and μ is the mixture viscosity. To account for the effect of the sub-grid scale turbulence, we adopted the wall-adapting local eddy-viscosity (WALE) model proposed by Nicoud and Ducros [43]. In this model, the sub-grid scale tensor, τ_sgs, is written as

τ_sgs = (2/3) ρ k_sgs I − 2 ρ ν_sgs dev(S̃),    (25)

where k_sgs is the sub-grid kinetic energy and ν_sgs is the sub-grid scale turbulent viscosity, which can be obtained from

ν_sgs = C_k Δ √k_sgs.    (26)

In the above equation, Δ is the cell length scale and C_k, a model constant, is assumed to be 1.6; k_sgs, the sub-grid kinetic energy, is calculated from

k_sgs = (C_w² Δ / C_k)² (S̃_d : S̃_d)³ / ((S̃ : S̃)^{5/2} + (S̃_d : S̃_d)^{5/4})²,    (27)

where S̃ and S̃_d are, respectively, the resolved-scale strain rate tensor and the traceless symmetric part of the square of the velocity gradient tensor, and C_w, a model constant, is assumed to be 0.325. The cavity dynamics is captured by Transport Equation Modeling (TEM), where a transport equation for the liquid volume fraction, α_l, is solved. This equation reads
∂α_l/∂t + ∇·(α_l ũ) = ṁ / ρ_l,    (28)

where ṁ is the mass transfer term which accounts for vaporization and condensation. Here, the Schnerr-Sauer model [44] is used for this term. The mass transfer term is written as the sum of a condensation term, ṁ_c^{α_l}, and a vaporization term, ṁ_v^{α_l},

ṁ = ṁ_c^{α_l} + ṁ_v^{α_l},    (29)

where ṁ_c^{α_l} and ṁ_v^{α_l} are obtained from

ṁ_c^{α_l} = C_c (3 ρ_l ρ_v / ρ) α_l (1 − α_l) (1/R_B) √(2/(3 ρ_l)) (p − p_v)/√|p − p_v|  for p > p_v,    (30)

ṁ_v^{α_l} = C_v (3 ρ_l ρ_v / ρ) α_l (1 + α_Nuc − α_l) (1/R_B) √(2/(3 ρ_l)) (p − p_v)/√|p − p_v|  for p < p_v.    (31)

In equations (30) and (31), C_c and C_v are set to 1, p_v is the vapor pressure, α_Nuc is the initial volume fraction of nuclei, and R_B is the radius of the nuclei, which is obtained from

R_B = ( (3/(4π n_0)) (1 + α_Nuc − α_l)/α_l )^{1/3}.    (32)

The initial volume fraction of nuclei is calculated from

α_Nuc = (π n_0 d_Nuc³/6) / (1 + π n_0 d_Nuc³/6),    (33)

where the average number of nuclei per cubic meter of liquid volume, n_0, and the initial nuclei diameter, d_Nuc, are assumed to be 10¹² and 10⁻⁵ m, respectively. We remark that by selecting these values, the minimum pressure in the simulations becomes very close to the vapor pressure, which mimics the equilibrium assumption made in barotropic cavitation models.
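A sketch of the Schnerr-Sauer source terms as reconstructed above; the limiters and exact prefactors in the released OpenFOAM solver may differ in detail:

```python
import numpy as np

def schnerr_sauer_mdot(p, alpha_l, rho_l=998.0, rho_v=0.02, p_v=2340.0,
                       n0=1e12, d_nuc=1e-5, Cc=1.0, Cv=1.0):
    """Schnerr-Sauer condensation/vaporization rates, eqs. (29)-(33)
    (our reconstruction of the common form; returns rates in kg/(m^3 s))."""
    nuc = np.pi * n0 * d_nuc**3 / 6.0
    alpha_nuc = nuc / (1.0 + nuc)                               # eq. (33)
    a = np.clip(alpha_l, 1e-6, 1.0 - 1e-6)
    R_b = (3.0 * (1.0 + alpha_nuc - a)
           / (4.0 * np.pi * n0 * a))**(1.0 / 3.0)               # eq. (32)
    rho = a * rho_l + (1.0 - a) * rho_v
    coeff = (3.0 * rho_l * rho_v / rho) / R_b \
            * np.sqrt(2.0 / (3.0 * rho_l * (abs(p - p_v) + 1e-9)))
    mdot_c = Cc * a * (1.0 - a) * coeff * max(p - p_v, 0.0)           # eq. (30)
    mdot_v = Cv * a * (1.0 + alpha_nuc - a) * coeff * min(p - p_v, 0.0)  # eq. (31)
    return mdot_c, mdot_v
```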
In order to discretize the convective and diffusion terms in the momentum equations, a TVD limited linear interpolation scheme and a standard linear interpolation are used, respectively. A first-order upwind scheme is used to discretize the convective term in the liquid fraction equation, and the temporal terms are discretized using a second-order implicit scheme.
Test case
In order to validate the method presented in this paper, the cavitating flow in an axisymmetric nozzle is simulated. This configuration reproduces the experiments by Franc et al. [8], in which cavitation erosion was investigated. Fig. 6a shows a schematic view of the flow configuration. The flow enters a converging nozzle, which is connected to a pipe with a diameter of 16 mm. The flow is then deflected by the plate, placed 2.5 mm away from the pipe exit, and discharges through a disk. At the edge of the pipe exit, where the flow experiences a sharp turn, the pressure drops, and a sheet cavity, attached to the upper wall of the disk, forms. Fig. 6b shows the computational domain and mesh topology. The computational domain includes only 1/8 of the geometry, with a symmetry boundary condition used for the side planes. The same approach is used in the numerical study by Gavaises et al. [45]. As in the experiments by Franc et al. [8], the flow rate at the inlet is set to 6.25 l/s, and the pressure at the outlet, p_out, is adjusted so that the cavitation number, σ, is 0.9. This cavitation number is defined as

σ = (p_out − p_v) / (p_in − p_out),    (34)

where p_in is the pressure at the inlet.
To investigate the effect of the mesh resolution on the predicted erosion pattern, three grids have been created. Table 1 describes these grids in the regions where cavitation is expected to occur. In order to check whether these mesh resolutions are adequate for LES, we used the method proposed by Pope [46], where the ratio between the sub-grid scale kinetic energy and the total kinetic energy is examined. For all three simulations, it was found that this ratio is smaller than 0.2 in the cavitating regions, indicating that the resolutions are sufficient for LES, as more than 80% of the total kinetic energy is resolved. In all three mesh resolutions, the averaged non-dimensional wall distance, y⁺, is around 1, while the averaged non-dimensional stream-wise distance in the cavity region, x⁺, varies from 700 to 350. The non-dimensional distance in the tangential direction, z⁺, at the radial location r = 0.015 mm in these mesh resolutions varies between 670 and 335. It should be mentioned that this mesh resolution is not enough to capture the flow details in the boundary layer. However, according to Franc and Michel [47], the cavity dynamics presented in this paper, i.e., an unsteady sheet cavity driven by a re-entrant jet, is mostly inertia driven, and viscous effects, such as the boundary layer, do not play a role in this cavitation dynamics. For all simulations, a fixed time step is used such that the maximum Courant number is around 1.
Results
Here, first a comparison is made of the cavity dynamics captured in the simulations with different mesh resolutions and in the experiment by Gavaises et al. [45]. The hydrodynamic mechanisms of cavitation erosion are identified and discussed in some detail, including the flow features responsible for the erosion near the sheet inception; this has not previously been presented in the literature. The section then continues by comparing the predicted risk of cavitation erosion using the developed method with the experimental erosion pattern by Franc et al. [8], followed by a discussion of the effect of the mesh resolution on the predicted risk of cavitation erosion and the relation between the cavitation dynamics and the predicted risk. Lastly, the effect of the simulation time and of adding a threshold to the erosion risk indicator, to mimic the response of the material to aggressive collapse events, is discussed.
The numerical results show that the cavitating flow studied in this paper exhibits an unsteady sheet cavity with a periodic behaviour governed by multiple dominant frequencies. Similar observations have been made in the numerical studies by Peters et al. [20] and Mihatsch et al. [7]. To be able to compare our numerical results with similar numerical and experimental studies in the literature, the dominant frequencies are expressed in terms of the Strouhal number, defined as

S_r = f L_c / U_ref,  with  U_ref = √(2 p_in / ρ_l),    (35)

where f is the frequency, L_c is the maximum length of the sheet cavity, and p_in is the inlet pressure.

Fig. 7. Frequency spectra of the total vapor volume signal for the simulations with different mesh resolutions.

Table 2. Mean and standard deviation of the total vapor volume in the simulations with different mesh resolutions (columns: Simulation, 〈V_v〉, σ_Vv).

Fig. 7 shows the frequency spectra in terms of the Strouhal number for the simulations with the three mesh resolutions, obtained by taking the Fast Fourier Transform (FFT) of the history of the total vapor content in the entire domain. In all three simulations, two dominant frequencies can be seen. The high frequency (S_{r,h}) is related to the periodic shedding of the cloud cavity due to the re-entrant jet. This frequency corresponds to S_{r,h} = 0.27, which agrees well with the numerical study by Mihatsch et al. [7] and the reported Strouhal number corresponding to an unstable sheet cavity (0.25-0.35 according to Franc and Michel [47]). The harmonic of this frequency can also be seen in the frequency spectra, corresponding to 2S_{r,h} = 0.54. The frequency spectra also show that there exists a low dominant frequency (S_{r,l}) in all of the simulations. In contrast to the high dominant frequency, the low frequency, which corresponds to S_{r,l} = 0.05-0.07, depends on the mesh resolution. A similar mesh-dependent low dominant frequency has been observed by Mihatsch et al. [7], although they identified a slightly different range for this frequency (S_{r,l} = 0.07-0.1). It should be noted that the simulations by Mihatsch et al. [7] were obtained using a compressible inviscid solver, while in the present study the simulations are viscous and based on an incompressible approach. Both of these differences can explain the discrepancy between the range of the low dominant frequency in our study and in the study by Mihatsch et al. [7]. Table 2 presents the mean and standard deviation of the total vapor content, denoted, respectively, by 〈V_v〉 and σ_Vv. The mean value of the total vapor content changes non-monotonically with the mesh resolution, while the standard deviation decreases with increasing mesh resolution. This decrease is due to less cycle-to-cycle variation in the simulations with higher resolutions, which affects the sensitivity of the predicted risk to the simulation time, as will be shown later. Fig. 8 shows the cavity dynamics in one cycle of high-frequency shedding in the present numerical simulation and in the experiment by Gavaises et al. [45]. This cavity dynamics can be characterized by the following five steps.

1) t_1 → t_2: A large-scale cloud cavity is formed as the sheet cavity is pinched off from the upper wall (Fig. 8a). While this cloud cavity is transported downstream by the bulk flow, a new growing sheet cavity is formed on the upper wall (Fig. 8b).

2) t_2 → t_3: While the new sheet cavity is growing, cavitating structures in the shed cloud cavity collapse as they are transported downstream. Due to these collapses, the cloud cavity has become smaller in Fig. 8c.

3) t_3 → t_4: The growing sheet cavity reaches its maximum length (Fig. 8d), while all of the vapor content in the cloud cavity transforms into liquid (between Fig. 8c and d).

4) t_4 → t_5: An upstream-moving liquid flow is formed at the end of the sheet cavity (between Fig. 8d and e). This liquid flow, often called the re-entrant flow, interacts with the sheet cavity interface as it travels upstream. This interaction disturbs the interface of the sheet cavity (Fig. 8e).

5) t_5 → t_6: The re-entrant flow pinches off a large-scale cloud cavity at the sharp turn, and a new growing sheet cavity forms on the upper wall (between Fig. 8e and f).
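The Strouhal spectra of Fig. 7 can be reproduced from the vapor-volume history with a few lines; here U_ref stands for whichever reference velocity is used in the definition above:

```python
import numpy as np

def strouhal_spectrum(Vv, dt, L_c, U_ref):
    """FFT of the total-vapor-volume history; returns (S_r, amplitude)
    with frequencies expressed as Strouhal numbers S_r = f * L_c / U_ref."""
    sig = np.asarray(Vv) - np.mean(Vv)          # remove the mean level
    amp = np.abs(np.fft.rfft(sig))
    freq = np.fft.rfftfreq(len(sig), d=dt)
    return freq * L_c / U_ref, amp
```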
Note that the shedding mechanism described above has been extensively observed and studied in cavitating flows over hydrofoils [48][49][50]. However, in most of these studies the foil angle of attack is not high enough to create a separation zone at the inception region of the sheet cavity, while in the present flow configuration this separation zone occurs due to a sharp turn at the inception region, as will be shown below.
The comparison between the cavity dynamics in the present simulations and the experiment in Fig. 8 shows that the large-scale dynamics of the cavitating structures is qualitatively captured on all tested meshes, although it can be noted that the CM and MM simulations are not sufficiently resolved to correctly represent all of the physics in the flow. The figure shows that the maximum length of the sheet cavity in the CM and MM simulations is larger than in the FM simulation. As this sheet cavity transforms into the cloud cavity in step t_1→t_2, the resultant cloud cavity becomes larger in the CM and MM simulations. This larger cloud cavity can then travel further downstream, leading to collapse events at larger radial distances from the pipe exit. These differences in the size of the sheet and cloud cavities and in the location of the cloud cavity collapse explain the slight mesh dependency of the predicted risk of cavitation erosion obtained in the simulations with different mesh resolutions, as detailed below.
As can be seen in the zoom-in views of Fig. 8, the interface of the sheet cavity is disturbed at the inception point near the pipe exit. An example of these disturbances is marked in Fig. 8d. The comparison between the simulations also shows that these disturbances become more significant as the mesh resolution decreases. In order to explain the cause of these disturbances and their mesh dependency, Fig. 9 presents the instantaneous and averaged radial velocity on tangential planes where the disturbances occur. In this figure, the instantaneous and averaged interfaces of the cavitating regions are also shown by white lines. The distribution of the instantaneous radial velocity (Fig. 9a) shows that when the flow exits the pipe, it separates from the upper wall due to the sharp turn. This separation zone has a high value of negative velocity (marked by A) and can be seen in all simulations; note, though, that some of the physics responsible for this separation in the CM and MM simulations should be considered only qualitatively, as the mesh resolution in these simulations is coarse. Fig. 9a shows that this reverse flow is connected to a liquid reverse flow originating from the closure line of the sheet cavity (marked by B). The reverse flow marked by B supplies packets of liquid at the downstream end of the separation zone, and these packets can travel even further upstream due to the reverse flow in the separation zone. If the upstream-moving liquid packets have enough momentum to reach the pipe exit, they hit the interface of the sheet cavity there. This collision is responsible for the disturbance of the sheet cavity interface seen in the zoom views in Fig. 8. Fig. 9b shows the averaged radial velocity on the same tangential plane as Fig. 9a. A region with negative averaged radial velocity can be seen near the wall, which indicates the presence of reverse flow in this region. The reverse flow is stronger and also thicker near the pipe exit due to the separation zone. The comparison between the zoom-in views for the different simulations in Fig. 9b shows that the negative velocity of the reverse flow in the separation zone decreases as the mesh resolution increases (lighter blue in the zoom-in views at higher resolutions). As mentioned above, the combined effect of this reverse flow in the separation zone and the reverse flow at the closure line of the sheet cavity leads to the disturbance of the sheet cavity near the pipe exit. It is therefore expected that the disturbances are more significant in the CM simulation than in the MM and FM simulations.

Fig. 10 compares the experimental erosion pattern by Franc et al. [8] with the areas with high risk of cavitation erosion identified by the developed method. In the experiment, erosion can be seen in three main regions: a region on the lower wall with radial extension 19 mm < r < 32 mm, a region on the upper wall with radial extension 17 mm < r < 27 mm, and a region on the upper wall between the pipe exit and r = 11 mm. These regions are shown in Fig. 10b as positions 1 to 3, respectively, and their radial extensions are marked by white lines in the numerical results. The comparison between the numerical results in Fig. 10 shows that, regardless of the mesh resolution, the presented method predicts areas with high risk of cavitation erosion that are qualitatively comparable with the experimental erosion pattern. It is also seen that the change in the mesh resolution slightly affects the radial extension of positions 1 and 2 as well as the location of position 2.
With increasing mesh resolution, the radial extension of positions 1 and 2 decreases, and the location of position 2 shifts slightly toward lower radial locations. These differences are due to the larger sheet and cloud cavities in the CM and MM simulations, as shown in Fig. 8. A slightly higher mesh dependency can be seen in position 3, where the predicted area with high erosion risk shrinks progressively with increasing mesh resolution.
Fig. 11a shows the risk of cavitation erosion associated with the collapse of cavities with different stand-off ratios. It can be seen that the contribution of collapsing cavities with a stand-off ratio equal to or larger than 3 to the predicted risk of cavitation erosion is insignificant. As mentioned in section 2, the kinetic energy in the liquid surrounding these collapsing cavities is assumed to be converted to acoustic energy in shock-waves. As these collapse events are far away from the surface, and the acoustic energy of the shock-wave decays with distance, the energy absorbed by the surface due to the impacts of these shock-waves is expected to be small. Fig. 11b shows the contribution of the two mechanisms of cavitation erosion, micro-jets and shock-waves, to the predicted risk. It can be seen that, although the contribution of shock-waves is smaller than that of micro-jets, the two contributions are of the same order, which highlights the importance of considering both mechanisms in numerical methods for predicting the risk of cavitation erosion. We remark that our findings are in line with the description of cavitation erosion by Dular and Coutier-Delgosha [51]. In that description, it is assumed that the contribution of shock-waves due to the collapse of cavities away from the surface is small, which corresponds well with the results presented in Fig. 11a. Further, their description assumes that these shock-waves can trigger the collapse of bubbles near the surface, which can cause erosion through micro-jets. In the present study, this acoustic interaction is considered for cavities with 0 < γ < 3.0: a number of bubbles in those cloud cavities which are away from the surface are assumed to produce shock-waves, and a portion of the acoustic energy carried by these shock-waves is then assumed to be transferred to bubbles near the surface, which in turn can cause erosion through the micro-jet mechanism.
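To make this energy bookkeeping concrete, the toy sketch below routes the kinetic energy of recorded collapse events into the two mechanisms. The γ ≥ 3 cutoff follows the text, while the jet/shock split fraction and the geometric acoustic decay are illustrative assumptions, not the paper's calibrated model.

```python
def route_collapse_energy(events, gamma_cutoff=3.0, jet_fraction=0.7, r0=1e-3):
    """events: iterable of (gamma, E_kin, r) tuples, with stand-off ratio gamma,
    kinetic energy in the surrounding liquid E_kin [J], and distance r [m]
    between the collapse and the wall element. Returns the accumulated
    (micro-jet, shock-wave) energies reaching the surface."""
    E_jet = E_shock = 0.0
    for gamma, E_kin, r in events:
        if gamma >= gamma_cutoff:
            # Far collapse: all kinetic energy radiates as a shock-wave whose
            # contribution decays geometrically before reaching the wall.
            E_shock += E_kin / (1.0 + (r / r0) ** 2)
        else:
            # Near collapse: part drives a micro-jet, the rest a shock-wave.
            E_jet += jet_fraction * E_kin
            E_shock += (1.0 - jet_fraction) * E_kin
    return E_jet, E_shock

# Two example events: one near the wall (gamma = 0.5), one far (gamma = 4).
print(route_collapse_energy([(0.5, 2e-6, 2e-4), (4.0, 1e-6, 5e-3)]))
```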
As described in section 2.1, the proposed numerical method requires splitting the liquid volume around a collapsing cavity into near-field and far-field volumes. The near-field volume is the liquid inside a sphere or spherical cap with radius R_s,sc. For the results presented so far in this section, this radius is taken to be five times larger than the radius of the sphere with the same volume as the collapsing cavity. In order to investigate the effect of this choice on the predicted risk of cavitation erosion, the results from two extra simulations are presented here. In these simulations, which are performed on the fine mesh, the radius R_s,sc is taken to be 3 and 7 times the cloud volume-equivalent radius. Fig. 12a shows the radial distribution of the predicted erosion risk on the lower wall using the three values of R_s,sc. It can be seen that the risk predicted using R_s,sc = 3 is slightly shifted toward larger radial positions compared with the risk predicted using the other two values. Further, the comparison shows that the assessment is the same in the simulations with R_s,sc = 5 and R_s,sc = 7: the maximum risk of cavitation erosion occurs at the same radial location, and the radial extension of the predicted erosion risk is almost the same. Fig. 12b presents the areas with high risk of cavitation erosion on the lower and upper walls using the three values of R_s,sc. The comparison in this figure shows that the predicted high-risk areas are qualitatively the same in the simulations with the different values of R_s,sc. From this comparison, it can be concluded that the areas with high risk of cavitation erosion predicted by the developed method are not sensitive to the choice of R_s,sc.
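Spelled out, the near-field radius used above is simply a fixed multiple k of the volume-equivalent radius of the collapsing cavity, with k = 5 for the main results and k = 3 and 7 in the sensitivity runs just described; only the symbol R_eq is introduced here for convenience:

$$R_{s,sc} = k\,R_{eq} = k\left(\frac{3V_c}{4\pi}\right)^{1/3}, \qquad k \in \{3,\,5,\,7\},$$

where V_c is the volume of the collapsing cavity.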
Fig. 13 shows the steps in the cavity dynamics, together with the associated risk of cavitation erosion, for one shedding cycle in the FM simulation. It can be seen that the risk of cavitation erosion in positions 1 and 2, shown in Fig. 10, is not restricted to the cavity dynamics of a specific step. In step t_1→t_2 (Fig. 13a → Fig. 13b), as the detached cloud rolls downstream, aggressive collapse events occur at its upstream and downstream ends, which induce a high risk of cavitation erosion on both walls (Fig. 13g). More aggressive collapse events can be seen in step t_2→t_3 (Fig. 13b → Fig. 13c), when the cloud cavity travels further downstream and starts to shrink. In step t_3→t_4 (Fig. 13c → Fig. 13d), the traveling cloud suddenly collapses, leading to a high risk of cavitation erosion on both walls. The comparison between the erosion risk in this step and in the other steps indicates that this large-scale collapse of the cloud cavity is associated with the highest risk of cavitation erosion among the collapse events in the cycle. During steps t_1→t_4, a new sheet cavity appears and grows to its maximum length, which can be seen in Fig. 13d. In step t_4→t_5, a re-entrant jet forms at the downstream end of the sheet cavity. According to Arabnejad et al. [42], the interaction between this re-entrant jet and the downstream end of the sheet cavity leads to the detachment of cavity structures. Similar detachment of cavity structures can be seen in Fig. 13e. These structures can collapse due to the high pressure around the closure line and produce a high risk of cavitation erosion, as can be seen in Fig. 13j. In step t_5→t_6 (Fig. 13e → Fig. 13f), the re-entrant jet reaches the region near the pipe exit and pinches off a cloud cavity from the upper wall. Fig. 13k shows that in this step, collapse events occur underneath and on top of the detached cavity, which can cause a high risk of cavitation erosion on both walls.
The steps in the cavity dynamics, and the associated risk of cavitation erosion in position 3 of Fig. 10, are presented in the zoom-in views in Fig. 13. Comparison between the locations of high erosion risk and the cavity dynamics indicates that the high risk of cavitation erosion occurs mostly in the region where there is a disturbance in the interface of the sheet cavity near the pipe exit. At the location of these disturbances, as shown in Fig. 9, there is a liquid reverse flow augmented by the separation zone. When this liquid reverse flow reaches the pipe exit, it hits the flow exiting the pipe. This collision can increase the pressure locally, leading to aggressive collapse events at the pipe exit which are responsible for the high risk of cavitation erosion in position 3.
Fig. 14 shows the tangentially averaged distribution of the erosion risk indicator, EI, on the lower wall in the simulations with different mesh resolutions. These distributions are obtained using different simulation times in order to investigate the effect of this parameter. It can be seen that the distribution of EI in all three simulations does not change significantly if the simulation time is longer than 20 shedding periods, T_s. However, the sensitivity of this distribution to simulation times shorter than 20 T_s is not the same in the simulations with different mesh resolutions. This sensitivity is higher in the CM simulation than in the MM and FM simulations, which can be attributed to the higher cycle-to-cycle variation in the CM simulation, as shown in Table 2. Comparison between the converged distributions of EI (t ≥ 20 T_s) in all simulations also shows that the maximum value of EI is not mesh dependent, while the radial location of this maximum and the extent of the erosion risk depend slightly on the mesh resolution. As discussed earlier, this mesh dependency is due to the different dynamics of the sheet and cloud cavities in the simulations with different mesh resolutions. Fig. 14d compares the converged distribution of EI in the FM simulation (green line) with the erosion depth profile in the experiment by Franc et al. [8]. It can be seen that the radial extension of the EI distribution is considerably larger than that of the erosion depth profile. This discrepancy is due to the definition of the erosion indicator in equation (21), which does not consider the response of the material to the absorbed energy. Due to this deficiency, the energy transferred to the surface elements by all of the collapse events contributes to the risk of cavitation erosion, while the amount of transferred energy for some collapse events might not be high enough to cause erosion. To consider only the effect of highly aggressive events, one can modify the definition of the erosion indicator as

$$EI_{mod,j} = \begin{cases} \dfrac{E_{mat.,j,i}}{A_j}, & \dfrac{E_{mat.,j,i}}{A_j} > th. \\ 0, & \dfrac{E_{mat.,j,i}}{A_j} \le th. \end{cases} \qquad (36)$$

where th. is the threshold above which the absorbed energy per area is high enough to cause erosion. Obtaining this threshold as a function of the material properties is a subject of future work. However, to show that adding a threshold to the definition of the erosion indicator can improve the results, Fig. 14d presents the distribution of the modified erosion indicator for different values of the threshold. The distributions are normalized by their maximum values in order to show them in one figure.
It can be seen that by increasing the threshold, the extension of the estimated risk of erosion becomes closer to the experimental erosion depth profile.
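A minimal sketch of applying equation (36) to per-element absorbed energies is shown below; the element energies, areas, and thresholds are hypothetical numbers chosen only to show the effect of raising the threshold, mirroring the normalized curves of Fig. 14d.

```python
import numpy as np

def erosion_indicator(E_mat, A, th):
    """E_mat: energy absorbed by each surface element [J];
    A: element areas [m^2]; th: threshold energy per area [J/m^2]."""
    e = E_mat / A                      # absorbed energy per unit area
    return np.where(e > th, e, 0.0)    # zero out sub-threshold elements

E_mat = np.array([1e-4, 5e-3, 2e-2])   # hypothetical per-element energies
A = np.full(3, 1e-6)                   # hypothetical element areas
for th in (0.0, 1e2, 1e4):
    ei = erosion_indicator(E_mat, A, th)
    # Normalize by the maximum, as done for the curves in Fig. 14d.
    print(th, ei / ei.max() if ei.max() > 0 else ei)
```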
Conclusions
This paper presents a new method to assess the risk of cavitation erosion using incompressible simulations of cavitating flows. The method is based on the energy balance between the cavitating structures and cavitation erosion suggested by Hammitt [23]. In contrast to previous methodologies [24,27,34], in which the potential energy of collapsing cavities has been used for erosion assessment, the presented method uses the kinetic energy in the surrounding liquid to estimate the risk of cavitation erosion. The developed method then estimates how this kinetic energy is transferred to the surface through two well-known mechanisms of cavitation erosion, shock-waves and micro-jets.
In order to validate the method, the cavitating flow in an axisymmetric stagnation nozzle is simulated using three mesh resolutions, and the areas with predicted high risk of cavitation erosion are compared with the erosion pattern in the experiment by Franc et al. [8]. It is shown that, regardless of the mesh resolution, the predicted areas with high erosion risk are in good qualitative agreement with the experiment. The agreement with the experimental results improves with mesh resolution, due to an improved prediction of the cavity extent and dynamics on the finer mesh.
Using the proposed method, the relationship between the cavity dynamics and the risk of cavitation erosion at the inception region of the sheet cavity is investigated. It is shown that the high risk of cavitation erosion there is closely related to the separation zone at the inception region. Due to this separation zone, the reverse liquid flow underneath the sheet cavity gains momentum and hits the flow exiting the pipe, which increases the pressure locally. This high pressure can trigger collapse events with a high risk of cavitation erosion near the inception region of the sheet cavity.
The results presented in this paper show that the proposed method is able to identify areas with a high risk of cavitation erosion in a simple geometry such as an axisymmetric nozzle. In order to examine this capability in geometries relevant to marine applications, the proposed method will, as future work, be applied to the cavitating flow in a commercial water-jet pump, and the results will be compared with experimental erosion assessments.
Fig. 2. Volume split of the surrounding liquid of a collapsing cavity.
Fig. 4. Simulation of collapsing spherical mixture cloud cavities: (a) simulation configuration; (b-f) the ratio between the kinetic energy in the surrounding liquid and the initial potential energy, and the ratio between the cavity volume and the initial volume of the cavity, as functions of time during the collapse.
Fig. 5. A schematic view of the overlap table.
Fig. 6. Configuration for an axisymmetric nozzle stagnation flow: (a) schematic view of the configuration and the expected cavitation pattern seen in the experiment; (b) computational domain and mesh topology.
Fig. 8. Cavitation pattern in one cycle corresponding to the high dominant frequency in the numerical simulation and the experiment by Gavaises et al. [45] (the solid red lines in the simulation and dashed white lines in the experiment represent r = 25 mm; T_s and t are, respectively, the high-frequency shedding period and the reference time; the cavitation pattern in the simulation is shown by iso-surfaces of α_l = 0.9).
Fig. 10. Numerical and experimental erosion patterns: (a) the predicted areas with high risk of cavitation erosion in the simulations with different mesh resolutions (white lines represent the location of the eroded areas in the experiment); (b) the erosion pattern in the experiment by Franc et al. [8].
Fig. 11. Contribution of different mechanisms to the predicted risk of cavitation erosion in the FM simulation: (a) contribution of collapsing cavities with different stand-off distances; (b) contribution of micro-jets and shock-waves to the predicted risk of cavitation erosion.
Fig. 12. Effect of the choice of the radius R_s,sc on the predicted risk of cavitation erosion: (a) radial distribution of the erosion indicator on the lower wall; (b) predicted areas with high risk of cavitation erosion on the lower and upper walls.
Fig. 13. Hydrodynamic mechanism of cavitation erosion risk: (a-f) steps in the cavity dynamics in one cycle; (g-k) estimated risk of erosion during the steps in the cavity dynamics (the solid red lines represent r = 25 mm and the white lines represent the eroded region in the experiment).
Fig. 14. Tangentially averaged distribution of the erosion indicator on the lower wall, obtained using different simulation times and mesh resolutions, together with the distribution of the erosion depth in the experiment by Franc et al. [8].
Table 1. Description of the mesh resolutions used in this paper. | 2020-11-05T09:11:24.123Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "95bb1d3a40705599f1f39a034c1a5be802673a84",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.wear.2020.203529",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "0ea27fff27dc59e2901018d53bf57436c81cadfd",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
269829532 | pes2o/s2orc | v3-fos-license | Organizational Citizenship Behavior (OCB): How Leadership and Competency Inspire Performance
Its goal is to assess the impact of Leadership and Competency on Performance through Organizational Citizenship Behavior (OCB). This research is a case study of the Boalemo Regency Regional Government. The population in this study comprised all employees of the Boalemo Regency Regional Government, with a total population of 1,569. The sample was then reduced using the Slovin formula to 236 respondents and analyzed using Structural Equation Modeling (SEM). The research results show that: 1) Leadership has a positive and significant effect on Organizational Citizenship Behavior (OCB); 2) Competence has a positive but not significant effect on Organizational Citizenship Behavior (OCB); 3) Leadership has a positive and significant effect on performance; 4) Competency has a negative and insignificant effect on performance; 5) Organizational Citizenship Behavior (OCB) has a positive and significant effect on Performance; 6) Leadership through Organizational Citizenship Behavior (OCB) has a positive and significant effect on Performance; 7) Competence through Organizational Citizenship Behavior (OCB) has a negative and insignificant effect on performance.
INTRODUCTION
The greatest strategy to ensure survival and growth in the future is to make wise and effective use of human resources, as human resources are an organization's primary component. Humans participate actively in all activities, both inside and outside the organization, as planners and performers. Stated differently, an organization's most important resource is its people, who are its greatest strength. State Civil Service Employees, also known as ASN Employees, are government workers and civil servants with a work agreement who are appointed by civil service development officials, given responsibilities in government positions or other state tasks, and paid in accordance with statutory guidelines. The management of civil servants, with the aim of producing civil servants who are ethical and professional, free from political influence, and free from nepotism, corruption, and collusion, is known as civil service management.
Governance may be evaluated by looking at performance outcomes. Judging by the work outcomes attained, well-maintained and authoritative infrastructure leads to good government performance. The observations made indicate that the Boalemo Regency Government's organizational performance is generally subpar. This can be attributed to a number of issues: a mismatch between the skills required for the job and the abilities of the employees, a lack of a cooperative culture between leaders and employees, a lackluster organizational capacity to adapt to change, inadequate infrastructure availability, and a failure to build trust in the wake of scientific and technological advancements. Therefore, in order to make the most of the available resources to fulfill the organization's vision and mission and to serve society, more thorough and sustained intervention is required to raise the quality and productivity of employee work.
Leadership is the first factor that affects worker performance. According to Fiedler (2015), leadership is essentially a pattern of connections between people who use their power and influence to persuade others to cooperate in order to accomplish common objectives. The second factor, according to Jhurgen's potential theory (2018:44), is competence. It denotes that an individual makes every effort to manage and demonstrate the potential that they possess. This potential may manifest itself as a person's intelligence, skill, and behavioral, social, and other abilities. An individual's potential in the workplace increases with their level of competence. The third factor is Organizational Citizenship Behavior (OCB), which Robbins and Judge (in Cahyono, 2015) define as an employee's voluntary action that contributes to the smooth operation of the company but does not fall within their official job duties.
The present study introduces fresh ideas into the field in three respects: first, the relationship between Organizational Citizenship Behavior (OCB) variables and organizational performance is rarely examined by researchers; second, the theory underlying each variable is rarely employed as a measuring tool; and third, the research period differs from that of previous studies.
LITERATURE REVIEW
According to Davis (2015), a great leader possesses qualities including intellect, initiative, openness, enthusiasm, honesty, compassion, and self-confidence. This is known as the leadership trait hypothesis. According to Davis (2015:20), there are four general characteristics of leaders that affect organizational leadership: 1) intelligence of character; 2) maturity in maintaining social relationships; 3) drive for achievement; and 4) the capacity to forge humanitarian relationships. The leadership traits that emerge from this trait theory include construction, consultation, delegation, and involvement in the role of leaders in organizations.
According to Jhurgen's potential theory (2018:44), an individual makes every effort to manage and demonstrate the potential that they possess. This potential may manifest itself as a person's intelligence, skill, and behavioral, social, and other abilities. An individual's potential in the workplace increases with their level of competence.
In their book Organizational Behavior, Robbins and Judge (2008) describe OCB as voluntary activity that an employee chooses to engage in even though it is outside their official job duties and that nevertheless contributes to the smooth operation of the company.
According to the Results Theory (Stevant, 2018:84), a person has demonstrated their performance when they are able to accomplish an achievement. Performance, or achievement, is the outcome of an employee's efforts to accomplish a goal. Achieving work results involves comparing one's work results with preset criteria. If the results of the work performed satisfy or even surpass the requirements, the performance can be said to have produced good results.
Conceptual Framework and Hypothesis

The Relationship of Leadership to Organizational Citizenship Behavior (OCB)
Optimal leadership will undoubtedly benefit Organizational Citizenship Behavior (OCB) within an organization, and the accomplishment of organizational goals will follow naturally from this foundation. Similarly, the findings of Iis Kartini's (2017) research demonstrate that leadership significantly and favorably influences Organizational Citizenship Behavior (OCB).
The Relationship between Competence and Organizational Citizenship Behavior (OCB)
Employee competencies will undoubtedly have a beneficial impact on Organizational Citizenship Behavior (OCB) and boost productivity within the workplace. This is consistent with the study by Astuti, Rohmi Irma, et al. (2023), whose findings indicate that competency positively and significantly influences Organizational Citizenship Behavior (OCB).
The Relationship of Leadership to Performance
Optimal leadership will undoubtedly benefit employee performance in an organization. This is consistent with the study done in 2017 by Untung Rahardja et al., which found that performance is positively and significantly affected by leadership.
Relationship between Competency and Performance
Each employee's skills undoubtedly have a significant impact on their performance; the more competent a worker is, the better their performance will be. This is consistent with the study by Siswoyo Haroyono, et al. (2020), whose findings demonstrate that competency has a noteworthy and favorable impact on performance.
The Relationship between Organizational Citizenship Behavior (OCB) and Performance
Organizational Citizenship Behavior (OCB) is one of the factors that promotes success in an organization. The findings of Ruth Damayanti et al.'s research (2020), which demonstrate that Organizational Citizenship Behavior (OCB) influences performance, corroborate this.
The Relationship of Leadership through Organizational Citizenship Behavior (OCB) to Performance
According to the study by Sherly Rosalina Tanoto et al. (2023), there is a connection between leadership, via Organizational Citizenship Behavior (OCB), and performance in companies. The study's findings demonstrate the substantial impact that leadership through Organizational Citizenship Behavior (OCB) has on performance.
The Relationship between Competence through Organizational Citizenship Behavior (OCB) and Performance
There is a correlation between competency, through Organizational Citizenship Behavior (OCB), and performance in organizations. This is consistent with the research by Putu Ayu Rusmayanti, et al. (2022), whose findings demonstrate that performance is positively and significantly affected by competency through OCB.
Hypothesis:
H1: Leadership has a positive and significant effect on Organizational Citizenship Behavior (OCB).
H2: Competence has a positive and significant effect on Organizational Citizenship Behavior (OCB).
H3: Leadership has a positive and significant effect on performance.
H4: Competence has a positive and significant effect on performance.
H5: Organizational Citizenship Behavior (OCB) has a positive and significant effect on performance.
H6: Leadership through Organizational Citizenship Behavior (OCB) has a positive and significant effect on performance.
H7: Competence through Organizational Citizenship Behavior (OCB) has a positive and significant effect on performance.
METHODOLOGY
This study employs an explanatory (causal) research design, clarifying the relationships among the study variables through hypothesis testing (Ghazali, 2004). This type of study was selected because the goals include elucidating the connections and effects among the variables measured by the questionnaires used as the main instruments for gathering data. The study adopts the positivist, or postpositivist, paradigm.
A total of 1,569 workers of the Boalemo Regency Government made up the study's population. Sugiyono (2013) defines a population as a general region made up of objects or subjects with certain quantities and characteristics chosen by researchers to be investigated and from which conclusions are drawn. The sample size was determined using the Slovin formula (Umar, 1999), n = N / (1 + Ne²), in order to keep the sample from being excessively large while remaining representative; with N = 1,569 this yields n = 236, which is consistent with a margin of error e of about 6%.

RESULTS AND DISCUSSION

Of the entire hypothesized five-path model, two paths were not significant. The interpretation of Table 1 can be explained as follows:
1. The influence of Leadership on Organizational Citizenship Behavior (OCB) in the Boalemo Regency Regional Government is positive, with a significance value of 0.000, less than 0.05 (α = 5%). It can therefore be said that Leadership has a significant effect on OCB, with a magnitude of 0.792 and a positive direction of influence. The first research hypothesis, that leadership affects the Organizational Citizenship Behavior (OCB) of the Regional Government of Boalemo Regency, is thus proven true.
2. The influence of Competency on Organizational Citizenship Behavior (OCB) in the Regional Government of Boalemo Regency is positive, with a significance value of 0.676, greater than 0.05 (α = 5%). It can therefore be concluded that Competency has no significant influence on OCB, with a magnitude of 0.062 and a positive direction of influence. The second research hypothesis, according to which competence affects the Organizational Citizenship Behavior (OCB) of the Regional Government of Boalemo Regency, is not proven true.
3. The influence of Leadership on Performance in the Boalemo Regency Regional Government is positive, with a significance value of 0.024, less than 0.05 (α = 5%). It can therefore be concluded that Leadership has a significant effect on Performance, with a magnitude of 0.374 and a positive direction of influence. The third research hypothesis, which holds that the performance of the Regional Government of Boalemo Regency is influenced by leadership, is proven true.
4. The influence of Competency on Performance in the Regional Government of Boalemo Regency is not significant, with a significance value of 0.751, greater than 0.05 (α = 5%); the magnitude of the influence obtained is -0.040, with a negative direction of influence. The fourth research hypothesis, which states that Competency has a positive and significant effect on Performance in the Regional Government of Boalemo Regency, is not proven true.
5. Organizational Citizenship Behavior (OCB) has a positive influence on Performance in the Regional Government of Boalemo Regency, with a significance value of 0.001, less than 0.05 (α = 5%). As a result, it can be said that OCB has a significant effect on Performance, with a magnitude of 0.573 and a positive direction of influence. The fifth research hypothesis, that Organizational Citizenship Behavior (OCB) has a substantial impact on the Boalemo Regency Regional Government's performance, is proven true.
6. The influence of Leadership on Organizational Citizenship Behavior (OCB) is positive, with a significance value (0.000) smaller than 0.05 (α = 5%), so it can be concluded that Leadership significantly influences Organizational Citizenship Behavior (OCB).
Similarly, the influence of Organizational Citizenship Behavior (OCB) on Performance is positive, with a significance value (0.000) smaller than 0.05 (α = 5%), so it can be concluded that OCB significantly affects Performance. This indicates that the OCB variable can mediate the influence of the Leadership variable on Performance in the Boalemo Regency Regional Government; it may therefore be concluded that Leadership affects Performance through Organizational Citizenship Behavior (OCB).
7. The influence of Competency on Organizational Citizenship Behavior (OCB) is positive, with a significance value (0.676) greater than 0.05 (α = 5%), so it can be concluded that Competency has an insignificant effect on OCB. Likewise, the influence of Organizational Citizenship Behavior (OCB) on Performance is positive, with a significance value (0.000) smaller than 0.05 (α = 5%), so OCB has a significant effect on Performance. However, the indirect relationship between Competency and Performance through Organizational Citizenship Behavior (OCB), with a significance value of 0.678, is negative and not significant. In summary: the direct relationship between Competency and OCB is positive and not significant; the direct relationship between Competency and Performance is negative and not significant; the relationship between OCB and Performance is positive and significant; yet the indirect relationship between Competency and Performance through OCB is negative and not significant.
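The mediation logic behind H6 can be illustrated with the simple product-of-coefficients shortcut below. The two path values come from the text (Leadership → OCB = 0.792, OCB → Performance = 0.573), but the standard errors, and hence the Sobel z, are hypothetical, since the paper reports significance from its SEM output rather than from this approximation.

```python
import math

def indirect_effect(a, b, se_a, se_b):
    """Indirect effect a*b with its Sobel-test z-value."""
    ab = a * b
    se_ab = math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)  # Sobel approximation
    return ab, ab / se_ab

# Path coefficients from the text; standard errors are invented placeholders.
ab, z = indirect_effect(0.792, 0.573, se_a=0.08, se_b=0.15)
print(f"indirect effect = {ab:.3f}, Sobel z = {z:.2f}")
```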
CONCLUSIONS AND RECOMMENDATIONS
The study's findings indicate that: 1) Leadership significantly and favorably affects Organizational Citizenship Behavior (OCB); 2) Competency positively but not significantly affects OCB; 3) Leadership significantly and favorably affects Performance; 4) Competency negatively and not significantly affects Performance; 5) Organizational Citizenship Behavior (OCB) positively and significantly affects Performance; 6) Leadership through Organizational Citizenship Behavior (OCB) positively and significantly affects Performance; and 7) Competency through OCB negatively and insignificantly affects Performance.
ADVANCED RESEARCH
This research still has limitations, so further research on the topic "Organizational Citizenship Behavior (OCB): How Leadership and Competency Inspire Performance" is needed to refine this study and broaden readers' insight.
Table 1. Hypothesis Testing and Path Coefficient Values | 2024-05-18T15:55:10.546Z | 2024-04-15T00:00:00.000 | {
"year": 2024,
"sha1": "caee4ad850b50d6c4f7e420d33eb2697002b7ffb",
"oa_license": "CCBY",
"oa_url": "https://journal.formosapublisher.org/index.php/ajma/article/download/8416/8689",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "d2367015fe1b23eb087e01703a047ffc26aa09f5",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
249946419 | pes2o/s2orc | v3-fos-license | Influence of Titanium Surface Treatments on Viability of Periodontal Fibroblasts Grown in an Osteogenic Culture Medium
Background: The integrity of the protective seal provided by the gingiva in direct contact with the implant surface is one of the main factors involved in the prevention of peri-implantitis. Aim: The aim of this study was to assess the viability of periodontal fibroblasts grown in an osteogenic culture medium in contact with titanium surfaces treated either with acid etching alone or with acid etching + anodizing. Materials and Methods: Periodontal fibroblasts grown in an osteogenic culture medium were distributed in a control group, with cells grown in culture bottles, and two experimental groups, with cells grown in contact with titanium disks measuring 6 mm in diameter. The surface of the disks was subjected to acid etching alone (AEG, n = 25) or to acid etching + anodizing (ANG, n = 25), and then evaluated using scanning electron microscopy (SEM). Cell viability was assessed by the [3-(4,5-dimethylthiazol-2yl)-2,5-diphenyl tetrazolium] bromide test on days 1, 2, 3, 7, and 14 of the cell culture. The Mann–Whitney test was used for the statistical analysis (P < 0.05). Results: The SEM assessment revealed that the surface of AEG specimens had micrometric characteristics, whereas the surface of ANG specimens had nanometric characteristics. No significant difference was observed among the groups regarding cell viability at any of the evaluation time points. Conclusion: The titanium surface treatments tested did not affect the viability of periodontal fibroblasts in an osteogenic culture medium.
Introduction
Installation of transcutaneous implants such as dental implants, cochlear hearing devices, and other prostheses can cause infections or various tissue changes due to improper closure of the interface between the implant biomaterial and soft tissue. [1] The skin or gingiva in direct contact with the implant surface provides a protective seal between the peri-implant bone and the external environment, and the integrity of this seal is one of the main factors involved in the prevention of infection. [2] The viability and differentiation of cells in contact with the surface of new implant materials are commonly investigated in vitro to ascertain the potential of these materials to promote cell adhesion. [3,4] Cell viability tests can be used to assess this characteristic on titanium (Ti) surfaces, the main metallic component of dental implants. [5] These tests have shown that gingival and/or periodontal fibroblasts can adhere to the Ti surface of the cervical portion of the implant, or, in some cases, to the surface of the prosthetic components installed on the implant, thereby establishing an interface between the implant and gingival tissue. [6] However, the force of cell adhesion to Ti may not be sufficient to ensure the integrity of this interface, and new Ti alloy compositions and surface modifications have been developed to increase cell adhesion. [7] It has already been established that the physical characteristics of a Ti surface are critical for successful dental implant treatment and for the long-term health of peri-implant tissues. [8] The stromal cells of gingival tissue display very poor attachment to metallic surfaces, and their direct contact with metallic biomaterials may lead to inflammation, infection, and decreased cell viability. [9][10][11] To this end, changes in implant surface roughness, topography, and chemical composition have been investigated to increase the viability of cells in contact with this surface. Another factor that can affect cell viability on the metallic surfaces of implants is the presence of mediators of osteogenic differentiation, observed throughout the osteointegration phase. These mediators can reduce the ability of cells to proliferate by pushing them to differentiate. [12] Thus, the aim of this study was to evaluate the viability of periodontal fibroblasts grown in an osteogenesis-inducing culture medium in contact with Ti disk surfaces treated either with acid etching or anodizing techniques.
Materials and Methods
This study was approved by the research ethics committee of the institution where it was conducted (Approval no. 2017/0774).
Cell culture
Human periodontal ligament fibroblasts were grown in Dulbecco's Modified Eagle Medium (DMEM; Cultilab, Campinas, SP, Brazil) supplemented with 10% fetal bovine serum (Cultilab) and 1% antibiotic-antimycotic solution (Sigma; St. Louis, MO, USA). The cell culture bottles were kept in an incubator at 37 °C under a humid atmosphere containing 95% oxygen and 5% carbon dioxide. The culture medium was changed every 48 h until the cells reached 80% confluency. The cells were then removed from the bottles and cryopreserved until use in the experiment.
Titanium disks
Fifty commercially pure (grade IV) Ti disks, 6 mm in diameter and 2 mm thick (Conexão Sistema de Prótese; Arujá, SP, Brazil), were distributed into two experimental groups according to the assigned surface treatment: acid etching (AEG, n = 25) or acid etching + anodizing (ANG, n = 25) [Figure 1]. Initially, the surfaces of all of the disks were sandblasted with 180-µm aluminum oxide particles at a pressure of 0.25 MPa, producing Ra values ranging from 1.5 to 2.5. Next, they were submitted to acid etching for 20 min in a 5 N HNO3 + 5 N HF solution at a temperature of 20 °C. Half of the specimens were then submitted to an anodizing bath of 1.0 M phosphoric acid at 20 °C, with a current density of 5 mA/cm² maintained by a stabilized voltage of 80 V. The anodizing process promoted the formation of a 120-µm-thick oxide layer within approximately 30 s.
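As a small consistency check on the anodizing parameters just quoted, the anodic charge passed per unit area follows directly from the stated current density and duration; this is plain arithmetic on the reported values, not an additional measurement from the study:

$$Q = j\,t = 5\ \mathrm{mA/cm^2}\times 30\ \mathrm{s} = 0.15\ \mathrm{C/cm^2}.$$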
The disk surfaces were evaluated using scanning electron microscopy (SEM), and the microscope (JSM-6460LV; Jeol, Tokyo, Japan) was operated at an acceleration voltage of 20 keV. The SEM images were reconstructed using SMile View Map software (Digital Surf, Besançon, France; Jeol, Peabody, MA, USA). Representative images of scanned disk areas are shown in Figure 2. All of the images were acquired with a resolution of 1280 × 960 pixels.
Osteogenic culture medium and cell viability test
After thawing, the cells were grown in 75-mL bottles for 14 days in an osteogenic culture medium consisting of DMEM (Cultilab) with a high glucose concentration (4.5 g/L), L-glutamine (584 mg/L; Cultilab), sodium pyruvate (110 mg/L; Cultilab), 20% iron-supplemented fetal bovine serum (Cultilab), penicillin (100 IU/mL; Sigma), and streptomycin (100 µg/mL; Sigma). The medium was buffered with sodium bicarbonate (1 N; Sigma). The pH was adjusted to 7.2, and 0.5 µg/mL of ascorbic acid (Sigma), 10 mmol/L of β-glycerophosphate (Sigma), and 10 mmol/L of dexamethasone (Sigma) were added to the solution. The cells were kept in an incubator at 37 °C under a humid atmosphere containing 95% oxygen and 5% carbon dioxide. The culture medium was changed every 48 h.
Cells kept in the bottles formed the control group (CG); of these, 2.5 × 10⁴ cells were then seeded onto the Ti disks described above, thus forming the experimental groups AEG (acid etching alone) and ANG (acid etching + anodizing). Cell viability was assessed on days 1, 2, 3, 7, and 14 of cell culture (five disks per assessment time point) using the [3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium] bromide test (MTT; MTT Assay Kit ab211091; Abcam, Boston, MA, USA), conducted according to the manufacturer's instructions. The cell metabolic activity results were read at 570 nm using a spectrophotometer (Epoch Microplate Spectrophotometer; BioTek Instruments, Winooski, VT, USA) and expressed in arbitrary absorbance units, where the greater the signal emitted, the greater the metabolic activity of the analyzed cells.
Statistical analysis
The data were analyzed using the Mann-Whitney test. P < 0.05 was considered statistically significant.
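As a concrete illustration of this comparison, the minimal sketch below uses SciPy's implementation of the Mann-Whitney U test; the absorbance readings are invented placeholders, not data from the study.

```python
from scipy.stats import mannwhitneyu

aeg = [0.41, 0.38, 0.45, 0.40, 0.43]   # hypothetical MTT absorbance, AEG group
ang = [0.39, 0.42, 0.37, 0.44, 0.40]   # hypothetical MTT absorbance, ANG group

stat, p = mannwhitneyu(aeg, ang, alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")       # difference is significant if p < 0.05
```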
Results
The SEM assessment revealed that the disk surfaces of the AEG and ANG groups had distinguishable characteristics at both ×50,000 and ×150,000 magnification. The surfaces treated with acid etching alone displayed a shallow, irregular, micrometric relief, whereas those treated with acid etching + anodizing displayed a tubular, nanometric relief. Table 1 presents the readings of the metabolic activity of the periodontal fibroblast cultures in the three study groups.
Discussion
There was no difference among AEG, ANG, and CG regarding the values of metabolic activity at any of the evaluation time points; therefore, the null hypothesis was not rejected. These results contrast with those of Kim et al., [13] who found that the viability of periodontal ligament cells gradually increased over time, and decreased with an increasing concentration of dexamethasone; and also with those of de Vries et al., [14] who found that both proliferation and viability of fibroblasts in an osteogenic medium increased over time in three-dimensional cultures.
The chemical, mechanical, and topographic characteristics of implant surfaces can affect the adhesion of the cells involved in bone formation at the bone-implant interface. In addition, these characteristics can favor cell proliferation and differentiation, and stimulate cells to deposit osteoid matrix as well. Surface treatments have been used to increase the bone-to-implant contact area and to shorten the healing time before loading. [15,16] Several Ti surface treatments have been studied in the past decades with the goal of altering the postoperative time required for osseointegration. The most commonly used are those performed with acid etching and acid etching followed by sandblasting. [17] It has already been established that a treated Ti surface can increase the deposition of fibrin matrix, and thus lead to the formation of thicker blood clots than those produced in contact with a machined Ti surface. [18] Other authors have reported that platelets adhere significantly more effectively to treated Ti surfaces than to machined ones. This effect is particularly important for the immune response and for wound healing, since platelets secrete a multitude of factors involved in the activation of these processes, including platelet-derived growth factor, transforming growth factor beta, and vascular endothelial growth factor. [19][20][21] The presence of these growth factors is associated with the promotion of adhesion, dissemination, and migration of gingival and periodontal fibroblasts. This observation suggests the existence of a synergistic mechanism between blood and peripheral fibroblasts, which contributes to promoting faster tissue regeneration in contact with treated Ti surfaces. [22] The present study analyzed the relationship between cells and the Ti surface using a cell viability test under conditions of osteogenic differentiation, [23,24] since osteogenic stimuli are the most abundant in the biological microenvironment immediately around dental implants, and in contact with the blood clot, especially in the early stages of regeneration. Furthermore, the Ti surface treatments performed by acid etching alone or acid etching + anodizing produced micro- and nanometric structures, respectively, and were used in the present study because they are among the currently available commercial surface-structure modifications. [25,26] Both types of surface treatments have been associated with increased cell adhesion and differentiation; [26] however, previous studies [23,24,26] have used osteoprogenitor cells or osteoblasts, whereas the present study used periodontal fibroblasts to investigate which of these surfaces would better favor the adhesion and proliferation of fibroblasts after their migration. This choice is particularly relevant considering the soft-tissue bond requirement, which plays the role of a biological seal around the dental implant. To this end, an assessment of the ability of periodontal fibroblasts to adhere to modified Ti surfaces can be a useful input in the decision-making process faced by clinicians when choosing the appropriate implant for each treatment plan.

(Table 1 legend. CG: control group, culture medium only, no disks; AEG: culture medium applied onto titanium disk surfaces treated with acid etching alone; ANG: culture medium applied onto titanium disk surfaces treated with acid etching + anodizing.)
Conclusion
The Ti surface treatments tested, either with acid etching alone or with acid etching + anodizing, had no effect on the viability of periodontal fibroblasts grown in an osteogenic medium.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest. | 2022-06-23T15:14:43.204Z | 2022-04-01T00:00:00.000 | {
"year": 2022,
"sha1": "519230c5e971f1f7a31eb1551ad45d1e74d0bc90",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/ccd.ccd_1008_20",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1acc8a2d9d9d14395e9578898045b21575c9ac36",
"s2fieldsofstudy": [
"Medicine",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
250684615 | pes2o/s2orc | v3-fos-license | Preliminary experimental characterization of the ambient humidity response of Bi3TiNbO9
A preliminary electrical characterization of Bi3TiNbO9 pellets prepared by mechanochemical activation shows a nearly exponential conductivity increase over 4 orders of magnitude from a dry ambient to a dew point of 10 °C, at 23 °C ambient temperature, or over 5 orders of magnitude in thick films over interdigitated electrodes. Relaxation currents following bias stress also respond, at a lower sensitivity level. Under different DP at either electrode, the lower DP value controls the overall current, which flows through the bulk, not through the mantle, of the cylindrical pellets. Repetitive cycling does not deteriorate the response to the ambient humidity.
Introduction
The Aurivillius phase Bi3TiNbO9 (BTN) has received much attention due to its high-temperature ferroelectric and piezoelectric properties [1]. The synthesis of this material by mechanochemical activation, 500 °C below the sintering temperature required by the traditional solid-state reaction route, was achieved by A. Castro et al. [2]. In that work, after milling for up to 370 h in a low-energy ball mill, a fluorite-like, metastable phase appeared; the same phase was also obtained by heating at 370 °C. An early electrical study of that phase revealed its sharp sensitivity to ambient humidity [3].
In this work, the salient characteristics of BTN prepared by high-energy milling are described, with a view towards its application in resistive low-humidity sensors.
Experimental
Powder samples of nominal composition 3Bi2O3 : Nb2O5 : 2TiO2 were prepared from analytical-grade powders by mechanochemical activation in a SPEX 8000 automatic mill, with a tungsten carbide vial and hardened steel balls, for 24 h. The ball-to-powder mass ratio was set at 10:1.
Pellets were shaped by uniaxial pressing at pressures from 25 to 1830 MPa and sintered at 480 °C for 12 h. The sintered pellets were then lightly sanded with 400X sandpaper, dusted off with an Ar gun, and coated with 10-nm-thick Pt electrodes by magnetron sputtering (base pressure of 10⁻⁵ mbar). Simple centered circular electrodes, as well as circular electrodes plus guard rings, were used to probe the current path through the pellets. A simple controlled-humidity chamber was used for testing the response of pellets with simple electrodes. Pellets with a guard ring were used to probe the current path: either through the bulk or through the mantle of the cylindrical pellets. A dual chamber with independent flux and humidity regulation over either face of the pellet was developed to test for possible current asymmetries related to dry or humid ambients at the cathode or anode.
BTN thick films were prepared with AREMCO Ceramabind 643 as a binder on interdigitated Pt electrodes on alumina. The thick films were dried at 200 °C for 2 h and then sintered at 480 °C for 12 h.
Current-voltage and pulsed-voltage current-time measurements were made using a Keithley 237 electrometer at room temperature (23 °C). For measurements at variable ambient humidity, the latter was controlled by passing the moisture-saturated carrier gas through a condenser block at the desired dew point (DP) temperature (down to a dew point of 1 °C). Lower dew point values were prepared by mixing dry and low-humidity gas flows. The resulting humidity was monitored by a chilled-mirror hygrometer after passage through the pellet testing chambers.
Results and Discussion
The conductance of BTN pellets in vacuum is thermally activated and, at 107 °C, it reaches the value of 3.1×10 ohm⁻¹ m⁻¹ after a 51 h settling time.
Conductivity measurements at fixed bias and variable DP show an exponential increase, from nearly 10⁻³ to 10² pS/cm, approximately following σ ≅ 4.1 exp[0.1749 (DP/°C)] pS/cm over the DP range of −40 to +10 °C at an ambient temperature of 23 °C. This nearly maximal sensitivity is achieved at the cost of a slow response (1 to 3 min) in the less dense pellets. Other samples, pressed at 1830 MPa, display a response of about 2 orders of magnitude over this DP range, with response transients below 30 s.
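A quick way to check such an exponential law against measured (DP, σ) pairs is a straight-line fit in log space. The sketch below, with synthetic data generated from the quoted law plus a little noise, is illustrative only.

```python
import numpy as np

dp = np.linspace(-40, 10, 11)                 # dew point, deg C
rng = np.random.default_rng(0)
sigma = 4.1 * np.exp(0.1749 * dp) * (1 + 0.05 * rng.standard_normal(dp.size))

# Linear fit of ln(sigma) vs DP recovers the prefactor and exponent.
slope, intercept = np.polyfit(dp, np.log(sigma), 1)
print(f"sigma ~ {np.exp(intercept):.2f} * exp({slope:.4f} * DP)  [pS/cm]")
```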
Upon removal of the bias, the relaxation current follows DP cycling in a similar manner as the current during application of the bias, albeit at a lower sensitivity level.

Fig. 1 caption (fragment): "... MPa and sintered at 500 °C for 12 h, at 50 V. In the "X-Y" notation, "D" stands for dry air flux, and "H" for a DP of 5 °C air flux, the first/second positions corresponding to the anode/cathode conditions."

The response to humidity may be caused by some surface reaction, such as the dissociation of water. Hence, the current may be controlled by the polarity of the electrode which is exposed to the wet gas flux. This has been tested in a dual chamber, with results as shown in Fig. 1. Most notably, the current is seen to drop (by more than one order of magnitude, in this particular case) whenever either electrode is exposed to a dry ambient. Indeed, the current with just one dry electrode is essentially the same as that with both electrodes in a dry ambient. The current level is seen to overshoot upon returning to humidity, above the value it had just before the previous drop caused by going into a dry ambient. This is pointed out by "I_m" and the arrow in Fig. 1, and it is a feature which also appears during relaxation, after bias stress.
The current path, through either the inner porous bulk or the mantle of the pellet, was tested using guarded electrodes, as shown in Fig. 2. There, the inset shows the electrode configuration, where "V" stands for the (100 V) bias source, "A" stands for the electrometer, and "G" stands for the guard line, which is kept at the same bias as the electrometer line. This configuration applies to stages "1" and "4". In stage "3", the positions of the "A" and "G" lines are interchanged. The current levels during the opposite stages "1" and "3" indicate that the current flows through the bulk of the (porous) pellet, the side of the pellet exposed to the humid ambient being of no particular import.
Preliminary measurements of a (~80 µm) thick film on the interdigitated Au electrodes display a larger and faster response than in the case of the pellets. As shown in Fig. 3, the current goes through excursions in excess of 5 orders of magnitude upon cycling between a dry ambient and one with a DP of 10 °C. The response of BTN to ambient humidity observed so far shows the promise of this material for sensors in the low-humidity range.
As regards the response mechanism, the numerical solution of a carrier diffusion model has been proposed [4], mostly to model the current overshoot pointed out in Fig. 1. The model assumes the diffusion of a charge carrier in the self-consistent electric field, considering an external bias and the space-charge field of the carrier density. Limited qualitative agreement is achieved with the general features of the relaxation current (its overshoot behavior in particular) and with the results of the asymmetric dual-chamber tests. | 2022-06-28T01:34:09.505Z | 2008-01-01T00:00:00.000 | {
"year": 2008,
"sha1": "45ac1ee7bb4236aea17fa9760630edec9555e0e6",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/134/1/012021",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "45ac1ee7bb4236aea17fa9760630edec9555e0e6",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
119433380 | pes2o/s2orc | v3-fos-license | Interplay of disorder and nonlinearity in Klein-Gordon models: Immobile kinks
We consider Klein-Gordon models with a $\delta$-correlated spatial disorder. We show that the properties of immobile kinks exhibit strong dependence on the assumptions as to their statistical distribution over the minima of the effective random potential. Namely, there exists a crossover from monotonically increasing (when a kink occupies the deepest potential well) to the non-monotonic (at equiprobable distribution of kinks over the potential minima) dependence of the average kink width as a function of the disorder intensity. We show also that the same crossover may take place with changing size of the system.
I. INTRODUCTION
An extensive research effort on the static and transport properties of nonlinear excitations in various soliton-bearing disordered systems has been undertaken in the last decade (see Refs. 1 and 2). It is well known that, taken separately, both the nonlinearity and the disorder contribute to localization effects, the character of the localization being, however, essentially different. The nonlinearity results in the possibility of existence of nonlinear localized excitations, usually referred to as solitons, that are rather robust and can propagate through the system undistorted. At the same time the disorder (in linear systems) evokes Anderson localization, which manifests itself in the behavior of the transmission coefficient of a plane wave decaying exponentially with the system width. These two localization mechanisms are competitive to some extent; taken together, the nonlinearity and disorder may lead to a number of qualitatively new effects: the transmission coefficient tends to zero with increasing system length according to a power law 3,4 rather than exponentially (which would otherwise be the property of a linear system); multistability can arise [3][4][5] in the wave transmission through a disordered slab; and excitations in highly nonlinear or multidimensional nonlinear Schrödinger (NLS) systems (which would otherwise either disperse or collapse) can be stabilized by disorder. [6][7][8] In the present paper we study the static properties of a one-kink solution (or, equivalently, of a dilute kink gas) of disordered Klein-Gordon models, where the disorder is assumed to be δ-correlated Gaussian spatial noise. Disorder of this kind arises, for example, in Josephson junctions, where it is caused by fluctuations of the gap between the two superconductor plates. Quite recently Mints proposed 9 a similar model with a randomly alternating critical current density to account for a self-generated magnetic flux observed 10 experimentally. We discuss his results in more detail in the Conclusion.
In general, Klein-Gordon models have been repeatedly studied 1,2,11 for disorder represented by a lattice of δ-like impurity potentials with random impurity positions and either equal or randomly distributed intensities. It was found that in a number of cases the kink dynamics in disordered systems could be adequately described within the framework of the collective-coordinate approach (Refs. 12-15, and references therein). This motivates our restriction to this approach in the present paper, while at each step we validate the analytical results by comparing them to numerical simulations of the original system. We emphasize that, in contrast to, for example, Refs. 2 and 11, we use Rice's collective-coordinate approach 16,17 with the kink width as the variational parameter.
A similar approach has recently been applied to two other one-dimensional (1D) systems: Bussac et al. investigated 6 the effects of the polaron ground state in a deformable chain, while Christiansen et al. considered 7 the stabilization of nonlinear excitations by disorder in the NLS model. It should be noted that the investigation of these two (in fact closely related) systems leads to one and the same result: the width of stationary solitons decreases with growing disorder intensity. The importance of this conclusion resides in its prediction that disorder can stabilize otherwise unstable solitons in 2D and 3D NLS models. Quite recently this prediction was borne out numerically 8 for the 2D case. It must be emphasized that for NLS models the conclusion does not depend on the averaging procedure: one can equally perform the averaging either over absolute ground states 6 or over all local minima of the effective random potential with equal weights. 7,8 We show that this is not the case for Klein-Gordon models: their properties exhibit a strong dependence on the assumptions as to the statistical distribution of kinks over the minima of the effective random potential. For the purely dynamical problem these statistics are left beyond consideration and should thus be imposed as an additional assumption. We use Jaynes's maximum entropy inference 18 for this purpose.
The outline of the paper is as follows. In Sec. II we present the model and derive the equations for the collective coordinates of the kink, taking the disorder into account via the effective random potential. In Sec. III we investigate, both analytically and numerically, the case of immobile kinks and demonstrate the existence of the crossover between monotonic and nonmonotonic dependence of the average kink width on the disorder intensity. In Sec. IV we summarize the exposed results.
II. COLLECTIVE COORDINATES APPROACH
We consider a Klein-Gordon (KG) model in the presence of spatial disorder. The Hamiltonian of the system has the form
H = ∫ dx { ½ φ_t² + ½ φ_x² + [1 + ηε(x)] Φ(φ) },   (1)
where the subscripts stand for partial derivatives with respect to the indicated variables and units are chosen so that the Hamiltonian is already in scaled form. The potential Φ(φ) has the form
Φ(φ) = 1 − cos φ   (2)
for the sine-Gordon (SG) model, and the standard double-well quartic form (3) for the φ⁴ model. We assume that ε(x) is δ-correlated spatial disorder with a Gaussian distribution,
⟨ε(x)⟩ = 0,   ⟨ε(x)ε(x′)⟩ = δ(x − x′),   (4), (5)
where the brackets ⟨…⟩ denote averaging over all realizations of the disorder. We have studied both the SG (2) and φ⁴ (3) models. Although the properties of these two models show many similarities, they also exhibit a number of interesting distinctions related, in particular, to the existence of a breather state in the SG model and of Rice's internal mode 16,17 in the φ⁴ model. So, it was important to compare the effects of disorder in both models. But since the qualitative features of the two models turned out to coincide within the scope of our present investigations, we do not overload the paper with unnecessary repetitions and restrict ourselves to presenting the sine-Gordon model in detail, keeping in mind that every stage of the calculations applies to the φ⁴ model as well.
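To make the disorder term concrete in a simulation, δ-correlated Gaussian noise on a grid must have a variance that grows as the grid is refined, since δ(0) is approximated by 1/Δx. The snippet below is an illustrative recipe under the unit-normalized correlator written above, not code from the paper:

```python
import numpy as np

def delta_correlated_noise(n: int, dx: float, seed=None) -> np.ndarray:
    """Discrete Gaussian noise approximating <eps(x) eps(x')> = delta(x - x').
    On a grid, delta(0) ~ 1/dx, so each sample must have variance 1/dx."""
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, 1.0 / np.sqrt(dx), size=n)

if __name__ == "__main__":
    dx, n = 0.05, 4000
    eps = delta_correlated_noise(n, dx, seed=0)
    # sanity check: mean of eps^2 * dx should be ~1 (the delta-function weight)
    print(np.mean(eps**2) * dx)
```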
The SG system is governed by the equation of motion
φ_tt + γφ_t − φ_xx + [1 + ηε(x)] sin φ = 0,   (6)
where the damping term with damping constant γ has been included. It is well known that in the absence of disorder and damping (η = γ = 0) Eq. (6) is completely integrable and possesses a topologically stable solution in the form of a kink given by
φ_k(x, t) = 4 arctan{ exp[ (x − X(t)) / L ] },   (7)
where X(t) = X₀ + vt is the kink coordinate, v is its velocity, and L = √(1 − v²) is the kink width. In the general case of Eq. (6), for a number of situations the kink emission is exponentially small, 14 so that the kink dynamics can be studied by the collective-coordinate approach. In the framework of this approach the variables X(t) and L(t) are understood as time-dependent variational parameters. Inserting Eq. (7) into Hamiltonian (1) as a trial function, we obtain the effective Hamiltonian with the conjugate momenta p_X and p_L. The potential function consists of the part present in the case of no disorder and the effective random potential V({ε}, L, X) arising because of the disorder term. Then, taking into account the damping, we arrive at the equations of motion 16 for the collective coordinates [Eqs. (12) and (13)]. In the following section we solve these equations of motion approximately for immobile kinks and compare the results to those of direct numerical integration of Eq. (6).
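As a numerical illustration of the effective random potential, the sketch below assumes (consistent with the Hamiltonian form reconstructed above) that the disorder couples through the potential term, so that V({ε}, L, X) = η ∫ ε(x) [1 − cos φ_k] dx with 1 − cos φ_k = 2 sech²[(x − X)/L]. All names, grid sizes, and parameter values are illustrative, not the paper's:

```python
import numpy as np

def kink_energy_density(x: np.ndarray, X: float, L: float) -> np.ndarray:
    """1 - cos(phi_k) = 2 sech^2((x - X)/L) for the SG kink of Eq. (7)."""
    return 2.0 / np.cosh((x - X) / L) ** 2

def random_potential(eps: np.ndarray, x: np.ndarray, L: float, eta: float,
                     X_grid: np.ndarray) -> np.ndarray:
    """V({eps}, L, X) = eta * int eps(x) [1 - cos phi_k] dx on a grid of X."""
    dx = x[1] - x[0]
    return np.array([eta * np.sum(eps * kink_energy_density(x, X, L)) * dx
                     for X in X_grid])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    dx = 0.05
    x = np.arange(-20.0, 20.0, dx)
    eps = rng.normal(0.0, 1.0 / np.sqrt(dx), size=x.size)  # delta-correlated
    X_grid = np.linspace(-15.0, 15.0, 601)
    V = random_potential(eps, x, L=1.0, eta=0.1, X_grid=X_grid)
    print(f"deepest well at X = {X_grid[np.argmin(V)]:.2f}, depth {V.min():.3f}")
```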
III. RESULTS FOR IMMOBILE KINKS
As a consequence of the damping γ, the kink will eventually stop at some stable or metastable stationary position along the system. Here we do not consider this transient stage and assume that the kink is already immobile [dp_L/dt = p_L = p_X = 0]. In this case the equations of motion (12) and (13) take on a static form. Considering the center-of-mass motion described by Eq. (12), we observe that for each realization of the random potential ε(x) the stable stationary position X = X_m({ε}, L) of the kink is defined by the point where V({ε}, L, X) has a minimum with respect to X. Thus we can now insert the value X = X_m({ε}, L) into Eq. (15) and, solving the resulting equation (16), get the value of the stationary kink width L({ε}) at the given position of the kink. Encountering a similar problem for the case of the NLS system, Christiansen et al. invoked 7 the mean-field approximation (17), and the estimation of the quantity ⟨V⟩ was then performed using Rice's averaging theorem. 19,20 However, while rather good for the NLS model, 7 the mean-field approximation (17) fails for the KG models. We were thereby forced to use a more precise averaging procedure, calculating ⟨dV/dL⟩ directly. Expanding Eq. (16) into a series up to second order in η and solving by iterations, we get after averaging [which by means of Eq. (4) can be performed exactly for the terms containing η²] the average kink width [Eq. (19)], where the averaging of the function λ({ε}, X_m) is performed over all realizations of the disorder at the points X = X_m at which the potential takes on its minima in X.
Thus we arrive at the problem of averaging λ({ε}, X_m) over the minima X_m of the function μ({ε}, X). It is convenient for later use to perform this averaging in two steps, calculating at the outset the conditional average Λ(μ̃) and thereafter the average over the minima themselves. Here P_l(λ̃|μ̃) is the conditional probability that λ({ε}, X_m) has the value λ̃ if μ({ε}, X_m) equals μ̃. Correspondingly, Λ(μ̃) is the value of λ({ε}, X_m) averaged over all realizations of the disorder for which μ({ε}, X_m) is equal to μ̃. It is difficult to calculate Λ(μ̃) analytically, but numerical simulations show (see Fig. 1) that to very good accuracy the dependence Λ(μ̃) is linear. Substituting this into Eq. (24), we obtain ⟨λ⟩ in terms of ⟨μ̃⟩, the average value of the function μ({ε}, X) over its minima X_m. Here the probability density P_m(μ̃) is a product of two factors. The first one is the probability density that some arbitrarily chosen minimum X_m of the function μ({ε}, X) will be equal to μ̃. We denote this probability density as p_min(μ̃). The second factor is the conditional probability that, if the minimum is equal to μ̃, it will actually be occupied by the kink. It is evident that in a real system the kink is more likely to occupy a deeper minimum than a shallow one. So, to be consistent, one must ascribe to every minimum of the function μ({ε}, X) some probability weight and average taking those probabilities into account. But the values of these probabilities are in general determined by the whole prehistory of the kink. They are not contained in the dynamical equations of motion, which state only that the kink should settle into some minimum regardless of its depth. It would be a cumbersome problem to calculate them appropriately. That is why two limiting cases are usually considered: either the kink seats itself in the deepest well 6 or it occupies any of the wells with equal probability. 7 As already remarked in the Introduction, both limiting cases lead to qualitatively the same results for the NLS model. This is not the case for the KG models where, as will be shown later, different assumptions as to the a priori weights of the minima lead to qualitatively different behavior of the kink width. Since we are simply lacking information sufficient to reconstruct these weights in an objectivistic fashion, a remedy is Jaynes's maximum entropy inference, 18 according to which the simplest self-consistent unbiased choice is to assume that the kink occupies the potential well corresponding to the minimum X_m of the function μ({ε}, X) [for a given profile ε(x)] with probability proportional to e^{−βμ({ε},X_m)}, thus introducing an additional parameter β (following Jaynes we shall call it a conjugate parameter). By this expedient we obtain a natural interpolation covering the two mentioned limiting cases: the equiprobable distribution (β = 0) and averaging over the deepest minima only (β → ∞). So we can write the weighted average over the minima, where Z plays the part of the partition function.
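The maximum-entropy weighting is easy to sketch numerically: given the depths μ_m of the sampled minima and the corresponding values λ_m, the weighted average uses Boltzmann-like factors e^{−βμ_m}. In the toy example below the depths and the (nearly linear) λ(μ) relation are synthetic stand-ins, used only to show how β interpolates between the two limiting cases:

```python
import numpy as np

def maxent_average(lam: np.ndarray, mu: np.ndarray, beta: float) -> float:
    """Average lam over minima with Jaynes weights ~ exp(-beta * mu):
    beta = 0 -> equiprobable minima; beta -> inf -> deepest minimum only."""
    w = np.exp(-beta * (mu - mu.min()))  # shift for numerical stability
    return float(np.sum(w * lam) / np.sum(w))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    mu = rng.normal(size=500)                    # depths of sampled minima (toy)
    lam = 0.9 * mu + 0.1 * rng.normal(size=500)  # nearly linear Lambda(mu), cf. Fig. 1
    for beta in (0.0, 0.3, 3.0, 30.0):
        print(f"beta = {beta:5.1f}: <lambda> = {maxent_average(lam, mu, beta):+.3f}")
```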
To calculate p_min(μ̃) we follow Ref. 7 and make use of Rice's averaging theorem 19,20 [valid for the case, well attested by the numerics, of μ({ε}, X) being a stationary centered Gaussian process], which states that the probability density for a given minimum of the function μ({ε}, X) to be equal to μ̃ is expressed through the spectral moments of the process. Hence the partition function Z(β) can be expanded into a series in β, yielding the average value ⟨μ̃⟩. Substituting it into Eqs. (26) and (19), we obtain the average kink width (35). It is thus seen that there is a qualitative change in the behavior of the kink width as a function of the disorder intensity η according to whether the value of the conjugate parameter β is below or above some critical value β_cr ≈ 0.3. At small β the average kink width is a nonmonotonic function of the disorder intensity η: it decreases at small intensities but starts to increase thereafter. This result is well attested by direct numerical calculations of stationary kink solutions of the original equation of motion (6). In Fig. 2 we compare the analytical prediction given by Eq. (35) with the numerical results for the case of an equiprobable distribution of the kinks over the potential wells (β = 0). The numerical results were obtained as an average over 1000 realizations of the disorder. Two different expressions for the kink width were calculated:
L_cos = ¼ ∫ (1 − cos φ) dx   (36)
and
L_der = 8 / ∫ φ_x² dx.   (37)
From the point of view of the collective-coordinate approach, L_cos and L_der should coincide with the value of L introduced in Eq. (7). Indeed, as seen from the figures, this is the case for 0 ≤ η ≲ 0.2; these are thus the limits where the collective-coordinate approach works well. Continuing the analysis of Eq. (35), we see that at large values of the conjugate parameter (β > β_cr), when the kink preferentially occupies deep potential wells, the kink width should grow monotonically with the disorder. But it is evident that in this case the analytical approach discussed above is applicable to systems of infinite length only. For a finite system, the case of large β represents the situation when the kink sits in the deepest potential well, and its average depth obviously depends essentially on the system length. We can estimate this dependence drawing on the formula 20 for the average number N_min of minima of the function μ({ε}, X) on an interval R whose values lie below some μ̃. Inserting there N_min = 2 (μ̃ is an absolute minimum on the interval R but not on a longer interval), one can estimate its average value and, substituting it into Eqs. (19) and (26), find the average kink width (40). It is seen that for finite-size systems the character of the dependence L_var(η) depends on the size R of the system. Interestingly, even for the case of averaging over the absolute minima considered here, the function L_var(η) grows monotonically with η only for systems that are large enough (R ≳ 7.5). The reason is that for a small system the number of potential wells of the effective random potential is too small for the average over the absolute minimum to be essentially smaller than the average calculated over all minima. Indeed, Figs. 3 and 4, in which we compare Eq. (40) to the results of the numerical calculations for R = 15 and R = 5, lend support to the validity of the approach leading to Eq. (40). One can see from these figures that the average kink width grows monotonically with η for R = 15 but is nonmonotonic (similar to the case depicted in Fig. 2) for small R = 5.
But in this latter case the boundary conditions become very important, and most likely they are responsible for the difference between Figs. 2 and 4.
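The two width estimators can be checked against an ideal kink profile. The normalizations below are inferred from the Josephson and magnetic energies (4L_cos and 4/L_der) quoted in the Conclusion, so they should be read as a plausible reconstruction of Eqs. (36) and (37) rather than code from the paper; for the unperturbed kink both estimators return L:

```python
import numpy as np

def kink_profile(x: np.ndarray, X: float = 0.0, L: float = 1.0) -> np.ndarray:
    """Static sine-Gordon kink, phi = 4 arctan(exp((x - X)/L))."""
    return 4.0 * np.arctan(np.exp((x - X) / L))

def widths(phi: np.ndarray, dx: float):
    """L_cos from int (1 - cos phi) dx (= 4L for an ideal kink);
    L_der from int (dphi/dx)^2 dx   (= 8/L for an ideal kink)."""
    l_cos = 0.25 * np.sum(1.0 - np.cos(phi)) * dx
    l_der = 8.0 / (np.sum(np.gradient(phi, dx) ** 2) * dx)
    return l_cos, l_der

if __name__ == "__main__":
    dx = 0.01
    x = np.arange(-30.0, 30.0, dx)
    for L in (0.8, 1.0, 1.2):
        l_cos, l_der = widths(kink_profile(x, 0.0, L), dx)
        print(f"L = {L}: L_cos = {l_cos:.4f}, L_der = {l_der:.4f}")
```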
IV. CONCLUSION
In this paper we consider Klein-Gordon models with δ-correlated spatial disorder and investigate, both analytically and numerically, the width of immobile kinks as a function of the disorder intensity. The analytical collective-coordinate approach is based on Rice's averaging theorem from the theory of random processes 19,20 as well as on the maximum entropy inference proposed by Jaynes. 18 We have shown that the properties of the kinks exhibit a strong dependence on the assumptions as to their statistical distribution over the minima of the effective random potential. Namely, there exists a crossover from a monotonically increasing (when a kink occupies the deepest potential well) to a nonmonotonic (for an equiprobable distribution of kinks over the potential minima) dependence of the average kink width on the disorder intensity. We have also shown that the same crossover may take place with changing system size: the average kink width increases monotonically for large systems but is nonmonotonic for small ones.
It is interesting to compare the effects of disorder in the KG model with those in the nonlinear Schrödinger (NLS) model. As recently shown in Refs. 6-8, δ-correlated spatial disorder in NLS systems creates an additional factor contributing to a decrease of the excitation width. This effect, being insensitive to the manner of the statistical distribution of the excitations over the minima of the effective random potential, favors the stabilization of excitations in highly nonlinear or multidimensional systems, which would otherwise either disperse or collapse. The stabilizing function of disorder is doubtless important for practical applications, and elucidating the extent to which it is universal seems an intriguing question. The considered example of the KG systems demonstrates that there exists a class of systems for which, in contrast to the NLS system, the effects of disorder can lead in different cases to diametrically opposed behavior.
In the case of the SG model we can consider the term ηε(x) as a change of the Josephson current density due to fluctuations of the thickness of the insulating layer. Quite recently Mints studied 9 such a model to account for a self-generated magnetic flux observed 10 by Mannhart et al. He has shown that in the case of η < 1 a state with a self-generated flux exists and can be studied experimentally in the presence of Josephson vortices. However, as shown in the present paper, the Josephson energy [equal to 4L_cos in Eq. (36)] and the magnetic energy [equal to 4/L_der in Eq. (37)] of the Josephson vortices are functions of the intensity of the fluctuations of the insulating-layer thickness, and their contribution to the experimentally observable magnetic flux will depend strongly (up to a change of sign) on the statistical distribution of the vortices along the Josephson junction. Thus, for a proper description of the problem one must develop a thermodynamic model.
"year": 1999,
"sha1": "4ea158a161e1f888b791563e9fbfb706a085f5a5",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/9906029",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "4ea158a161e1f888b791563e9fbfb706a085f5a5",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
51919459 | pes2o/s2orc | v3-fos-license | Impact of Epidural Analgesia in Labour on Neonatal and Maternal Outcomes
Aim: To evaluate the effect of epidural analgesia during labour on neonatal-maternal outcomes. Methods: A retrospective cohort study of nulliparous parturients who gave birth in Västernorrland County, Sweden, over a 2-year period between 2015 and 2016. Neonatal outcomes (Apgar score at 5 min and umbilical cord arterial blood gases), maternal outcomes (perineal injury, total bleeding volume and maternal satisfaction with birth) and labour parameters (mode of delivery and the durations of labour and postpartum hospital stay) were evaluated. Results: The study cohort consisted of 1449 women with singleton pregnancies. Patients were divided into two groups according to whether during labour they were administered epidural analgesia using bupivacaine and sufentanil (EDA group, n = 615) or not (non-EDA group, n = 834). The rate of assisted vaginal delivery was significantly higher in the EDA group than in the non-EDA group (15.6% and 11.3%, respectively, p < 0.05), whereas the rates of caesarean section were similar. The duration of the active phase of labour was significantly longer in the EDA group than in the non-EDA group (489 ± 217 min versus 371 ± 210 min, respectively, p < 0.001). The Apgar score at 5 min and umbilical cord blood pH were lower and the base deficit greater in the EDA group (p < 0.001, p < 0.001 and p < 0.01, respectively). Bleeding volume was similar between the groups after adjusting for gestational age. Women in the EDA group were more satisfied with their labour experience, as measured by the visual analogue scale (p < 0.05). Conclusion: The results of this study suggest that EDA affects delivery and neonatal-maternal outcomes negatively, but increases the mother’s satisfaction with labour.
Introduction
Labour may be the most painful experience many women ever encounter [1].
The experience is unique for each woman and is influenced by emotions, motivation, cognitive ability, and social and cultural circumstances. The mother's physiological response to the pain affects maternal and foetal well-being as well as the delivery process [2]. The pain itself is not life-threatening for healthy parturients, but it can lead to neuropsychological consequences such as postnatal depression [3] and post-traumatic stress disorder [4].
Good pain relief is one of the most important factors related to patient satisfaction [5] [6]. Epidural analgesia is the most popular method of pain relief and is considered the gold standard for pain relief during labour [5] [7]. In Sweden, more than half of the primiparous women who gave birth in hospitals in 2015 were given epidural anaesthesia; however, this rate varies among counties (37%-66%) [8]. Its advantages include no effect on consciousness and reduced levels of stress hormones in the blood; it also acts to lower blood pressure in cases of preeclampsia or hypertension [9] [10] [11]. Although epidural analgesia is considered safe for both the parturient and the baby, some transient side effects have been reported (mainly in the mother), including itching, shaking, nausea, insufficient pain relief, urinary retention, muscle weakness, elevated temperature, headache, local pain at the site of injection and hypotension [7] [11] [12] [13] [14]. Epidural analgesia can also be used for delivery by caesarean section [15].
Several studies have shown that epidural analgesia can affect labour and neonatal-maternal outcomes negatively in terms of increases in labour duration, rate of instrumental deliveries [10] [13], oxytocin use, temperature [12] [13] and risk of foetal malrotation [12] [16]. However, many other studies have shown no effect of epidural analgesia on the mode of delivery or on neonatal outcomes [14].
The controversial results of such studies raise women's concerns regarding the side effects and safety of epidural analgesia, leading to hesitation in its use [17].
The aim of this study is to evaluate the effects of epidural analgesia (EDA) using bupivacaine and sufentanil on labour and neonatal-maternal outcomes among nulliparous women.
Methods
We conducted a retrospective cohort study of all nulliparous parturients who used epidural analgesia and gave birth in Västernorrland County, Sweden over a 2-year period between 2015 and 2016. The study was approved by the Regional Ethical Review Board of Umeå, Sweden.
The study group consisted of nulliparous women who received epidural analgesia with bupivacaine and sufentanil during the active phase of labour. Matched women who did not receive epidural analgesia during the same period were recruited as the control group. The inclusion criteria were primiparity, age 18-40 years, gestational age between ≥37+0 and 42+0 weeks, and active phase of labour. The exclusion criteria were gestational age of <37+0 or ≥42+0 weeks, cephalopelvic disproportion, foetal malpresentation, cervical dilation < 4 cm, medical complications (e.g. preeclampsia, hypertension, diabetes), neurological disease and an allergy to anaesthesia. In this study, we included only women who were in the active phase of labour. The active phase of labour was defined as the presence of at least two of the following three criteria: regular painful contractions (every 3-5 min for more than 1 hour), membrane rupture and cervical dilation > 3 cm.
The method of analgesia: in the epidural group, a 17-gauge epidural catheter (SIMS Portex Ltd, Hythe, UK) was inserted under aseptic precautions in the lateral position at L3-L4 using the loss-of-resistance-to-saline technique. The epidural catheter was then secured and the parturient placed in the supine position with left uterine displacement and the head of the bed elevated 20-30 degrees. For pain relief, a ready-to-use solution of bupivacaine 0.6 mg/ml and sufentanil 0.5 μg/ml in a 100 ml bag was used; 10 ml of the mixture was given as a test dose and therapeutic dose. After five minutes, a continuous infusion of 5-10 mL/h of the analgesic solution was started to maintain labour analgesia.
The women who met the above criteria were identified by searching their medical records using Obstetrix (Siemens Corporation, Upplands Väsby, Sweden), a Swedish electronic medical record system designed specifically for prenatal care and childbirth. In Obstetrix, pregnancies are followed in a logical and structured manner, from enrolment in the prenatal health care centre, to arrival at the maternity unit, to the time of delivery.
Maternal and neonatal parameters were recorded. The maternal parameters were as follows: duration of the active phase of labour (cervical dilation ≥ 4 cm until delivery), duration of postpartum hospital stay (from delivery until discharge), mode of delivery, perineal tearing (anal sphincter damage), overall satisfaction with the childbirth experience (measured by the visual analogue scale [VAS]) and total intrapartum bleeding volume. The VAS is a simple method with good validity and reliability that is often used in patient satisfaction surveys [18]. Here, it was used to assess the mother's overall satisfaction with the birth at the time of her discharge from the hospital. The VAS comprised a straight horizontal line of fixed length (100 mm), each end representing either the worst imaginable (score of 1) or best possible (score of 10) birth experience. Perineal tears and injuries were classified into four grades according to the depth of injury and degree of anal sphincter involvement [19]. Grade 3 injuries involve the anal sphincter muscles, whereas grade 4 injuries additionally involve the intestinal mucosa (anal mucosa).
The neonatal outcomes were the Apgar score at 5 min and umbilical cord arterial blood gases (pH and base excess [BE]). The demographic characteristics of the mother (age and body mass index [BMI]) were also analysed.
Statistical analysis: The statistical analyses were performed using the IBM SPSS 23 Statistical Program (IBM Corp., Armonk, NY, USA). The distribution of the continuous data was not tested for normality, but the sample size was judged to be large enough to use parametric testing [20]. The Chi-square test was used for categorical data. Student's t-test was used to assess differences in the continuous variables between the EDA and non-EDA groups. Pearson's correlation analysis was used to evaluate correlations. Analysis of covariance was used to determine the differences in continuous variables between the EDA and non-EDA groups after adjusting for gestational age. Logistic regression analysis with adjustment for confounders was used to identify the risk factors for perineal injury. The results of the logistic regression analyses are presented as the odds ratio (OR) and 95% confidence interval. Categorical variables are expressed as proportions and numbers and continuous variables as means ± standard deviation. In all tests, the level of significance was set at p < 0.05.
Power analysis: An earlier study showed that epidural anaesthesia may cause a delay during the second stage of labour [21]. In order to detect a delay of 15 min with α = 0.05 and a power of 0.80, the required size of the study population was 70 subjects per group.
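For reference, the standard two-sample normal-approximation formula behind such a calculation is n = 2 (z_{1−α/2} + z_{power})² σ² / Δ² per group. The paper does not report the assumed within-group SD of labour duration, so σ is left as a parameter in the sketch below; under this formula, 70 subjects per group corresponds to an implied SD of roughly 30 min for a 15 min difference:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta: float, sd: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Two-sample sample size per group, normal approximation:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 * sd^2 / delta^2."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return ceil(2 * z**2 * sd**2 / delta**2)

if __name__ == "__main__":
    # delta = 15 min delay; sd = assumed within-group SD of labour duration
    for sd in (30, 60, 120):
        print(f"sd = {sd:3d} min -> n = {n_per_group(15, sd)} per group")
```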
Results
A total of 1529 women met the selection criteria, and after applying the exclusion criteria, 1449 nulliparous women were enrolled in the study. The subjects were divided into two groups according to whether they had been given EDA (EDA group, n = 615) or not (non-EDA control group, n = 834) during labour.
Some women had incomplete data, which is why the group sizes vary among the different parameters.
The baseline characteristics of both groups are shown in Table 1. Although all women in both groups had full-term pregnancies, gestational age was higher in the EDA group than in the non-EDA group (p < 0.001). There was a significant difference in the delivery method between the groups, in that the rate of assisted vaginal delivery was significantly higher in the EDA group than in the non-EDA group (15.6% and 11.3%, respectively, p < 0.05); however, the rate of caesarean section delivery was similar. The duration of the active phase was 118 min longer in the EDA group than in the non-EDA group (p < 0.001). There was no significant difference in the length of postpartum hospital stay between the groups.
Total bleeding volume was higher in the EDA group compared with the non-EDA group, but the difference was not significant after adjusting for gestational age (Table 2). The Chi-square test showed no difference between the two groups in terms of grade 3 or 4 perineal tears. EDA increased the risk of instrumental delivery (Table 2), which in turn increased the risk of anal sphincter injury. EDA showed no effect on the rate of caesarean sections (Table 2).
All of the women evaluated their childbirth experience using the VAS scale.
Compared with the non-EDA group, the VAS score was significantly higher in the EDA group, indicating a higher level of satisfaction in these women (p < 0.05, Table 2).
Correlation analysis revealed many associations between the study parameters, but most were weak. In the EDA group, gestational age was negatively correlated with BE (r = −0.112, p < 0.05; versus non-EDA group: NS). In the EDA group, the Apgar score at 5 min, umbilical cord arterial blood pH and negative BE were significantly lower (p < 0.001, p < 0.001 and p < 0.01, respectively) compared with the non-EDA group (Table 3). The difference in BE between the groups was reduced after adjusting for gestational age in an analysis of covariance, but it remained significant. No significant effect of gestational age was seen on any of the other neonatal variables. In both groups, maternal BMI was positively correlated with neonatal weight (EDA group: r = 0.189, p < 0.001; non-EDA group: r = 0.112, p < 0.001). In the non-EDA group, there was a negative correlation between the Apgar score at 5 min and BMI (r = −0.071, p < 0.05; versus EDA group: r = −0.004, p = 0.9). In both groups, the Apgar score was positively correlated with BE (EDA group: r = 0.236, p < 0.001; non-EDA group: r = 0.24, p < 0.001) and with umbilical cord arterial blood pH (EDA group: r = 0.239, p < 0.001; non-EDA group: r = 0.263, p < 0.001).
Discussion
The results of this study showed that EDA affected labour by increasing its duration and the likelihood of instrumental delivery, but had no effect on the caesarean section rate. Although the neonates in the EDA group had lower scores on well-being parameters compared with those in the non-EDA group, the values were within normal limits.
Our findings are in agreement with those of many earlier studies [13]. Cambic et al. reported an association between severe, early-onset labour pain and poorer obstetric outcomes and overall childbirth experience [31]. Lieberman et al. suggested that women who had EDA were slightly shorter, had a longer gestational period and delivered slightly heavier babies compared with those who did not have EDA, and that they had enrolled earlier in the labour process and had slower cervical dilation even before they received EDA [29]. We detected no differences in BMI or neonatal weight between the two groups, although gestational age was higher in the EDA group.
As well as a longer duration of the active phase of labour in the EDA group, there was also an association with higher gestational age, which could be the factor underlying the prolonged active phase of labour and could perhaps explain why instrumental deliveries were more frequent in this group. However, the analysis of gestational age as a cause of this difference showed no effect of this parameter as a confounder. On the other hand, Thorp et al. showed a link between instrumental delivery and EDA use whether cervical dilation was rapid or slow [30]. Some studies investigated the duration of the first stage of labour in parturients administered EDA and found no effect of EDA on duration [13] [22]. In contrast, an earlier study showed that in nulliparous women, EDA increases the duration of the first stage of labour by approximately 30 min [23], and this finding has been supported by other studies [33] [34]. A meta-analysis suggested that EDA prolongs the first stage of labour by approximately 42 min [35]. However, other studies showed that EDA use during early-stage labour shortens the first stage of labour compared with opioid pain relief [32] or compared with EDA use later in labour [36].
As in our study, many other studies showed that EDA increases the risk of instrumental delivery [13] [23], which in turn can increase the risk of more serious perineal injuries [37]. An earlier study indicated that severe perineal injuries are not caused by EDA itself but rather by other factors [38]. In a large systematic review, however, an association was found between grade 3/4 perineal tears and EDA [25]. Robinson et al. showed that EDA is strongly associated with grade 3/4 perineal tears, but after adjusting for other factors (e.g. neonatal weight, oxytocin use and rates of instrumental delivery and episiotomy, all of which were more frequent in the EDA group), these associations were no longer observed [39]. In our study, there was a small but non-significant difference between the groups, in that the EDA group had a higher rate of grade 3/4 perineal tears. We found a significantly higher bleeding volume at birth in the EDA group, and bleeding volume was positively correlated with perineal injury in both groups, but neither group's bleeding volume range exceeded the postpartum haemorrhage cut-off volume (≥1000 ml). After adjusting for gestational age as a confounder, the difference in total bleeding volume between the groups was no longer significant.
A previous study revealed an association of EDA with an increased risk of induction of labour, episiotomy, instrumental delivery, prolonged second stage of labour and more severe maternal perineal injury, all of which increase the risk of postpartum bleeding [40].
In an Australian study, a relationship was observed between EDA and retained placenta, but not between EDA and postpartum bleeding; however, this result was questionable because the analyses were not adjusted for confounding factors [41]. In contrast, another study showed that the risk of postpartum haemorrhage doubled when EDA was used, and the risk remained even after correction for multiple maternal- and pregnancy-related factors [42]. Magann et al. confirmed the association between EDA use and increased postpartum bleeding [43]. Earlier studies showed that EDA increases the rate of oxytocin use [12] [13], suggesting that uterine contractions are adversely affected by EDA, which in turn can contribute to greater bleeding, a prolonged active phase of labour and an increased rate of instrumental delivery. Although we found an increase in bleeding volume in the EDA group, the association detected between gestational age and bleeding may suggest gestational age to be the underlying cause of the difference in bleeding volume between the groups, given the higher gestational age in the EDA group compared with the non-EDA group.
The present findings of lower values in the EDA group for the Apgar score at 5 min, umbilical cord arterial blood pH and negative BE indicate that EDA use during labour has a negative impact on neonates. A Cochrane study showed that the incidence of foetal asphyxia (Apgar score at 5 min < 7) was not increased among women using EDA compared with those not using EDA [13]. In a systematic review, the authors found (in 33 of the 34 articles) no connection between EDA use and the Apgar score at 5 min [25], whereas another study suggested an association between EDA use and a low Apgar score at 5 min (<7) [42]. Leighton and Halpern, on the other hand, showed that EDA reduces the risk of a low Apgar score at 1 min (<7) (OR: 0.54), suggesting positive effects of EDA on placental circulation. They also showed that EDA did not affect neonatal oxygenation, umbilical cord arterial blood pH or the Apgar score at 5 min [14]. This can be explained by the finding that pain relief leads to reduced activation of the mother's sympathetic nervous system and may be beneficial to placental circulation, which improves the neonate's acid-base status [2]. Even during the course of a normal and uncomplicated delivery, the foetus undergoes a number of periods of reduced oxygen supply due to uterine contractions, which decrease the placental circulation. These short hypoxic episodes are usually well tolerated, as the foetus has several defence mechanisms to cope with impaired oxygenation [44].
Umbilical cord arterial blood pH is an important outcome parameter in obstetric research because a low pH is strongly correlated with neonatal mortality and morbidity [45]. Major abnormalities in blood pH are not considered compatible with survival, so it is extremely important that the body maintains a normal pH range; to achieve this, the body has several buffer systems that regulate changes in the hydrogen-ion concentration [44]. Both pH and BE are useful measurements for assessing the degree of foetal metabolic acidosis that may occur during labour. However, pH, which is used most frequently, is not the best parameter for estimating the cumulative exposure to perinatal hypoxia because its logarithmic scale does not provide a linear measurement of acid accumulation. Both respiratory and metabolic changes affect pH. BE, however, changes linearly with the level of metabolic acid accumulation and is also adjusted for variations in the partial pressure of carbon dioxide [44] [46]. Lieberman et al. suggested that a lower umbilical cord blood pH can predict a poor neonatal outcome (low Apgar score) better than a lower BE. They evaluated the effect of EDA on umbilical cord blood gases after adjusting for neonatal outcomes, and concluded that EDA use is not associated with umbilical cord arterial blood pH [25]. A meta-analysis suggested that EDA has positive effects on the BE level in the umbilical cord artery [47]. In that same study, BE was more sensitive than pH as an indicator of neonatal acidosis because umbilical cord arterial blood pH can be increased by the mother's hyperventilation, which is commonly induced by pain. Thus, women with inadequate pain relief may have a higher umbilical cord arterial blood pH than those with good pain relief, which may partly explain the higher pH in the non-EDA group in our study. In our study, the lower pH in the umbilical cord artery of neonates whose mothers used EDA is of no clinical significance, as both groups had an Apgar score at 5 min > 7, which is correlated with good long-term outcomes [48]; however, newborns in the EDA group potentially experienced greater stress and consumed more reserves (lower BE) to compensate for the poorer conditions (lower pH). In a healthy foetus with normal reserves, a decrease in pH (Table 3) has no clinical significance, but for a vulnerable foetus even a slight decrease in pH may be important, because the rate of unfavourable neurological outcomes increases when the pH decreases below 7.1 [49].
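A quick numeric illustration of the logarithmic-scale point, using textbook physiologic values rather than data from this study: equal increments of hydrogen-ion concentration produce progressively smaller pH steps, which is why BE tracks cumulative acid load more linearly than pH does.

```python
import math

def ph_from_h(h_nmol_per_l: float) -> float:
    """pH from hydrogen-ion concentration given in nmol/L (1 nmol/L = 1e-9 mol/L)."""
    return -math.log10(h_nmol_per_l * 1e-9)

if __name__ == "__main__":
    # equal 20 nmol/L increments of [H+] give shrinking pH drops: 0.18, 0.12, 0.10
    for h in (40, 60, 80, 100):
        print(f"[H+] = {h:3d} nmol/L -> pH = {ph_from_h(h):.2f}")
```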
Maternal satisfaction at birth is becoming increasingly important in our modern healthcare system; however, its assessment is difficult because it is a multifactorial subject for which no gold standard exists. One potential method of measuring satisfaction is the well-known VAS. In this study, the women in the EDA group had a higher VAS score for overall childbirth experience, suggesting that they were more satisfied with their experience than women who did not use EDA. This finding has been supported by other studies, suggesting that EDA is the gold standard resulting in adequate pain relief and increased patient satisfaction [24] [31] [33]. Effective pain management correlates strongly and positively with increased patient satisfaction [50] [51] [52]. As expected, in our study the VAS score was negatively associated with the length of postpartum hospital stay, total bleeding volume and mode of delivery in both groups, and with perineal injury in the non-EDA group. This suggests that women with a less complicated delivery are more satisfied with their childbirth experience. The childbirth experience is important for a woman, and a negative experience can affect the mother and her family relationships adversely in the long run [53]. However, in contrast to our results, a Cochrane study suggested that parturients who used EDA were no more satisfied with their childbirth experience than those who did not use EDA [13]. Although the childbirth experience and a woman's satisfaction are multidimensional and difficult to evaluate using a scale, the VAS is still an important and helpful instrument for healthcare professionals to improve maternal care and allocate resources more efficiently.
Conclusion
In conclusion, our study suggests that EDA adversely affects the mode of delivery and neonatal-maternal parameters, but that women who use EDA are more satisfied with their overall childbirth experience. EDA has become a popular method for reducing pain during labour. The impacts of epidural use are not limited to the mother and can extend to the neonate at birth. It is important for healthcare staff to provide information on this topic to all pregnant women. The limitations of this study are those inherent to its retrospective design, which probably influenced the interpretation of the findings; therefore, more and better-designed randomised prospective studies addressing the efficacy and safety of EDA are needed.
Table 1.
Demographics of the parturients enrolled in the study.
Table 2.
Maternal and delivery outcomes.
NS: non-significant; EDA: epidural analgesia using bupivacaine and sufentanil; VAS: visual analogue scale score; n: number of patients.
Table 3.
Neonatal outcomes. NS: non-significant; EDA: epidural analgesia using bupivacaine and sufentanil; n: number of patients.
"year": 2018,
"sha1": "66510e220f1bd4ec4b87a140020c5ea3967e5c90",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=86411",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "66510e220f1bd4ec4b87a140020c5ea3967e5c90",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
251000380 | pes2o/s2orc | v3-fos-license | Fiber Post Removal Using a Conservative Fully Guided Approach: A Dental Technique
This report describes the usefulness of an endodontic template for the removal of a fiber post. A 40-year-old man presented with discomfort in the maxillary left canine. Clinical and radiographic examinations showed tooth #23 with a permanent core material retained with fiber post along with a persistent apical radiolucency. Among the various treatment modalities, nonsurgical root canal retreatment with fiber post removal using a conservative fully guided approach was proposed. After obtaining both the cone-beam computed tomographic images and the cast surface scan, their data were merged using implant planning software (ImplaStation for Windows x64 Bit Beta Version, ProDigiDent, Miami, Florida, USA) and superimposed. The drilling space was planned based on the location, diameter, and apical extent of the fiber post and was virtually overlapped and transferred clinically using a resin template to drill through the fiber post. With guides in position over the rubber dam, drilling was made with increments of 2 mm using a size 4 long-shank round bur (Thomas, Bourges, France) until it exposed the coronal gutta-percha. As soon as the canal was located, K3 rotary files (Sybron Endo, Orange, USA) were used along with chloroform to remove the old obturating materials. Then, additional shaping and cleaning were done with ProTaper Next rotary files (Dentsply Sirona, Ballaigues, Switzerland), sizes X2 and X3, and 5.25% NaOCl irrigation, respectively. The root canal was then dried with paper points and obturated with gutta-percha and AH Plus sealer (Dentsply Sirona, Ballaigues, Switzerland) using the continuous-wave compaction technique. Finally, the tooth was temporarily restored using the double seal technique with zinc oxide and zinc sulfate-based temporary material (Cavit W; 3M ESPE, St. Paul, MN, USA) and resin-modified glass ionomer material (Photac Fil; ESPE, Norristown, PA, USA) filling materials and referred for the final restoration.
Introduction
Recurring periapical pathology can develop after inadequate nonsurgical root canal treatment. A common procedure for clinicians to encounter is the retreatment of endodontically treated teeth with posts [1]. Metal posts retained with traditional types of cement, such as zinc phosphate, can usually be removed; however, in recent years, adhesively bonded glass [2], carbon [3], or quartz fiber [4] posts have become popular, replacing metal posts. The fiber posts are bonded into the root canal space with adhesive materials such as composite resins or glass ionomers, which are reported to be more difficult to remove [5,6]. It is reported that fiber posts can be fragmented and removed by using a microscope along with drilling with long-shank round burs, ultrasonic tips, and/or special removal kits [7,8]. Nevertheless, currently used post removal techniques frequently result in procedural errors such as excessive removal of intraradicular dentin, deviation from the root axis, and perforation of the root structure [9]. Furthermore, these techniques are time-consuming and dependent on the clinician's experience [7,[9][10][11][12]. Post removal requires fragmentation within a limited anatomic area that is difficult to visualize, which may result in excessive substance loss leading to iatrogenic errors that compromise stability and hence the tooth prognosis [13].
Cone-beam computed tomographic (CBCT) imaging has recently been recommended in endodontics as a diagnostic aid in root canal treatment planning [14,15]. It lays the foundation for the 3D printing of endodontic templates [16]. In guided endodontics, the combined use of CBCT imaging and intraoral scanning allows the manufacturing of a 3D endodontic template. This template facilitates a straight access cavity to the root canal by guiding the endodontic bur to the exact area [17,18]. Guided endodontics has previously been performed and reported in the literature as a safe [19] and predictable technique [20], which in turn leads to an improved long-term prognosis, as it helps to preserve the dental structure and avoid accidents such as deviations and perforations [18]. The present case report proposes a fully guided preparation as an attempt to minimize dentin loss and eliminate iatrogenic errors during fiber post removal.
Case Report
A 40-year-old healthy male patient, ASA I (according to the American Society of Anesthesiologists classification), presented to the postgraduate endodontic clinic at King Abdulaziz Medical City, complaining of discomfort in the upper left anterior area for the past month. On clinical examination, tooth #23 exhibited sensitivity to percussion, while mobility was normal (grade I) as tested using the ends of two metallic instruments. The periodontal probing depths were checked with a periodontal probe and found to be within normal limits. Radiographic interpretation revealed a permanent core retained with a fiber post of unknown source, with an inadequate root canal filling and periapical radiolucency (Figure 1). A CBCT scan was taken using a Planmeca ProMax 3D S unit (Planmeca OY, Helsinki, Finland) operated at 80 kV and 3.0 mA with a voxel size of 0.15 mm to fully assess the anatomy of tooth #23 and the surrounding structures. The imaging revealed apical root resorption and a radiolucent area with intact buccal and palatal plates. The obturation was found to be 2.15 mm short of the apex, and the fiber post extended to the middle third (Figure 2). Based on these findings, a diagnosis of a previously root canal-treated tooth with symptomatic apical periodontitis was reached. Among the various treatment modalities, nonsurgical root canal retreatment with fiber post removal using a conservative fully guided approach was proposed. The procedure's benefits and risks were explained to the patient, and consent was obtained.
An impression of the upper arch was made using polyvinyl siloxane material (Imprint 4, 3M, Saint Paul, Minnesota, USA) and poured to fabricate a diagnostic cast. The cast was then scanned using the desktop laser scanner R700 Desktop (3Shape, Copenhagen, Denmark). Then, both the DICOM file from the CBCT images and the cast surface scan file were merged using implant planning software (ProDigiDent, ImplaStation for Windows x64 Bit Beta Version) and superimposed by selecting three reference landmarks in both files. The template was made with 3.5 mm thickness and 0.15 mm offset, and was extended to cross the midline for maximal stability. The drilling space was planned based on the location, diameter, and apical extent of the fiber post in the sagittal view. It was found to be 20.74 mm long with a 1.48 mm diameter apically. The space was virtually overlapped over the fiber post to drill through it with minimal dentin loss (Figure 3). The endo-guide template was then created and exported for printing using digital light processing (DLP) (M-One; MAKEX Technology, Zhejiang, China) technology. A 3D printer (MiiCraft 125; MiiCraft, Jena, Germany) was used with a photo-polymerized biocompatible polymer resin (Freeprint Temp; DETAX GmbH & Co., Ettlingen, Germany) to print the template. The printer settings included 50 μm layer thickness, 405 nm wavelength, and a curing time of 2.40 s per layer. To guide the bur in the created drilling space, a guiding sleeve with 3.0 mm external diameter, 1.7 mm internal diameter, and 5 mm length was virtually customized using CAD software (Google SketchUp) (SketchUp, Trimble Navigation, Sunnyvale, California, USA) and printed using a selective laser melting system (GE Additive, Boston, MA, USA) with standard parameters. Both the custom sleeve and endo-guide template were integrated to fully guide the bur during the fiber post removal. In the second visit, the tooth was anesthetized using 2% lidocaine with 1:80,000 epinephrine (Lignospan Special; Septodont, Saint-Maur-des-Fossés, France) and isolated with a rubber dam. The endo-guide template was fitted inside the patient's mouth (Figure 4(a)). After a satisfactory assessment of fit and stability, and with a pumping movement, drilling was performed in increments of 2 mm using a high-speed handpiece with a size 4 long-shank round bur (Thomas, Bourges, France), which has a 1.4 mm head diameter, 1.6 mm shank diameter and 28 mm shank length. The full procedure was performed by an endodontic resident and recorded under a dental operating microscope (ZEISS OPMI pico; Carl Zeiss Meditec AG, Oberkochen, Germany). The procedure took 14 min and 55 s to expose the coronal gutta-percha inside the canal (Figures 4(b) and 4(c)). Then, K3 rotary files (Sybron Endo, Orange, USA), sizes 25/.06 taper and 30/.06 taper, were used along with chloroform to remove the old obturating materials. A size 30 K-file was then inserted to verify the working length with an electronic apex locator (Root ZX; J Morita, Tokyo, Japan), and the length was confirmed radiographically (Figure 5(a)). Then, additional shaping and cleaning were done with ProTaper Next rotary files (Dentsply Sirona, Ballaigues, Switzerland), sizes X2 and X3, and 5.25% NaOCl irrigation, respectively. The root canal was then dried with paper points and obturated with gutta-percha and AH Plus sealer (Dentsply Sirona, Ballaigues, Switzerland) using the continuous-wave compaction technique.
Finally, the tooth was temporarily restored using the double seal technique with zinc oxide and zinc sulfate-based temporary material (Cavit W; 3M ESPE, St. Paul, MN, USA) and resin-modified glass ionomer material (Photac Fil; ESPE, Norristown, PA, USA) filling materials, and the patient was referred for the final restoration.
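The dimensions reported above allow a small arithmetic sanity check of the guide geometry. The sketch below uses only the figures given in the case (bur head 1.4 mm, shank 1.6 mm x 28 mm; sleeve 1.7 mm ID x 5 mm; planned drilling space 20.74 mm x 1.48 mm; 2 mm increments) and deliberately ignores template thickness and standoff, so it is a simplification rather than part of the reported workflow:

```python
from math import ceil

# dimensions reported in the case (mm)
bur_head, bur_shank, shank_len = 1.4, 1.6, 28.0
sleeve_id, sleeve_len = 1.7, 5.0
drill_len, space_dia = 20.74, 1.48
step = 2.0

print(f"sleeve/shank clearance: {sleeve_id - bur_shank:.2f} mm")  # 0.10 mm
print(f"space/head clearance:   {space_dia - bur_head:.2f} mm")   # 0.08 mm
print(f"2 mm increments needed: {ceil(drill_len / step)}")        # 11 pecks
print(f"reach beyond sleeve:    {shank_len - sleeve_len:.1f} mm "
      f"(planned depth {drill_len} mm)")
```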
Discussion
The present case report describes a guided technique for the removal of a fiber post during nonsurgical endodontic retreatment using CBCT and a 3D printer. Evaluation of the preoperative periapical radiograph of tooth #23 confirmed the presence of a fiber post that extended to the middle third of the root. Overall, fiber posts can be removed using one or a combination of several techniques, such as ultrasonic vibration, drilling with long-shank burs, and special post removal kits [8]. Nevertheless, currently used post removal techniques frequently result in procedural errors such as excessive removal of intraradicular dentin, deviation from the root axis, and perforation of the root structure [9]. The difficulty of post removal varies according to post type, design, material, length, and cementing material [5]. CBCT is a reliable and noninvasive tool that has gained widespread use in the diagnosis and treatment planning of dentoalveolar conditions. The American Association of Endodontists and the American Academy of Oral and Maxillofacial Radiology have published a joint position statement related to the use of CBCT [21]. The need for a CBCT scan can be considered if careful evaluation of differently angled periapical radiographs fails to yield conclusive information or if further information in the buccolingual dimension is still required [21]. In cases deemed appropriate for the scan, a narrow field of view, which is associated with a reduced radiation dose and higher spatial resolution, is advisable [21]. Hence, CBCT should only be used as an adjunctive tool in certain clinical situations, such as assessment of teeth with suspected complex morphology, localization of obliterated canals, evaluation of the endodontic treatment outcome, and planning of nonsurgical and surgical endodontic retreatment, as well as dentoalveolar trauma and resorptive defects [21,22]. Furthermore, CBCT is frequently used in oral implantology for three-dimensional planning to quantify alveolar bone levels and to localize vital anatomic structures [23], as well as in guided implant surgery to help with implant site preparation and implant placement [24]. Zehnder and his colleagues introduced the concept of "guided endodontics" to facilitate access cavity preparation for teeth with root canal obliteration and reported that deviations between planned and prepared access cavities ranged from 0.17 to 0.47 mm at the tip of the bur, while the mean angle deviation was 1.81° [18]. In this case, a custom sleeve was fabricated and integrated with the endo-guide template to adequately guide the drilling pathway without the risk of the resin being damaged by over-heating or undesirable resin drilling [25]. Therefore, in comparison to guided implant surgery, the accuracy of guided endodontics is considered relatively high [26].
The guided preparation that was used to remove the fiber post in our study conserved as much tooth substance as possible. A previous study revealed that the mean amounts of prepared dentin in traditional and conservative guided approaches to access root canal systems were 49.9 and 9.8 mm³, respectively. Moreover, unlike traditional access preparations, the success of the guided approach is not influenced by the operator's experience [27]. Hence, the conservative guided technique significantly reduces access cavity size, follows a clear path, and preserves the tooth structure [28].
The guided approach presented in this case report has some limitations. For instance, the technique requires prior training for the clinician with an associated learning curve. Also, guided endodontic procedures require the use of CBCT to permit 3D evaluation of the target area. CBCT is associated with more ionizing radiation than conventional radiographs [15], which might be concerning for some patients. Moreover, the presented approach is sensitive to distortions or errors made during intraoral scanning, 3D virtual planning, and printing of the guide. Another limitation of guided endodontics is that it does not enable immediate intervention due to the need for CBCT imaging and intraoral scanning in advance.
The rapid progress of digital dentistry workflows, supported by evolving technology, will continue to improve the accuracy of guided endodontics. This progress will give rise to the widespread implementation of this digitally supported technique in dental practice.
Conclusion
A guided endodontics template created with virtual planning facilitated complete removal of the fiber post with no iatrogenic errors observed and shortened treatment time. Furthermore, to produce predictable results, this approach does not necessitate specialized training or extensive clinical experience. | 2022-07-24T15:14:13.132Z | 2022-07-22T00:00:00.000 | {
"year": 2022,
"sha1": "3ca5681476d79f892608458fe13627f0d5d8207c",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/crid/2022/3752466.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dd18abff00f2453e7821137a3ec147d49a933951",
"s2fieldsofstudy": [
"Medicine",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
256105748 | pes2o/s2orc | v3-fos-license | ProKD: An Unsupervised Prototypical Knowledge Distillation Network for Zero-Resource Cross-Lingual Named Entity Recognition
For named entity recognition (NER) in zero-resource languages, utilizing knowledge distillation methods to transfer language-independent knowledge from the rich-resource source languages to zero-resource languages is an effective means. Typically, these approaches adopt a teacher-student architecture, where the teacher network is trained in the source language, and the student network seeks to learn knowledge from the teacher network and is expected to perform well in the target language. Despite the impressive performance achieved by these methods, we argue that they have two limitations. Firstly, the teacher network fails to effectively learn language-independent knowledge shared across languages due to the differences in the feature distribution between the source and target languages. Secondly, the student network acquires all of its knowledge from the teacher network and ignores the learning of target language-specific knowledge. Undesirably, these limitations would hinder the model's performance in the target language. This paper proposes an unsupervised prototype knowledge distillation network (ProKD) to address these issues. Specifically, ProKD presents a contrastive learning-based prototype alignment method to achieve class feature alignment by adjusting the distance among prototypes in the source and target languages, boosting the teacher network's capacity to acquire language-independent knowledge. In addition, ProKD introduces a prototypical self-training method to learn the intrinsic structure of the language by retraining the student network on the target data using samples' distance information from prototypes, thereby enhancing the student network's ability to acquire language-specific knowledge. Extensive experiments on three benchmark cross-lingual NER datasets demonstrate the effectiveness of our approach.
Introduction
Named Entity Recognition (NER) is a fundamental sub-task of information extraction that aims to locate and classify text spans into predefined entity classes such as locations, organizations, etc. (Ma et al. 2022b). It is often employed as an essential component for tasks such as question answering (Cao et al. 2022) and coreference resolution (Ma et al. 2022a). Despite the impressive performance recently achieved by deep learning-based NER methods, these supervised methods are limited to a few languages with rich entity labels, such as English, due to the reasonably large amount of human-annotated training data required. In contrast, the total number of languages currently in use worldwide is about 7,000, the majority of which contain limited or no labeled data, constraining the application of existing methods to these languages (Wu et al. 2020c,b). Hence, cross-lingual transfer learning is gaining increasing attention from researchers, as it can leverage knowledge from high-resource (source) languages (e.g., English) with abundant entity labels to overcome the data scarcity problem of low- (zero-) resource (target) languages. In particular, this paper focuses on the zero-resource scenario, where there is no labeled data in the target language.
To improve the performance of zero-resource cross-lingual NER, researchers have conducted intensive research and proposed various approaches (Jain et al. 2019; Wu et al. 2020c; Pfeiffer et al. 2020). Among these, the knowledge distillation-based approaches (Wu et al. 2020b,a) have recently shown encouraging results. These approaches typically train a teacher NER network using source language data and then leverage the soft pseudo-labels produced by the teacher network for the target language data to train the student NER network. In this way, the student network is expected to learn language-independent knowledge from the teacher network and perform well on unlabeled target data (Hinton, Vinyals, and Dean 2015).
While significant progress has been achieved by knowledge distillation-based approaches for cross-lingual NER, we argue that these approaches still have two limitations. First, knowledge distillation relies heavily on the shared language-independent knowledge acquired by the teacher network across languages. As is known, there are differences in the feature distribution between the source and target languages, yet existing techniques employ only the source language for teacher network training. As a result, the teacher network tends to learn source-language-specific knowledge and cannot effectively grasp the shared language-independent knowledge. Second, under the knowledge distillation learning mechanism, the student network aims to match the pseudo soft labels generated by the teacher network for the target language. Consequently, the student network acquires all of its knowledge from the teacher network and ignores the acquisition of target language-specific knowledge. Undesirably, these two limitations hinder the model's performance in the target language.
In this paper, we propose an unsupervised Prototypical Knowledge Distillation network (ProKD), which employs contrastive learning-based prototype alignment and prototypical self-training to address the two above limitations, respectively. Specifically, we rely on performing class-level alignment between the source and target languages in semantic space to enhance the teacher network's capacity for capturing language-independent knowledge. We argue that class-level alignment can bridge the gap in the feature distribution and force the teacher network to better learn the shared semantics of entity classes across languages (Van Nguyen et al. 2021; Xu et al. 2022). To do this, we choose prototypes (Snell et al. 2017), i.e., the class-wise feature centroids, rather than individual samples, for class-level alignment, because prototypes are robust to outliers and friendly to class-imbalanced tasks (Qiu et al. 2021; Zhang et al. 2021). In order to pull the prototypes of the same class closer and push the prototypes of different classes farther apart across languages, we leverage classical contrastive learning (Chen et al. 2020) to adjust the distance among class prototypes. Thus, the class-level representation alignment between the source and target languages is achieved.
Furthermore, we present a prototypical self-training method to enhance the student network's ability to acquire target language-specific knowledge. In particular, we establish pseudo-hard labels for unlabeled target samples based on their softmax-normalized relative distances to all prototypes (i.e., the prototype probability) and then retrain the network using these pseudo-labels. Since the prototypes accurately represent the clustering distribution underlying the data, the prototypical self-training enables the student network to learn the intrinsic structure of the target language, thus revealing language-specific knowledge, such as a token's label preference. In addition, while calculating the pseudo-hard labels, the class distribution probabilities generated by the teacher network are incorporated into the prototype probabilities to improve the quality of the pseudo-hard labels and facilitate self-training.
In summary, we make four contributions: (1) We propose the ProKD model for the zero-resource cross-lingual NER task, which improves the model's generalization to the target language.
(2) We propose a contrastive learning-based prototype alignment method to enhance the teacher network's ability to acquire language-independent knowledge. (3) We propose a prototypical self-training method to enhance the student network's ability to acquire target language-specific knowledge. (4) Experimental results on six target languages validate the effectiveness of our approach.
Related Works
Cross-lingual NER

Current research on cross-lingual NER with zero resources falls into three main branches. The translation-based methods rely on machine translation and label projection (Xie et al. 2018a; Jain et al. 2019) to construct pseudo-training data for the target language, all of which involve high human costs and introduce label noise. The direct transfer-based methods resort to training a NER model on the source language and directly transferring it to the target language (Wu and Dredze 2019; Wu et al. 2020c; Pfeiffer et al. 2020). These approaches fail to exploit information from the unlabeled target language, resulting in non-optimal cross-lingual performance. The knowledge distillation-based methods encourage the student network to learn language-independent knowledge from the teacher network. Specifically, Wu et al. (2020a) distill knowledge directly from multi-source languages. AdvPicker leverages adversarial learning to select target data to alleviate the overfitting of the model to source data. We argue that the above approaches fail to effectively learn shared language-independent knowledge and ignore the acquisition of target language-specific knowledge.
Knowledge Distillation
Knowledge distillation enables knowledge transfer from the teacher network to the student network (Hinton, Vinyals, and Dean 2015), where the student network is optimized by fitting the soft labels generated by the trained teacher network. Since the soft targets have a high entropy value, they provide more information per training case than hard targets (Hinton, Vinyals, and Dean 2015); thus, the student network can learn from the teacher network and perform well on unlabeled data. Knowledge distillation achieves significant results in various tasks such as model compression, image classification (Hinton, Vinyals, and Dean 2015), dialogue generation (Peng et al. 2019), and machine translation (Weng et al. 2020). In this paper, we choose knowledge distillation as the basic framework of our proposed approach for zero-resource cross-lingual NER.
Methodology
The NER task is modeled as a sequence labeling problem in this paper, i.e., given a sentence X = {x_0, ..., x_i, ..., x_L}, the NER model is expected to produce a label sequence Y = {y_0, ..., y_i, ..., y_L}, where y_i denotes the entity class corresponding to token x_i. Following the setting of previous works (Wu et al. 2020a,b), given a labeled source language dataset {(X^s_m, Y^s_m)}_{m=1}^{n_s} ~ D_s and an unlabeled target language dataset {X^t_m}_{m=1}^{n_t} ~ D_t, zero-resource cross-lingual NER aims to train a model with the above two datasets and expects the model to obtain good performance on target language data.
Overall Architecture
In this section, we describe the proposed approach, ProKD, for cross-lingual NER with zero resource, whose architecture is shown in Fig 1 and Fig 2. The core of ProKD is a knowledge distillation framework that includes a teacher network and a student network. In more detail, the teacher network employs a prototype class alignment method based on contrastive learning, which enhances its ability to acquire language-independent knowledge. The student network utilizes a prototypical self-training approach combined with the class distribution probability of the teacher network, which enhances its ability to learn language-specific knowledge.
Zero-resource Cross-Lingual NER via Knowledge Distillation
The knowledge distillation-based methods for zero-resource cross-lingual NER typically follow a two-stage training pipeline. First, the teacher network is trained with labeled source data, and then language-independent knowledge is distilled to the student network. Given a sequence X^s_m = {x^s_0, ..., x^s_i, ..., x^s_L} from the source language data, the encoder f_θ of the teacher network maps it into the hidden space and outputs the representations H^s_m = {h^s_0, ..., h^s_i, ..., h^s_L}. Following previous works (Wu et al. 2020a,b), we adopt multilingual BERT (mBERT for short) (Devlin et al. 2019) as the feature encoder. Then we leverage a classifier with a softmax function to obtain the output p^s_i for each token x^s_i, and the cross-entropy loss for the teacher network can be formulated as

$$L_{CE}(\Theta_{tea}) = -\frac{1}{n_s} \sum_{m=1}^{n_s} \sum_{i=0}^{L} y^s_i \log p^s_i,$$

where Θ_tea denotes the parameters of the teacher network to be optimized, n_s is the number of sentences in dataset D_s, and y^s_i represents the golden label of token x^s_i. Benefiting from the shared feature space of pre-trained mBERT and the task knowledge from the labeled source data, we can directly utilize the teacher network to infer the class probabilities p^t_i of each token in a sequence X^t_m from the unlabeled dataset D_t. Then the student network, consisting of a feature encoder mBERT and a classifier with a softmax function, is trained using these class probabilities as "soft targets" on the unlabeled dataset. To approximate the probabilities p^t_i, the training objective for the student network can be formulated as

$$L_{KD}(\Theta_{stu}) = \frac{1}{n_t} \sum_{m=1}^{n_t} \sum_{i=0}^{L} \lVert p^t_i - q^t_i \rVert^2,$$

where p^t_i and q^t_i denote the probability distributions produced by the teacher and the student network for x^t_i, respectively. Here, following previous works (Yang et al. 2020; Wu et al. 2020a), we use the MSE loss to measure the prediction discrepancy of the two networks.
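For concreteness, here is a minimal PyTorch sketch of this token-level distillation objective (the paper reports a PyTorch implementation, but the function below, its tensor shapes, and the padding mask are illustrative assumptions, not the authors' code):

```python
import torch
import torch.nn.functional as F

def kd_loss(teacher_logits, student_logits, mask):
    """MSE between teacher and student per-token class probabilities.

    teacher_logits, student_logits: (batch, seq_len, num_classes)
    mask: (batch, seq_len), 1 for real tokens, 0 for padding.
    """
    with torch.no_grad():                      # teacher gives fixed soft targets
        p = F.softmax(teacher_logits, dim=-1)
    q = F.softmax(student_logits, dim=-1)
    per_token = ((p - q) ** 2).sum(dim=-1)     # squared discrepancy per token
    m = mask.float()
    return (per_token * m).sum() / m.sum()     # average over real tokens only
```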
Prototypical Class-wise Alignment
Here, we present our method, prototypical class-wise alignment, to boost the teacher network's capacity to acquire language-independent knowledge.
Due to the absence of annotations on target language data, the class-wise alignment between the source and target languages is not trivial. To address this, as shown in Fig 1, we first calculate target class prototypes from the class distribution probabilities produced by the teacher network on the target data, and then leverage the prototype alignment between the two languages to achieve class-wise alignment. We use prototype alignment rather than sample alignment since the prototype is robust to outliers, and it can alleviate the negative impact of the noise (Xie et al. 2018b) introduced by the teacher network on the target data. Additionally, the prototype treats all classes equally, which is crucial for the NER task, as non-entity type samples constitute the bulk of the overall samples.
To be specific, for the source language, we first obtain the token representation h^s_i of each token x^s_i using mBERT, and then, with the help of the golden labels, we directly compute the average representation of token samples with the same label and treat it as the class prototype:

$$C^s_k = \frac{1}{n^s_k} \sum_i \mathbb{I}(y^s_i = k)\, h^s_i,$$

where k denotes an entity class label, 𝕀 is an indicator function, and n^s_k represents the number of samples belonging to class k in the source language.
For the target language, we utilize the same method to obtain the representation h^t_i of each target token x^t_i. Since the target data is unlabeled, to alleviate the uncertainty of the class prototype computation, we use the output of the teacher classifier to estimate the probabilities of the current token belonging to each class. Regarding these probabilities as weights, we aggregate the representations of all target tokens to derive the target class prototype, which can be expressed as

$$C^t_k = \frac{\sum_i p^t_{i,k}\, h^t_i}{\sum_i p^t_{i,k}},$$

where p^t_{i,k} represents the probability that the token x^t_i belongs to class k.
The class prototype calculation involves all the samples, leading to high computing costs. To reduce the computational complexity while ensuring the stability of updates, we use the moving average method (Xie et al. 2018b) to update the source and target prototypes:

$$C_k \leftarrow \lambda\, C_k^{cur} + (1 - \lambda)\, C_k^{cur-1},$$

where λ ∈ (0, 1) is the moving average coefficient, cur denotes the current moment, and cur − 1 indicates the previous moment. In practical implementation, the source prototypes are updated once per epoch, while the target prototypes are updated once per batch.
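The prototype computation and moving-average update above can be sketched as follows; the function names and batch layout are illustrative, not taken from the authors' code:

```python
import torch

def source_prototypes(h, labels, num_classes):
    """h: (n, d) token features; labels: (n,) gold class ids."""
    protos = torch.zeros(num_classes, h.size(1), device=h.device)
    for k in range(num_classes):
        idx = labels == k
        if idx.any():
            protos[k] = h[idx].mean(dim=0)     # hard-label class centroid
    return protos

def target_prototypes(h, probs):
    """probs: (n, K) teacher class probabilities used as soft weights."""
    weighted_sum = probs.t() @ h               # (K, d) soft-weighted feature sums
    return weighted_sum / probs.sum(dim=0).clamp_min(1e-8).unsqueeze(1)

def ema_update(proto_old, proto_new, lam=0.001):
    # a small lam keeps the running prototypes stable across updates
    return lam * proto_new + (1.0 - lam) * proto_old
```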
After obtaining all class prototypes, we leverage classical contrastive learning to adjust the distance among prototypes in the feature space for class-wise alignment. For prototypes from the source and target data with the same class, we regard one as an anchor (e.g., C^s_i) and the other as the positive sample of the anchor (e.g., C^t_i), while the rest of the prototypes are considered negative samples (marked as C^s_{i,neg} and C^t_{i,neg}). The class alignment loss then takes the standard InfoNCE form

$$L_{CA} = -\frac{1}{num} \sum_{i=1}^{num} \log \frac{\exp(z^s_i \cdot z^t_i / \tau_1)}{\exp(z^s_i \cdot z^t_i / \tau_1) + \sum_{neg} \exp(z^s_i \cdot z_{neg} / \tau_1)},$$

where z^s_i, z^t_i, z^s_{i,neg}, and z^t_{i,neg} are the l2-normalized versions of C^s_i, C^t_i, C^s_{i,neg}, and C^t_{i,neg}, respectively; C^t_{i,neg} denotes the negative samples of C^t_i; z_{neg} runs over both z^s_{i,neg} and z^t_{i,neg}; τ_1 is a temperature parameter; and num is the number of entity classes.
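One compact way to realize such an alignment loss is an InfoNCE-style cross-entropy over l2-normalized prototypes, with the diagonal (same-class, cross-language) pairs as positives. For brevity, this sketch contrasts each source prototype only against the target prototypes, a simplification of the loss above:

```python
import torch
import torch.nn.functional as F

def prototype_alignment_loss(proto_s, proto_t, tau1=0.7):
    """proto_s, proto_t: (K, d) source / target class prototypes."""
    z_s = F.normalize(proto_s, dim=1)          # l2-normalize
    z_t = F.normalize(proto_t, dim=1)
    logits = z_s @ z_t.t() / tau1              # (K, K) pairwise similarities
    targets = torch.arange(proto_s.size(0), device=proto_s.device)
    # diagonal entries are the positive (same-class) pairs,
    # off-diagonal entries act as negatives
    return F.cross_entropy(logits, targets)
```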
This contrastive objective pulls together source and target prototypes of the same class and pushes apart prototypes of different classes. Finally, we obtain the total loss L(Θ_tea) for the teacher network, consisting of the cross-entropy loss and the class alignment loss:

$$L(\Theta_{tea}) = L_{CE}(\Theta_{tea}) + L_{CA}.$$

Prototypical Self-training

Here, we present our approach, prototypical self-training with the unlabeled target language data, to boost the student network's ability to learn language-specific knowledge. Specifically, we rely on prototype learning to iteratively generate hard pseudo-labels for unlabeled target language samples and leverage these hard labels to conduct self-training on the target data. This is because the prototypes can perceive the underlying clustering distribution of the data, fundamentally reflecting the internal structure of the data and the intrinsic differences across the data, which facilitates the learning of language-specific knowledge, such as the label preference of a token.
To acquire the target class prototypes, we first obtain the hidden representations and prediction probabilities through the student network, and then leverage the same prototype computation and updating equations (Equation 4 and Equation 5) as the teacher network to obtain the class prototypes C^t. Afterwards, a class probability distribution ρ^t_i based on prototypes is calculated by leveraging the sample's feature distance w.r.t. the class prototypes:

$$\rho^t_{i,k} = \frac{\exp(-\lVert h^t_i - C^t_k \rVert / \tau_2)}{\sum_{k'} \exp(-\lVert h^t_i - C^t_{k'} \rVert / \tau_2)},$$

where τ_2 is the softmax temperature and ρ^t_{i,k} represents the softmax probability of sample x^t_i belonging to the k-th class. As observed, if a feature representation h^t_i is far from the prototype C^t_k, the probability of this feature for class k would be very low. We convert ρ^t_i into a hard pseudo-label y^t_i based on the following formula:

$$y^t_i = \xi(\rho^t_i) = \arg\max_k \rho^t_{i,k},$$

where ξ denotes the conversion function. Intuitively, we can use these pseudo-hard labels for self-training. However, one natural question then arises: prototypical self-training is essentially cluster-based representation learning and will inevitably introduce incorrect labels in pseudo-labeling. For instance, when a sample is far from the prototype to which it belongs, the student network may mislabel this sample (Snell et al. 2017). To alleviate this issue, we fuse the above prototypical probability ρ^t_{i,k} with the teacher's output probability p^t_{i,k} to produce a hybrid soft pseudo-label η^t_{i,k}:

$$\eta^t_{i,k} = \gamma\, \rho^t_{i,k} + (1 - \gamma)\, p^t_{i,k},$$

where γ is a fuse factor. Since the trained teacher has general semantic knowledge of the classes, p^t_{i,k} can be regarded as prior knowledge to improve the quality of pseudo-labeling, which shows appealing advantages in previous works (Li, Xiong, and Hoi 2021; Zhang et al. 2021).
Note that the teacher network's output p^t_{i,k} remains fixed as training proceeds. The reason we choose p^t_{i,k} instead of the updating probability q^t_{i,k} of the student is to avoid the degenerate solution resulting from the simultaneous update of features and labels throughout the self-training. Subsequently, we use the hybrid η^t_i instead of ρ^t_i to produce the pseudo hard label. To this end, the student can be trained with the traditional self-training loss (Zou et al. 2018):

$$L_{ST}(\Theta_{stu}) = -\frac{1}{n_t} \sum_{m=1}^{n_t} \sum_{i=0}^{L} y^t_i \log q^t_i,$$

where q^t_i denotes the probability distribution produced via the classifier of the student network for x^t_i. Based on the above, the student network can benefit from two aspects (Fig 2): knowledge distillation and self-training. A very straightforward issue is that the student network may not be competent at an early stage to undertake effective self-training. To guarantee that the student network can learn the shared class semantics for self-training at the early stage, we follow a cumulative learning strategy (Zhou et al. 2020) to gradually shift the model's learning focus from knowledge distillation to self-training using the control parameter α:

$$\alpha = 1 - \left( \frac{e}{E_{max}} \right)^2,$$

where E_max is the number of total training epochs and e is the current epoch. The α automatically decreases from 1 to 0 with increasing epochs. Finally, the loss L(Θ_stu) for the student network can be expressed as:

$$L(\Theta_{stu}) = \alpha\, L_{KD}(\Theta_{stu}) + (1 - \alpha)\, L_{ST}(\Theta_{stu}).$$
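Combining the pieces above (prototype probabilities, hybrid pseudo-labels, cumulative learning), a minimal PyTorch sketch follows; all function names are illustrative, and the schedule assumes the quadratic decay of the cited cumulative learning strategy (Zhou et al. 2020):

```python
import torch
import torch.nn.functional as F

def prototype_probs(h, protos, tau2=0.7):
    """Softmax over negative feature-to-prototype distances (rho)."""
    dists = torch.cdist(h, protos)                     # (n, K) Euclidean distances
    return F.softmax(-dists / tau2, dim=-1)

def self_training_loss(student_logits, h, protos, teacher_probs,
                       gamma=0.8, tau2=0.7):
    rho = prototype_probs(h, protos, tau2)
    eta = gamma * rho + (1.0 - gamma) * teacher_probs  # hybrid soft pseudo-label
    pseudo = eta.argmax(dim=-1)                        # hard pseudo-label
    return F.cross_entropy(student_logits, pseudo)

def alpha(epoch, e_max):
    """Cumulative learning weight: decays from 1 to 0 over training."""
    return 1.0 - (epoch / e_max) ** 2

# per-step student objective, combining both terms:
# loss = alpha(e, E_max) * kd_loss(...) + (1 - alpha(e, E_max)) * self_training_loss(...)
```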
Experiments and Analysis

Datasets
We adopt three widely-used benchmark datasets for experiments: CoNLL-2002 (Spanish and Dutch) (Sang 2002), CoNLL-2003 (English and German) (Sang and Meulder 2003), and Wikiann (English, Arabic, Hindi, and Chinese) (Pan et al. 2017). Each language-specific dataset has standard training, development, and evaluation sets. The statistics for all datasets are shown in Table 1.
Following previous works (Wu et al. 2020a,c), we apply word-piece (Wu et al. 2016) to tokenize the sentences into sub-words, which are then marked with the BIO scheme. The data are annotated with four different entity types: PER (Persons), LOC (Locations), ORG (Organizations), and MISC (Miscellaneous). For all experiments, English is regarded as the source language and the others as target languages. Note that CoNLL-2002/2003 share a common English dataset as source data. Moreover, we train the model on the source language training set, validate the model on the source language development set, and evaluate the learned model on the target language test set to simulate the zero-resource cross-lingual NER scenario.
Implementation Details
We adopt the pre-trained mBERT (Pires et al. 2019) as the feature extractor. Following previous works (Wu et al. 2020a,c), we use the token-level F1 score as the evaluation metric. For all experiments, we use the Adam optimizer (Kingma and Ba 2015) with a learning rate of 5e-5 for the teacher network and 1e-5 for the student network, batch size = 128, maximum sequence length = 128, and dropout = 0.5, set empirically. We utilize grid search to obtain the optimal hyper-parameters, including the moving average coefficient λ selected from {0.001, 0.005, 0.0001, 0.0005}, the contrastive learning temperature τ_1 selected from 0.5 to 0.9, the softmax temperature τ_2 selected from 0.5 to 0.9, and the fuse factor γ selected from 0.7 to 0.9.
Following the previous work (Wu and Dredze 2020), we only consider the first sub-word tokenized by word-piece in our loss function and freeze the parameters of the embedding layer and the bottom three layers of the mBERT model. Additionally, our approach is implemented using PyTorch, and all calculations are done on NVIDIA Tesla V100 GPU.
Main Results

The results are presented in Table 2 and Table 3, where the baseline and SOTA experimental results are taken from their original papers. As observed, our method achieves the best results on most of the datasets. For CoNLL-2002/2003, compared with the two competitive knowledge distillation-based methods, RIKD and AdvPicker, our approach improves the average F1 by 1.76% and 1.38%, respectively. For Wikiann, our method outperforms RIKD by 2.26% on average. In particular, for the German (de) language, we obtain an F1 value of 78.9%, which is 3.42% higher than the best result of RIKD. And for the Arabic (ar) language, our method achieves the best F1 value of 50.91%, an improvement of 4.95% over RIKD. Analytically, RIKD and AdvPicker leverage reinforcement learning and adversarial learning, respectively, to select target data for distillation, and the selected data tend to be consistent with the source language in feature distribution. Consequently, the student network trained on these data fails to effectively acquire target language knowledge, resulting in insufficient generalization to the target language. By contrast, our model uses prototypical self-training to enhance the student network's ability to learn the target language, thus performing well on the target language.

Table 5: Case Study. The ProKD can learn language-specific knowledge with self-training, which helps the model to rectify incorrect predictions to correct ones.
Ablation Study
To investigate the contributions of different factors, we conduct ablation experiments with four variant models: (1) ProKD w/o CA removes the prototypical class-wise alignment from the teacher network.
(2) ProKD w/o ST removes the prototypical self-training from the student network. (3) ProKD w/o PK does not use the prior knowledge from the teacher network in the self-training process. (4) ProKD w/o CL removes the cumulative learning scheme and adopts the fixed parameter α = 0.5 in the loss function (Equation 13) for the student network. As shown in Table 4, the average F1 value of ProKD w/o CA decreases by 2.1% compared to ProKD on CoNLL-2002/2003. This indicates that class-level alignment effectively improves the model's generalization, as class alignment forces the teacher network to learn language-independent knowledge from the source and target languages. The performance of ProKD w/o ST in F1 score drops by 1.55% compared to ProKD, which validates the effectiveness of self-training in acquiring target language-specific knowledge. For ProKD w/o PK, the slight drop in F1 compared to ProKD suggests that incorporating the prior knowledge of the teacher network can enhance the quality of pseudo-labels during self-training. Also, ProKD w/o CL yields a slight drop in F1 values, which shows that knowledge distillation should be performed first, followed by self-training. The above experimental phenomena can also be observed on the Wikiann dataset.
Visualizing the Token Sample Representations
To demonstrate that our ProKD can achieve class-level feature alignment, we randomly select 50 token samples for each class from the source and target languages and feed them to the teacher networks of ProKD and ProKD w/o CA to obtain token-level representations, respectively. Note that the teacher network of ProKD w/o CA degenerates to a vanilla mBERT when the prototypical class-wise alignment is removed. We then visualize these representations using t-SNE (Van der Maaten and Hinton 2008) and show the results for the four target languages in Figure 3. As shown, the feature representations of the source and target languages from ProKD w/o CA are distributed differently and are poorly aligned due to the language gap. Many target language examples of one class are incorrectly aligned with source language examples of a different class, thus causing confusion and hindering the model's performance. By contrast, our approach ProKD shows superiority over ProKD w/o CA, with more classes aligned correctly. For example, when performing cross-lingual NER from English (en) to Chinese (zh), ProKD w/o CA aligns source and target features for just one class, I-LOC, while our model achieves feature alignment on five classes. We argue that a model aligning features across multiple classes can capture more shared class features across languages, which is essential for generalizing the model to unknown target languages.
Case Study
In this part, we present a case study to show that our model can learn target language-specific knowledge through self-training. We compare the prediction results of ProKD w/o ST with our ProKD on the target language test data, as shown in Table 5. In example 1, the ProKD w/o ST model incorrectly predicts "Madrid" as "I-ORG" because 66.67% of the "Madrid" tokens in the English dataset are annotated as "I-ORG". The teacher network trained with this English data distills this label preference to the student network, so the student network of ProKD w/o ST tends to make incorrect predictions. In contrast, our model captures the label preferences of the target language through the prototypical self-training mechanism. In the same example, 59.73% of the "Madrid" tokens in the target Spanish language are labeled as "I-LOC". Our model can produce accurate predictions due to its intimate familiarity with the target language-specific knowledge. We observe the same phenomenon in examples 2 and 3.
Conclusion
This paper presents a knowledge distillation-based network, ProKD, for zero-resource cross-lingual NER. ProKD proposes a contrastive learning-based prototype alignment approach to boost the teacher network's capacity to capture language-independent knowledge. In addition, ProKD introduces a prototypical self-training method to improve the student network's capacity to grasp target language-specific knowledge. Experiments on six target languages illustrate the effectiveness of the proposed approach. | 2023-01-24T06:42:08.607Z | 2023-01-21T00:00:00.000 | {
"year": 2023,
"sha1": "b4d052bdd292e574028a472b1a2ea357ee1faa84",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "5be413ff1171d43dd531de38d1c19325569ccda3",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
220938873 | pes2o/s2orc | v3-fos-license | Identification of a protein signature for predicting overall survival of hepatocellular carcinoma: a study based on data mining
Background Hepatocellular carcinoma (HCC) is the fifth most common cancer in the world and the second most common cause of cancer-related deaths. Over 500,000 new HCC cases are diagnosed each year. Combining advanced genomic analysis with proteomic characterization not only has great potential in the discovery of useful biomarkers but also drives the development of new diagnostic methods. Methods This study obtained proteomic data from the Clinical Proteomic Tumor Analysis Consortium (CPTAC), validated in The Cancer Proteome Atlas (TCPA) and the TCGA dataset, to identify HCC biomarkers and dysregulated proteogenomic features. Results The CPTAC database contained data for 159 patients diagnosed with hepatitis B-related HCC and 422 differentially expressed proteins (112 upregulated and 310 downregulated proteins). Restricting our analysis to the intersection of survival-related proteins between the CPTAC and TCPA databases revealed four overlapping survival-related proteins: PCNA, MSH6, CDK1, and ASNS. Conclusion This study established a novel protein signature for HCC prognosis prediction using data retrieved from online databases. However, the signatures need to be verified using independent cohorts and functional experiments.
Background
Hepatocellular carcinoma (HCC) is the fifth most common cancer in the world and the second most common cause of cancer-related deaths. Over 500,000 new HCC cases are diagnosed each year [1]. Viral hepatitis and nonalcoholic steatohepatitis are the most common causes of cirrhosis, which underlies approximately 80% of cases of HCC [2]. HCC prognosis remains a challenge due to the recurrence of HCC, and the 5-year overall survival rate is only 34 to 50% [3]. Despite the rapid advancements in medical technology, there are still no effective treatment strategies for HCC patients [4]. Byeno et al. [5] reported that, based on long-term survival data, the serum OPN and DKK1 levels in patients with liver cancer can be used as novel biomarkers that predict prognosis. Other serum markers, such as alpha-fetoprotein (AFP) and alkaline phosphatase (ALP or AKP), have also been reported in clinical practice; however, these markers lack sufficient sensitivity and specificity [6]. Therefore, it is necessary to find effective biomarkers for the diagnosis and treatment of HCC.
Proteomics is a field of research that studies proteins on a large scale. Biomarker analysis uses high-throughput sequencing technologies in proteomics and genomics. Mass spectrometry-based targeted proteomics has been used to set up multi-omics assays. Mass spectrometry-based identification of matching or homologous peptides can further refine gene models [7]. This allows for an in-depth analysis of host-pathogen interactions. Combining advanced genomic analysis with proteomic characterization not only has great potential in the discovery of useful biomarkers but also drives the development of new diagnostic methods and therapies. Proteogenomic studies have enabled the exploration of the prognosis of cancer progression; however, its role and mechanism remain unclear. Chiou et al. [8] used integrated proteomic, genomic, and transcriptomic techniques to obtain protein expression profiles from HCC patients. This study found that the S100A9 and granulin protein markers were associated with tumorigenesis and cancer metastasis in HCC. Similarly, Chen et al. [9], using a proteomic approach, found that a curcumin/β-cyclodextrin polymer (CUR/CDP) inclusion complex exhibited inhibitory effects on HepG2 cell growth. Over the last few years, integrative tools useful in executing complete proteogenomics analyses have been developed. In this study, we systematically evaluated a prognostic protein signature for the prediction of overall survival (OS) of HCC patients. The availability of high-throughput expression data has made it possible to use global gene expression information to analyze the genetic and clinical aspects of HCC patients. Therefore, in this study, protein data from the Clinical Proteomic Tumor Analysis Consortium (CPTAC), validated in The Cancer Proteome Atlas (TCPA) and The Cancer Genome Atlas (TCGA) datasets, were used to identify HCC biomarkers and dysregulated proteogenomic features.
Data collection
CPTAC is a public repository of well-characterized, mass spectrometry (MS)-based and targeted proteomic assays, useful in characterizing the protein inventory of tumors by leveraging the latest advances in mass spectrometry-based discovery proteomics [10]. TCPA is a user-friendly data portal that contains 8167 tumor samples in total, consisting primarily of TCGA tumor tissue samples, and provides a unique opportunity to validate the TCGA data and identify model cell lines for functional investigations [11]. TCGA has generated multi-platform cancer genomic data and has also generated some proteomic data using the Reverse Phase Protein Array (RPPA) platform, measuring protein levels in tumors for about 150 proteins and 50 phosphoproteins [12]. In this study, proteomics data were downloaded from TCPA (level 4) and combined with clinical data from TCGA, and a comprehensive analysis of proteomics was performed through CPTAC.
Establishing the prognostic gene signature
Univariate Cox regression analysis was performed to identify prognostic genes and establish their genetic characteristics. The prognostic gene signature was computed as risk score = (Coefficient_mRNA1 × expression of mRNA1) + (Coefficient_mRNA2 × expression of mRNA2) + ⋯ + (Coefficient_mRNAn × expression of mRNAn). Based on the median risk score, the patients were classified into a low-risk (< median) group and a high-risk (≥ median) group. Kaplan-Meier survival analysis was used to assess the survival difference between the high- and low-risk groups.
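For illustration, here is a minimal Python sketch of the risk-score construction and median split; the protein names, coefficients, and survival data below are synthetic placeholders (real coefficients come from the fitted Cox model), and the lifelines package is assumed for the log-rank comparison:

```python
import numpy as np
import pandas as pd
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n = 150  # synthetic cohort size (placeholder)

# placeholder expression values for two signature proteins
expr = pd.DataFrame(rng.normal(size=(n, 2)), columns=["SLC39A1", "UBE2C"])
coef = pd.Series({"SLC39A1": 0.45, "UBE2C": 0.62})  # illustrative Cox coefficients

# risk score = sum_i coefficient_i * expression_i
risk = expr.mul(coef, axis=1).sum(axis=1)
high = (risk >= risk.median()).to_numpy()           # median split

# synthetic follow-up times (months) loosely tied to risk, for demonstration only
time = rng.exponential(scale=np.exp(-0.5 * risk)) * 60
event = rng.random(n) < 0.7                         # True = death observed

res = logrank_test(time[high], time[~high],
                   event_observed_A=event[high], event_observed_B=event[~high])
print(f"log-rank p-value: {res.p_value:.4f}")
```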
Building and validating a predictive nomogram
Nomograms are often used to predict the prognosis of cancer, mainly because they can simplify statistical prediction models into a single numerical estimate of the probability of an event (such as relapse or death) depending on the condition of an individual patient [13]. Time-dependent receiver operating characteristic (ROC) curves were plotted to assess the prediction accuracy of the prognostic signature in HCC patients. Univariate and multivariate Cox regression analyses were used to analyze the relationship between genes and clinicopathological parameters.
Statistical analysis
Statistical analyses were performed using R (version 3.5.3) and R Bioconductor software packages. The Benjamini-Hochberg method was used to convert P values to FDR. Perl was used for data matrix construction and data processing, and a P value less than 0.05 was considered statistically significant. Differentially expressed proteins between HCC and non-cancerous samples in CPTAC were identified using |log2FC| > 1 and a P value < 0.05.

Results

The CPTAC database contained data for 159 patients diagnosed with hepatitis B-related HCC (Table S1), and 422 differentially expressed proteins (112 upregulated and 310 downregulated; Table S2) were identified. To analyze the function of the identified differentially expressed proteins, biological analyses were performed using gene ontology (GO) enrichment and KEGG pathway analysis. GO analysis revealed that the GO terms related to biological processes (BP) of the differentially expressed proteins were enriched in fatty acid biosynthesis and catabolism; molecular function (MF) terms were mainly enriched in cofactor binding, coenzyme binding, vitamin binding, monooxygenase activity, carboxylic acid binding, iron ion binding, and organic acid binding; and cell component (CC) terms were mainly enriched in the mitochondrial matrix, MCM complex, collagen trimer, peroxisome, microbody, microbody part, peroxisomal part, peroxisomal matrix, and microbody lumen. KEGG pathway analysis revealed that the differentially expressed proteins were mainly enriched in retinol metabolism, chemical carcinogenesis, drug metabolism-cytochrome P450, fatty acid degradation, arginine biosynthesis, the PPAR signaling pathway, and other metabolic pathways (Fig. 2).
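A sketch of the differential-expression filter just described, assuming proteins-by-samples log2 abundance matrices; the per-protein two-sample t-test is an illustrative choice (not stated in the paper), with the Benjamini-Hochberg adjustment applied via statsmodels:

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multitest import multipletests

def differential_proteins(tumor: pd.DataFrame, normal: pd.DataFrame,
                          lfc_cut: float = 1.0, p_cut: float = 0.05):
    """tumor / normal: proteins (rows) x samples (columns), log2 abundances."""
    log2fc = tumor.mean(axis=1) - normal.mean(axis=1)      # |log2FC| criterion
    pvals = stats.ttest_ind(tumor, normal, axis=1).pvalue  # per-protein P value
    fdr = multipletests(pvals, method="fdr_bh")[1]         # Benjamini-Hochberg
    sig = (log2fc.abs() > lfc_cut) & (fdr < p_cut)
    up = log2fc.index[sig & (log2fc > 0)].tolist()         # upregulated in tumor
    down = log2fc.index[sig & (log2fc < 0)].tolist()       # downregulated in tumor
    return up, down
```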
Protein-protein interaction (PPI) network construction and module analysis
To further explore the relationships among the differentially expressed proteins at the protein level, a PPI network was constructed based on their interactions. A total of 542 interactions and 236 nodes were screened to establish the PPI network, and the top five most highly connected nodes were CDK1, AOX1, CYP2E1, CYP3A4, and TOP2A (Tables S3-S4).
Survival analysis
Survival data were extracted for the HCC patients in CPTAC and used to perform univariate Cox regression analysis on protein expression, which revealed 105 survival-related proteins (P < 0.05; Table S5). Univariate and multivariate Cox regression analyses were then performed on the clinical factors and survival-related proteins, and 41 proteins that can act as independent prognostic factors for OS were identified (Tables S6-S7). ROC curves were used to investigate the use of the protein patterns as early predictors of HCC incidence. This model demonstrated that 8 proteins (MCM3, MCM7, PCNA, SLC39A1, SMC2, TOP2A, UBE2C, and UHRF1) had an AUC value above 0.7 (Table S8). Table S9 presents detailed information about the relationship between the 8 proteins and clinical factors. The 8 proteins were used to build a prognostic model, and the median risk score was set as the threshold to divide the cohort into high-risk and low-risk groups. The detailed prognostic signature information of the HCC group is shown in Fig. 3.
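The univariate Cox screening step can be sketched as follows, assuming a table with one row per patient; the column names are placeholders and the lifelines package is used for the model fit:

```python
import pandas as pd
from lifelines import CoxPHFitter

def univariate_cox_screen(data: pd.DataFrame, proteins, p_cut=0.05):
    """data: DataFrame with 'time', 'event', and one column per protein.
    Fits one single-covariate Cox model per protein and keeps significant hits."""
    hits = []
    for prot in proteins:
        cph = CoxPHFitter()
        cph.fit(data[["time", "event", prot]],
                duration_col="time", event_col="event")
        p = cph.summary.loc[prot, "p"]          # Wald test P value
        if p < p_cut:
            hits.append((prot, cph.summary.loc[prot, "exp(coef)"], p))
    return pd.DataFrame(hits, columns=["protein", "HR", "p"])
```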
Building a predictive nomogram
A nomogram was constructed incorporating clinicopathological features and the prognostic model. The LASSO logistic regression algorithm was used to select the most important prediction markers, which greatly contributed to the final prediction model. The model included the following features in CPTAC: gender, age, tumor differentiation, history of liver cirrhosis, number of tumors, tumor size, tumor thrombus, tumor encapsulation, HBcAb, AFP, PTT, TB, ALB, ALT, and GGT (Fig. 4). The use of the prognostic model together with clinicopathological data can improve the sensitivity and specificity of 1-, 3-, and 5-year OS prediction.
Immunohistochemistry analysis
Proteomics data were downloaded from TCPA-HCC (level 4; 184 samples and 218 proteins) and combined with clinical data from TCGA. Univariate Cox regression analysis determined the expression of survival-related proteins (Table S10). We then intersected these survival-related proteins with those from the CPTAC database, and four overlapping survival-related proteins, PCNA, MSH6, CDK1, and ASNS, were identified. The Human Protein Atlas (HPA) is a website that provides immunohistochemistry-based expression data covering 20 tumor tissues, 47 cell lines, 48 normal human tissues, and 12 blood cell types [15]. In this study, a direct comparison of the protein expression of the four genes between normal and HCC tissues was made using immunohistochemistry images, and the results are shown in Fig. 5. PCNA, CDK1, and ASNS proteins were not expressed in normal liver tissues but were expressed at medium to high levels in HCC tissues. Besides, MSH6 was lowly expressed in normal tissues and highly expressed in tumor tissues. TIMER (differential gene expression module) is a comprehensive resource for the systematic analysis of immune infiltrates across various malignancy types. It was used to explore PCNA, MSH6, CDK1, and ASNS based on thousands of variations in copy number or gene expression in patients with HCC. Consistent with our findings, the four proteins were significantly overexpressed in HCC patients in the TIMER database (Fig. 6). OS analysis demonstrated that patients with high expression of the four proteins had a poorer prognosis than those in the low-expression group (P < 0.05) (Fig. 7).
Discussion
Proteomic analysis of early-stage cancers provides new insights into changes that occur in the early stages of tumorigenesis and represents a new resource of biomarkers for early-stage disease. The proteome characteristics of tumor cells distinguish them from normal cells and are critical in the study of their growth and survival. Signaling pathway components identified by proteomic analysis have become ideal targets for personalized therapeutic intervention in cancer patients [16]. In this study, we identified novel and effective prognostic signatures for patients with HCC. These signatures show great potential for the prognosis prediction of HCC.
In this study, we performed a comprehensive analysis of proteomics through CPTAC and downloaded proteomic data from TCPA (level 4), which we combined with clinical data from TCGA. We first identified 422 differentially expressed proteins and analyzed their function, and then constructed the PPI network, in which we found that the most highly connected node was CDK1. BP terms were significantly enriched in acid biosynthetic and catabolic processes; MF terms were mainly enriched in the binding of biological compounds; CC terms were mainly enriched in organelles and enzymes; and KEGG pathways included retinol metabolism, chemical carcinogenesis, drug metabolism-cytochrome P450, fatty acid degradation, arginine biosynthesis, the PPAR signaling pathway, and other metabolic pathways. A recent study found that simvastatin can inhibit the HIF-1α/PPAR-γ/PKM2 axis, resulting in decreased proliferation and increased apoptosis in HCC cells [17]. Similarly, Wang et al. [18] confirmed that the anticancer efficacy of avicularin in HCC was dependent on the regulation of PPAR-γ activities. Therefore, we hypothesize that the identified differentially expressed proteins may play a critical role in drug metabolism and chemical carcinogenesis via the PPAR signaling pathway; however, further studies are needed to confirm this hypothesis. The analysis was restricted to the intersection of survival-related proteins between the CPTAC and TCPA databases, and four survival-related proteins, PCNA, MSH6, CDK1, and ASNS, were identified.
Proliferating cell nuclear antigen (PCNA, also known as ATLD2) is a cofactor of DNA polymerase delta which is ubiquitinated in response to DNA damage. A recent study found that PCNA-knockdown HepG2 cells under hypoxia showed greater induction of the epithelial-mesenchymal transition (EMT) process compared with controls [19]. PCNA and EMT-related markers were downregulated following treatment with a Wnt/β-catenin signaling inhibitor (XAV939), and the proliferative activity of HCC cells was significantly inhibited [20]. MutS homolog 6 (MSH6) is a member of the DNA mismatch repair MutS family. Togni et al. [21] reported nuclear expression of MSH6 in HCC, excluding a DNA mismatch repair defect, and Ozer et al. [22] studied the methylation status of MSH6 involved in DNA repair mechanisms. MSH6 is associated with an increased risk of breast cancer and should be considered in individuals with a family history of breast cancer [23]. Another study evaluated metachronous colorectal cancer (CRC) incidence according to the MSH6 gene in Lynch Syndrome (LS) patients who underwent a segmental colectomy [24]. However, there is currently no comprehensive study on the role of MSH6 in HCC, and this study may provide important information for consideration in future studies. Cyclin-dependent kinase 1 (CDK1, also known as CDC2; CDC28A; P34CDC2) is a member of the Ser/Thr protein kinase family which is essential for the G1/S and G2/M phase transitions of the eukaryotic cell cycle. Anti-CDK1 treatment can boost sorafenib antitumor responses in HCC patient-derived xenograft (PDX) tumor models [25]. Gao et al. [26] demonstrated that karyopherin subunit-α 2 (KPNA2) may promote tumor cell proliferation by increasing the expression of CDK1. Asparagine synthetase (ASNS, also known as TS11; ASNSD) is involved in the synthesis of asparagine. The expression of ASNS has been reported to be high in HCC tumor tissues and closely correlated with the serum AFP level, tumor size, microscopic vascular invasion, tumor encapsulation, TNM stage, and BCLC stage [27]. Li et al. [28] found that ASNS expression was decreased and functioned as an independent predictor of OS in HCC patients. In this study, OS analysis demonstrated that patients with high expression of these four proteins had a worse prognosis than those in the low-expression group. A total of 41 proteins were identified that can serve as independent prognostic factors for OS. Among them, 8 proteins (MCM3, MCM7, PCNA, SLC39A1, SMC2, TOP2A, UBE2C, and UHRF1) had an AUC value above 0.7. The use of the prognostic model together with clinicopathological data can improve the sensitivity and specificity of 1-, 3-, and 5-year OS prediction. The 8 proteins were used to build a prognostic model, and finally SLC39A1 and UBE2C were chosen for the model. Solute carrier family 39 member 1 (SLC39A1, also known as ZIP1, ZIRTL) acts as a molecular zipper to bring homologous chromosomes into close apposition [29]. In prostate cancer, zinc levels have been reported to be decreased and the ZIP1 transporter is lost [30]. Similarly, studies reveal that hZIP1 (SLC39A1) is expressed in the zinc-accumulating human prostate cell lines LNCaP and PC-3 [31]. However, the role of SLC39A1 in HCC remains unknown. Ubiquitin-conjugating enzyme E2 C (UBE2C, also known as UBCH10; dJ447F3.2) is an enzyme required for the destruction of mitotic cyclins and cell cycle progression. Studies have demonstrated that knockdown of UBE2C expression suppresses the proliferation, migration, and invasion of HCC cells in vitro.
Moreover, the silencing of UBE2C also increases the sensitivity of HCC cells to sorafenib [32]. This study was not without limitations. The results have not been validated in clinical samples, and they do not provide accurate clinical data due to the relatively small number of patients used.
Conclusion
This study established a novel protein signature for HCC prognosis prediction using data retrieved from online databases. However, the signatures need to be verified using independent cohorts and functional experiments.
Additional file 1: Table S1. The detailed clinical information of CPTAC-HCC patients. Table S2. The 422 differentially expressed proteins identified using the CPTAC database. Table S3. A total of 542 interactions and 236 nodes screened to establish the PPI network. Table S4. The top five most contiguous nodes: CDK1, AOX1, CYP2E1, CYP3A4, and TOP2A. Table S5. Cox regression analysis of the identified 105 survival-related proteins. Table S6. Univariate Cox regression analysis of survival-related proteins. Table S7. Multivariate Cox regression analysis of survival-related proteins and 41 proteins identified as independent prognostic factors for OS. Table S8. ROC curves investigating the use of the protein patterns as early predictors of HCC incidence and the 8 proteins with AUC value above 0.7. Table S9. The relationship between the 8 proteins and clinical factors. Table S10. Univariate Cox regression analysis exploring the expression of survival-related proteins in the TCPA database. | 2020-08-04T13:40:01.419Z | 2020-08-03T00:00:00.000 | {
"year": 2020,
"sha1": "2063768ab5a16a933fd927ff219e7644a0dc8a06",
"oa_license": "CCBY",
"oa_url": "https://bmccancer.biomedcentral.com/track/pdf/10.1186/s12885-020-07229-x",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5a349fa6a5628460bcad05551561ed05de901e4c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
6237732 | pes2o/s2orc | v3-fos-license | Cotranslational folding of deeply knotted proteins
Proper folding of deeply knotted proteins has a very low success rate even in structure-based models, which favor formation of the native contacts but have no topological bias. By employing a structure-based model, we demonstrate that cotranslational folding on a model ribosome may enhance the odds of forming trefoil knots for protein YibK, without any need to introduce non-native contacts. The ribosome is represented by a repulsive wall that keeps elongating the protein. On-ribosome folding proceeds through a slipknot conformation. We elucidate the mechanics and energetics of its formation. We show that the knotting probability in on-ribosome folding is a function of temperature and that there is an optimal temperature for the process. Our model often leads to the establishment of the native contacts without formation of the knot.
Introduction
There are several hundreds of knot-containing structures [1,2,3] in the Protein Data Bank (PDB), and their topology can be characterized primarily by how many intersections a backbone makes with itself in its two-dimensional projection. The knot extends between two end points, n_1 and n_2 > n_1, along the sequence. The knot ends are defined operationally through systematic cutting-away of the amino acids from both termini until the knot disintegrates [4,5]. The last site that still supports the knot is its end point. If n_1 and n_2 are both distant from the termini, the knot is considered to be deep; otherwise it is called shallow. One needs closed lines to declare the existence of a knot with certainty. Backbones of proteins are generally not closed, but for deep knots the determination of the knot type is quite reliable.
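The cutting procedure can be written compactly; `is_knotted` below is a hypothetical placeholder for any knot detector applied to the (suitably closed) chain, e.g., KMT-style chain reduction followed by a topological test:

```python
def knot_ends(backbone, is_knotted):
    """Operationally locate the knot ends n1 and n2 (0-based indices).

    Residues are cut away from each terminus until the knot disintegrates;
    the last residue whose presence still supports the knot is its end point.
    """
    n = len(backbone)
    n1 = 0
    while n1 + 1 < n and is_knotted(backbone[n1 + 1:]):
        n1 += 1          # still knotted with residues 0..n1 removed; cut deeper
    n2 = n - 1
    while n2 > n1 and is_knotted(backbone[:n2]):
        n2 -= 1          # still knotted with residues n2..n-1 removed; cut deeper
    return n1, n2        # residues n1 and n2 are the last ones supporting the knot
```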
Stretching of a knotted protein either at constant speed [6,7,8] or at constant force [9] results in a step-wise knot-tightening process in which the knot ends jump. Here, we consider two other conformation-changing processes [10] in knotted proteins: folding and unfolding through heating. Folding from an extended state to the knotted native conformation of protein YibK has been reported [11,12] to be difficult, with at best a 1-2% success rate, even if one uses a coarse-grained structure-based model. Here, we examine whether nascent conditions, such as exist when a protein is formed by the ribosome [13,14,15,16], can help in establishing the native knot in proteins. We propose a simple generic model in which the ribosome is represented as an infinite repulsive plate which spawns proteins. Our model is an oversimplification of the real geometry of the ribosome: the peptide chain is formed in the ribosome tunnel and it undergoes folding, at least partially, within it. The tunnel has a diameter that varies between 10 and 20 Å, and the largest cavity in the tunnel cannot encompass a sphere with a radius larger than 9.5 Å [13,17,18]. This geometry provides stronger confinement than that induced by the plane. Nevertheless, the model with the plane is the simplest one that introduces the new qualitative features that are brought in by confinement.
Here, our focus is on YibK [20] from Haemophilus influenzae, which contains a deep trefoil knot with three intersections. The corresponding PDB code is 1J85. (We shall refer to proteins through their PDB structure codes.) This protein is probably the most frequently cited example of a deeply knotted protein [1,2]. Its radius of gyration is about 15 Å. We find that the nascent conditions do help in folding 1J85, in fact they enable it, and the effect is observed only within a range of temperatures that provide optimality. We show that invoking non-native contacts is not necessary to generate the on-ribosome slipknot; one just needs to employ a proper procedure to define the contacts that are declared native. We identify the slipknot-based mechanism of folding and explain why the model ribosome favors its formation. We also study the energetics involved in the emergence of the slipknot. Unlike the claims of ref. [11] (which uses the same model), we do not observe the slipknot in the absence of the ribosome.
Structure-based modeling
The justification and details of the implementation of our model are explained in refs. [22,23,24]. Its character is structure-based or, equivalently, Go-like [21], and the molecular dynamics deals only with the α-C atoms. The bonded interactions are described by harmonic potentials. The list of pairs of amino acids that are considered to be in contact (i.e., interacting) in the native state is known as the contact map. These contacts are described by Lennard-Jones potentials with the minima at the crystallographically determined distances. The potentials are identical in depth, denoted as ε. The value of ε has been calibrated by making comparisons to experimental data on stretching: approximately, ε/Å is 110 pN (which is also close to the energy of the O-H-N hydrogen bond of 1.65 kcal/mol). Non-native contacts are considered repulsive.
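As a concrete illustration, the native-contact term can be written as a standard 12-6 Lennard-Jones potential with its minimum pinned at the crystallographic contact distance; this is a generic sketch, not the authors' simulation code:

```python
import numpy as np

def native_contact_energy(r, r_native, eps=1.0):
    """12-6 Lennard-Jones contact potential with uniform depth eps
    and minimum at the native (crystallographic) distance r_native."""
    sigma = r_native / 2.0 ** (1.0 / 6.0)  # LJ minimum sits at 2^(1/6) * sigma
    sr6 = (sigma / np.asarray(r)) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

def contact_present(r, r_native):
    # a contact counts as broken when r exceeds 1.5 times the native distance
    return r <= 1.5 * r_native
```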
The contact map itself is obtained by using the overlap criterion, in which the heavy atoms in the native conformation are represented by enlarged van der Waals spheres [22,25]: if at least one pair of such spheres placed on different amino acids overlap (OV), then there is a native contact. An alternative way to establish a contact map is by invoking considerations that are more chemical in nature, as available at the CSU server [26]. The CSU contacts may either be specific (like hydrogen bonds) or non-specific (presumably the much weaker dispersive interactions). The CSU and OV maps have many contacts in common but are not identical. Thus, in addition to the OV map, we also consider the OV-CSU map, in which we augment OV by the missing specific CSU contacts.
The backbone stiffness is accounted for by the chirality potential [22]. The simulations are done at various temperatures. For most proteins without any knots, optimal folding takes place at around T = 0.3 ε/k_B, which should correspond to the vicinity of room temperature (k_B is the Boltzmann constant). We generate at least 200 trajectories for each temperature considered. The time unit of the simulations, τ, is effectively of order 1 ns, as the motion of the atoms is dominated by diffusion instead of being ballistic.
Folding is usually declared when all native contacts are established for the first time. For knotted proteins, however, this condition does not necessarily signify that the correct native knot has been formed.
There are two important aspects of the role of the ribosome in the context of nascent folding. The first is that folding of a protein is concurrent with its birth. Since the mRNA is translated from 5' to 3', proteins are synthesized from the N terminus to the C terminus. The time interval between the emergence of two successive α-C atoms will be denoted by t_w. The second aspect is that the surface of the ribosome provides excluded volume and, therefore, reduces the conformational entropy. Both aspects can be captured by a model in which the ribosome is represented by an infinite plate which gives birth to a protein at one fixed location. We take the plate to generate a laterally uniform repulsive potential of the form V(z) = ε (σ_0/z)^9, where z denotes the distance away from the plate and σ_0 = 4 × 2^{-1/6} Å. This form of the potential comes from integrating the energy of interaction between a Lennard-Jones particle and a semi-infinite continuum below z = 0 and discarding the attractive part. A coarse-grained model of cotranslational folding with a molecularly sculpted ribosome has been proposed by Elcock [27]. We have adopted a less sophisticated model in order to enable simulations of hundreds of trajectories that last long - formation of a deep knot is a rare event and making simplifications is necessary.
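A toy rendering of the two ingredients is given below. The z^-9 form of the plate potential is our reconstruction of a garbled formula (it is what one obtains by integrating a 12-6 Lennard-Jones interaction over a half-space and keeping only the repulsive part), so it should be read as a sketch rather than the paper's exact expression.

```python
SIGMA0 = 4.0 * 2.0 ** (-1.0 / 6.0)  # in Angstrom, as given in the text

def wall_energy(z, eps=1.0, sigma0=SIGMA0):
    """Purely repulsive, laterally uniform plate potential: V(z) = eps*(sigma0/z)**9."""
    return eps * (sigma0 / z) ** 9

def n_released(t, t_w, n_total):
    """Cotranslational birth: one alpha-C atom is released every t_w time units,
    starting from the N terminus; the rest are not yet part of the chain."""
    return min(n_total, 1 + int(t // t_w))
```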
Results
The PDB structure file of 1J85 provides coordinates of 156 residues. About half of them are arranged into seven α-helices and eight β-strands [20]. The native conformation of 1J85 is shown in panel A of Fig. 1. Segments 1-74, 75-95, 96-120, and 121-156 are shown in green, red, green, and purple respectively. The reason for this color convention is that, in order to form the knot in (at least) cotranslational folding, the purple segment (121-141) must form a slip-loop that goes through the red knot-loop (see the defining drawings in ref. [28]). The knot ends are at LEU-75 and LYS-119 in the native state - a separation of 44 sites. On stretching, the knot tightens up and the ultimate separation between the knot ends becomes 10 [6,9].
Equilibrium properties and thermal unfolding
In order to set the stage, we first consider a situation in which 1J85 is set in the native state and then undergoes time evolution at various temperatures. At any finite T, some number of contacts break down (the distance between the α-C atoms in residues that make a contact becomes larger than 1.5 times the native distance) and some get restored. The top panel of Fig. 2 shows the probability, P_0, of all native contacts being simultaneously established as a function of T. Similar to the lattice models of proteins [29] (see also an exact analysis [30,31]), one may define T_f as the temperature at which P_0 crosses through 1/2. For the OV contact map, T_f = 0.194 ε/k_B and for the OV-CSU contact map it is 0.204 ε/k_B - just a small shift. For both of these contact maps, P_0 is essentially close to zero at T = 0.3 ε/k_B (1% and 3% for OV and OV-CSU respectively). However, at this temperature, the fraction of the native contacts present, Q, is high (the middle panel of Fig. 2). Q crosses 1/2 at 0.827 and 0.908 ε/k_B for OV and OV-CSU respectively. Folding is said to occur when all native contacts are established simultaneously, and the kinetic optimality (typically around 0.3-0.35 ε/k_B) may take place where P_0 is small.
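Both order parameters can be computed directly from a boolean contact-occupancy record; the sketch below (with a hypothetical array layout) also interpolates the temperature at which a curve crosses 1/2, which is how T_f is read off here.

```python
import numpy as np

def p0_and_q(occupancy):
    """occupancy: (n_frames, n_contacts) boolean array, True where a native
    contact is present. Returns P_0 (all contacts simultaneously present)
    and Q (mean fraction of native contacts present)."""
    occ = np.asarray(occupancy, bool)
    return occ.all(axis=1).mean(), occ.mean()

def crossing_temperature(temps, values, level=0.5):
    """Linear interpolation of the T at which values(T) crosses `level`."""
    t, v = np.asarray(temps, float), np.asarray(values, float)
    for i in range(len(t) - 1):
        if (v[i] - level) * (v[i + 1] - level) <= 0 and v[i] != v[i + 1]:
            return t[i] + (level - v[i]) * (t[i + 1] - t[i]) / (v[i + 1] - v[i])
    return None  # no crossing found in the scanned temperature range
```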
It should be noted that in ref. [11] T_f is defined as corresponding to the temperature at which half of the native bonds are present (i.e. around 0.8 ε/k_B) - a much more relaxed criterion compared to P_0 crossing 1/2. At this elevated temperature, substantially higher than the room temperature, the probability of maintaining the native conformation is simply zero. However, the temperature at which Q crosses 1/2 signals the onset of globular shapes in the model protein. The simulations reported on in ref. [11] could have been done around the T_f defined through Q = 1/2, as this is the only temperature mentioned in the text.
The fluctuations in the contact occupation numbers may or may not affect the sequential locations of the knot ends. Fig. 3 shows locations of the knot in examples of trajectories at several temperatures. At T = 0.3 ε/k_B, the knot ends stay fixed for at least 20 000 000 τ - a duration which is at least three orders of magnitude longer than the optimal folding times of unknotted proteins within the same model [32]. At T = 1.00 ε/k_B, the knot ends stay put for a while and then diffuse out of the chain rapidly. At T = 0.95 ε/k_B we observe an intermittent behavior in which the knot disappears and is then restored. If there is any intermittency at T = 0.9 ε/k_B, then the recovery time is longer than the scale of the simulations. The bottom panel of Fig. 2 shows the median unfolding time defined as in ref. [33], i.e. through the instant at which all contacts that are sequentially separated by more than l are broken simultaneously. Taking l of 4 (local helical contacts) was numerically unfeasible, so we took l = 10. The statistics were based on 110 trajectories. At T = 1.0 ε/k_B and below, less than 50% of the trajectories led to unfolding and the median time could not be determined - with the cutoff of 30 000 τ.
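The unfolding criterion of ref. [33] used here translates into a few lines over the same occupancy record; the sketch assumes the per-contact sequence separations are known, and the median is then taken over the per-trajectory times (e.g. with numpy.median).

```python
import numpy as np

def unfolding_time(occupancy, seq_sep, times, l_min=10):
    """First instant at which ALL contacts with |i - j| > l_min are broken
    simultaneously; returns None if this never happens within the run."""
    occ = np.asarray(occupancy, bool)
    nonlocal_mask = np.asarray(seq_sep) > l_min
    unfolded_frames = np.flatnonzero(~occ[:, nonlocal_mask].any(axis=1))
    return times[unfolded_frames[0]] if unfolded_frames.size else None
```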
Folding in unbounded space
We now consider folding in an unbounded space, when one starts from a random conformation which is nearly fully extended. Even though we use the same model as in ref. [11], we do not find even one single trajectory that would lead to correct folding. However, there were trajectories which led to the establishment of all native contacts in an unknotted way. We shall refer to such situations as corresponding to misfolding. One of them is shown in panel C of Fig. 1. The lack of proper folding in 1J85 essentially agrees with the result of ref. [12] (where a 0.1% success rate is reported). We have generated 1201 trajectories at 0.35 ε/k_B and between 200 and 314 trajectories at 0.1, 0.2 and 0.3 ε/k_B. The simulational cutoff time was 1 000 000 τ. With these statistics, the 1-2% success rate claimed in ref. [11] (at unspecified temperatures) would mean observation of good folding in at least 12 trajectories. We speculate that the discrepancy may be due to the following factors: a) possible biases in the initial conformations used in that reference, b) malfunctioning of the KMT algorithm (which may show knots when none are present), and c) a folding temperature range narrower than 0.025 ε/k_B, i.e. the steps we considered, such that the right T to fold was found accidentally. It should be noted that we have obtained a total of 338 misfolded trajectories (17% of all trajectories); 143 of these are at 0.35 ε/k_B and correspond to a mean folding time of 472 651 τ. It is possible to mistake some of the misfolded states for the knotted ones. We find neither folding nor misfolding at the T_f defined by the condition Q = 1/2. Not much changes when one attempts to reduce the conformational entropy by anchoring the C terminus: out of a total of 427 trajectories (with T between 0.1 and 0.4 ε/k_B), 93 (i.e. 22%) established all native contacts without forming the knot.
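Since point (b) above blames a malfunctioning KMT algorithm, it is worth spelling out what a minimal KMT reduction looks like. The sketch below is bare-bones (no chain closure, naive O(N^2) scanning) and is not a validated knot detector: real pipelines first close the open chain and then confirm the knot type with a topological invariant.

```python
import numpy as np

def _seg_hits_triangle(p0, p1, a, b, c, eps=1e-9):
    """Moller-Trumbore test: does the open segment p0-p1 pierce triangle abc?"""
    d = p1 - p0
    e1, e2 = b - a, c - a
    h = np.cross(d, e2)
    det = np.dot(e1, h)
    if abs(det) < eps:
        return False  # segment (nearly) parallel to the triangle plane
    f = 1.0 / det
    s = p0 - a
    u = f * np.dot(s, h)
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(s, e1)
    v = f * np.dot(d, q)
    if v < 0.0 or u + v > 1.0:
        return False
    t = f * np.dot(e2, q)
    return eps < t < 1.0 - eps  # strictly interior crossing of the segment

def kmt_reduce(coords):
    """Iteratively delete bead i whenever triangle (i-1, i, i+1) is not pierced
    by any non-adjacent chain segment; a chain that reduces to very few beads
    is (heuristically) unknotted."""
    pts = [np.asarray(p, float) for p in coords]
    changed = True
    while changed and len(pts) > 3:
        changed = False
        i = 1
        while i < len(pts) - 1:
            blocked = any(
                _seg_hits_triangle(pts[j], pts[j + 1], pts[i - 1], pts[i], pts[i + 1])
                for j in range(len(pts) - 1) if j not in (i - 2, i - 1, i, i + 1)
            )
            if blocked:
                i += 1
            else:
                del pts[i]  # triangle is empty: remove the middle bead
                changed = True
    return pts
```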
We have also considered a different Go-like model, along the lines of Clementi et al. [34]. In this model, the backbone stiffness is accounted for by the more common bond- and dihedral-angle potentials. The contact interactions are given by the 10-12 Lennard-Jones potentials, and we take the contact map to be given by OV (here we removed the i, i+2 and i, i+3 contacts). For this model, T_f moves upward by about 0.2 ε/k_B and P_0 is about 0.01 at 0.6 ε/k_B. At this T, only one out of 246 trajectories led to the correct folding with the proper knot (the folding time was 510 264 τ). At temperatures 0.7 ε/k_B and 0.75 ε/k_B we get 1% and 7% of correctly folded trajectories respectively, and none at lower temperatures. This model, however, is not the one considered in ref. [11].
Folding on the ribosome
The percentage-wise success, S, of reaching the properly knotted folded conformation increases substantially by simulating the process in the cotranslational way. The results are summarized in Fig. 4. They depend on the value of t_w, but there appears to be a saturation in S around t_w of 5 000 τ. The experimental times of translation are certainly much longer than ∼5 µs, but this is not expected to affect S. However, the corresponding S for misfolding (the inset in the lower panel) may reach saturation at a larger t_w.
The data for the evidently optimal 0.35 ε/k_B were obtained based on 500 trajectories for each t_w that was considered. The same statistics were used for 0.325 and 0.375 ε/k_B and t_w of 5 000 τ. In other cases, there were at least 100 trajectories. The average combined time of folding under the optimal conditions with t_w = 5 000 τ is 860 353 τ. Comparable times are found at 0.325 and 0.375 ε/k_B.
Interestingly, switching to the Clementi et al.-like model generates no proper folding, independent of whether the contact map is OV, CSU, or OV-CSU (in the temperature range between 0.45 and 1.3 ε/k_B). The backbone chain appears to be too stiff to allow for a knot in this model.
The lower panel of Fig. 4 shows that the time evolution from extended states results in a substantial percentage of misfolded conformations. This percentage gets boosted by the nascent conditions from 33% to 37% if the evolution takes place at 0.3 ε/k_B. At this T, there is no knotted folding. On the other hand, at the knot-optimal temperature of 0.35 ε/k_B the nascent conditions produce only 6.6% of misfolded states. Fig. 5 shows five snapshots of an example of a successful folding trajectory in our basic model of 1J85. (A related movie is available as the Supplementary Material.) The sequential fragments emerge from the model ribosome in order: first green, then red, green again, and finally purple. At stage A, there is no tertiary order yet. At stage B, the knot-loop segment (75-95) has left the model ribosome. At stage C, the knot-loop forms a nearly planar and nearly closed contour and residue 121 arrives at the plane of this contour. Another perspective on this stage is shown in Fig. 6, where the knot-loop is shaded in red. The C-terminal part (121-156, in purple) of the protein must drag through the knot-loop, and the success rate depends on how well residue 121 is pinned to the plane of the knot-loop. There are eight OV-based contacts that residue 121 makes with the knot-loop, as shown in Fig. 7. However, the stabilization of the attachment is enhanced by the CSU-derived contact 88-121. A fully formed slipknot is shown in panel D of Fig. 5 and, in more detail, in Fig. 8. At the very next stage E, the protein gets detached from the ribosome and becomes knotted because the C-terminal segment goes through the knot-loop.
Our results confirm the picture proposed in ref. [11] that knotting in 1J85 is enabled by the slipknotting mechanism. However, we see it operational only when the protein is nascent. The wall facilitates formation of the C-terminal slipknot on the correct side of the knot-loop and it provides semi-confinement that allows for making repeated attempts to drag the slip-loop through the knot-loop. When one removes the wall but starts the evolution from the slipknot conformation, then - at optimality - 75% of trajectories lead to the knot formation.
It is interesting to note that inclusion of the 88-121 contact, at T = 0.35 ε/k_B and for t_w = 5 000 τ, boosts S from 3.0% to 4.8%. An inclusion of all missing CSU contacts (the OV-CSU contact map) boosts it even further, to 6.2%. All of these contacts should be considered native. Wallin et al. [35] have argued that non-native contacts are necessary to fold 1J85 to the knotted state. Specifically, they added exponentially decaying non-native contacts between segments 86-93 and 122-147. Their definition of a native contact is that two heavy atoms from different residues must fall within the distance of 4.5 Å from one another. This procedure misses the 88-121 contact. In our contact map, we generate 13 OV native contacts for the segments chosen by Wallin et al.; CSU adds four more, with 88-121 being the fifth one. Thus there is no need to invoke non-native contacts to explain folding in 1J85.
The energetics of the slipknot formation during folding on the ribosome
We have observed that the knotting process itself is very rapid. It takes quite a long time for a protein to get to state (a) shown in Fig. 9, in which the slip-loop is positioned just above the knot-loop. We expect that the specific amino-acid arrangement leads to the formation of a potential well and, therefore, to the emergence of a force that drags the slip-loop through the knot-loop. To prove this, we have monitored the potential energy associated with specific amino acids from the slip-loop. The top panels in Fig. 9 show the energy experienced by one of the amino acids from the slip-loop, the 125th in the sequence, when placed at various locations within the plane parallel to the plane of the model ribosome and crossing through this amino acid. The potential well is very localized. In state (a), its depth is close to 0.50 ε and in state (b), 200 τ later, it increases to 1.0 ε. In state (c), the slipknot is already created; the well becomes very shallow - 0.07 ε - and the slipknot ceases to move forward any further. If at stage (a) the conformation of the knot-loop is such that the well is not sufficiently deep, then no proper knotting takes place.
Other deeply knotted proteins
We have considered two other deeply knotted (hypothetical) proteins [19]: 1O6D from Thermotoga maritima and 1VH0 from Staphylococcus aureus. Both were analyzed previously through stretching simulations [6]. We find no properly folded trajectories for 1O6D and 1VH0, either with or without the wall. However, we have obtained substantial percentages of misfolding situations. For 1O6D, 41 out of 400 trajectories without the wall and 151 out of 400 with the wall resulted in misfolding. For 1VH0, the corresponding numbers are 136 out of 500 and 203 out of 390. Thus the ribosome-imitating wall helps in folding through semi-confinement, but not sufficiently to form the proper knots in a noticeable way. However, more sophisticated models of the ribosome might work better.
Conclusions
We have used the structure-based molecular dynamics model to study folding of deeply knotted proteins. The structure-based models favor formation of the native contacts but carry no topological bias. It is an open question how to implement such a bias in a model. Establishment of the native contacts usually does not mean establishing the knot. The nascent conditions are found to enhance the probability of establishing the contacts and sometimes also the formation of the knot. We also find that achieving proper folding requires staying within the proper range of temperatures.
Our results are consistent with the experiments of Mallam et al. [36] showing that nascent proteins form knots more easily, and with the observation that knots in 1J85 (YibK) persist in the chemically denatured state [37]. Independent of this, the GroEL-GroES chaperonin complex may also accelerate knotted folding [36].
Our results for 1J85 support the slipknot mechanism in the knot formation but suggest that it may operate only under the nascent conditions and when the temperature is within an optimal range. The geometry of the ribosome tunnel should provide even more confinement than that given just by the plane and should boost the efficiency of cotranslational folding. Finally, there is no need to invoke non-native contacts to fold to the knotted state in this system.
Proteins with shallow knots, such as MJ0366 from Methanocaldococcus jannaschii with the PDB structure code of 2EFV, studied theoretically in refs. [38,39], appear to fold in a different way both off- and on-ribosome. A discussion of this problem is being prepared for a separate report. | 2015-09-03T13:14:31.000Z | 2015-08-20T00:00:00.000 | {
"year": 2015,
"sha1": "8f3e3385e2ec73a5adbcd427859e6b4beebef197",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1509.01067",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "8f3e3385e2ec73a5adbcd427859e6b4beebef197",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Physics",
"Biology",
"Chemistry",
"Medicine"
]
} |
225023146 | pes2o/s2orc | v3-fos-license | Assessment of Job Satisfaction Level and Its Associated Factors among Health Workers in Addis Ababa Health Centers: A Cross-Sectional Study
Health workers account for the largest share of public expenditures on health and play an important role in improving the quality of health services. There is concern that poor health worker performance limits the effectiveness of health systems strengthening efforts. A cross-sectional study was conducted from September to October 2016 in Addis Ababa health centers. Data were collected from 420 healthcare workers using a pretested and structured questionnaire by trained data collectors. EPI Info 7 was used for data entry, and analysis was done with SPSS version 20. Bivariate and multivariate logistic analyses were used to identify factors associated with the outcome variable and to control confounders. P values less than 0.05 were considered statistically significant. The overall job satisfaction level was 53.8%, with a 95% CI of (48.9%, 59.0%). Marital status and professional qualification were the potent predictors of job satisfaction. Respondents who never married were 1.65 times more likely to be satisfied in their job than those who were married or divorced (AOR: 1.65 (95% CI: 1.02, 2.66)). Laboratory professionals and nursing professionals were 2.74 and 1.97 times more likely to be satisfied in their job compared to health officers (AOR: 2.74 (95% CI: 1.14, 6.59) and AOR: 1.97 (95% CI: 1.12, 3.48), respectively). More than half of the healthcare workers in the study area were satisfied in their job. Marital status and healthcare workers' profession type were predictors of job satisfaction. Research studies indicate that there is a positive relationship between performance and job satisfaction. Accordingly, the present study aimed at determining the level of job satisfaction of health workers and its associated factors in the health centers of Addis Ababa, Ethiopia.
Introduction
Health workers account for the largest share of public expenditures on health and play a crucial role in efforts to improve the availability and quality of health services [1]. Job satisfaction of employees is one of the most challenging concepts in any organization and is the basis for many of the policies and management strategies to increase productivity and efficiency [2]. In Ethiopia, the overall performance of health workers is negatively impacted by low levels of health worker motivation and job satisfaction [1].
Job satisfaction is influenced by many factors, including environmental and personal factors: income, the nature and social status of the job, organizational prestige, promotion, job security, role ambiguity, relationships with coworkers, and physical job conditions such as lighting, noise, and space that affect people's ability to work [2]. The overall job satisfaction of health workers recorded in Jimma University Specialized Hospital was 41.4% [3]. The dissatisfaction level of nurses in public health facilities of Ethiopia was 47% [4]. Research studies on job satisfaction levels in Ethiopia recorded up to 67% in nurses [5] and 62% in midwives [6], while for general healthcare workers in public health facilities the recorded levels ranged from 41% to 44.5% [7][8][9].
Research consistently demonstrates a relationship between core job characteristics and job satisfaction. Along with higher job satisfaction and motivation, employees performing enriched jobs usually experience lower absenteeism and turnover [10]. Job characteristics have an influence on critical psychological states, which in turn influence personal and work outcomes, given the strength of the employee's growth needs. Positive psychological states are associated with high internal work motivation, high-quality work performance, high satisfaction with the work, and low absenteeism and turnover [11]. The core job characteristics significantly predict and influence the three psychological states, and such psychological states significantly influence civil servants' internal motivation, general job satisfaction, and their performance [12]. A study conducted in Nigeria revealed a significant, strong positive correlation between the overall work environment and the general job satisfaction of nurses [13].
Research studies have shown that leaders' behavior impacts subordinates such that they are motivated to achieve both organizational goals and their individually valued goals [14]. The main factors that correlated with healthcare workers' overall job satisfaction were conflict resolution at work, support from one's supervisor, and relationships with coworkers [15]. In Kenya, job satisfaction is an indicator for the recruitment and retention of healthcare staff and the provision of good quality of care [16].
Research studies have indicated that marital status, sex, and tenure of service [17], professional background [3], and being a male employee [18] are significantly associated with job satisfaction. In Ethiopia, research studies showed that salary [7,9], the profession of healthcare workers [5,6,8], marital status [5], and sex of healthcare workers [6] were significant predictors of job satisfaction.
Despite the fact that various studies conducted among health workers in different health institutions of the country magnify the severity of the problem, most papers focus on a specific profession and on academic institutions. The aim of this study was to determine the level of job satisfaction of health workers and the associated factors in the health centers of Addis Ababa, Ethiopia.
Methods
An institution-based cross-sectional study was conducted in twenty-four health centers from four subcities in Addis Ababa. Primary data were collected using a structured and pretested questionnaire. A total of 376 healthcare workers participated in the study. Based on the fact that the Ethiopian health policy focuses on mid-level health professionals, the participants selected randomly from the 24 health centers of the capital Addis Ababa were 6 (1.6%) diploma-holder health assistants, 23 (6.1%) diploma-holder pharmacists, 28 (7.4%) first-degree-holder laboratory technicians, 104 (27.7%) first-degree-holder health officers, 89 (23.7%) BSc-degree-holder nurses, 7 (1.9%) diploma-holder laboratory technicians, 82 (21.8%) diploma-holder nurses, and 35 (9.3%) degree-holder pharmacists. There are ten subcity health offices under the Addis Ababa Health Bureau. Four subcity health offices (Bole, Kirkos, Yeka, and Gullele) were selected randomly as a sampling frame. Randomly selected health centers, six from each subcity, were the study areas. Simple random sampling methods were used to select participants at each stage of sampling. Data were collected from these health centers from September to October 2016. The overall job satisfaction level of respondents was determined using a mean score for each factor. Accordingly, a score above the mean was taken as being satisfied for each factor, while a score below the mean was classified as being dissatisfied. The study was conducted after the ethical approval of the Addis Continental Institute of Public Health Institutional Review Board. Data were collected after written consent, with a brief description of the importance of the study given to the participants. The collected data were checked for completeness, accuracy, and consistency. Cleaned and coded data were entered with Epi Info version 3.5.3 software (Atlanta, Georgia) by an experienced data clerk with close supervision and support. Cleaned and edited data were analyzed using the Statistical Package for the Social Sciences (SPSS) version 20. Descriptive and inferential statistical analyses were employed. Bivariate and multivariate logistic regression models were also used to detect the potential predictors of job satisfaction. Factor variables having a P value of 0.25 or less in the bivariate analysis were included in the multivariate analysis. For the relationship between job satisfaction of health workers and predictor variables, P values less than 0.05 were considered statistically significant.
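A compact sketch of this analysis pipeline is shown below; the variable names and the statsmodels-based workflow are illustrative assumptions, not the authors' actual scripts. It covers mean-score dichotomization of the Likert responses, bivariate screening at P <= 0.25, and a single multivariable logistic model.

```python
import pandas as pd
import statsmodels.api as sm

def dichotomize(mean_scores):
    """Score above the sample mean -> satisfied (1); otherwise dissatisfied (0)."""
    return (mean_scores > mean_scores.mean()).astype(int)

def screen_and_fit(df, outcome, predictors, keep_p=0.25):
    """Bivariate screening at P <= keep_p, then one multivariable logistic model."""
    y = df[outcome]
    kept = []
    for var in predictors:
        X = sm.add_constant(pd.get_dummies(df[[var]], drop_first=True).astype(float))
        fit = sm.Logit(y, X).fit(disp=0)
        if fit.pvalues.drop("const").min() <= keep_p:
            kept.append(var)
    X = sm.add_constant(pd.get_dummies(df[kept], drop_first=True).astype(float))
    return kept, sm.Logit(y, X).fit(disp=0)
```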
Ethical approval was obtained from Addis Ababa Health Bureau and Addis Continental Institute of Public Health Institutional Ethics Review Board. Consent was obtained from each participant and confidentiality was assured before data collection.
Results
The mean age of the participants was 29.9 ± 5.8 (standard deviation) years. The age of the participants ranged from 23 to 53. The median age was 28, with an interquartile range of 5 (26 and 31) years. The minimum and maximum years of experience were 1 and 34 years, respectively. The median year of experience was 5 years and the interquartile range of the participants' years of experience was 5 (3 and 8 years). The majority (73.2%) of the respondents belonged to the age groups of 30 years and below. Almost half of the respondents were married (Table 1). The overall level of job satisfaction was 53.8%, with a 95% CI of (48.9%, 59.0%) (Figure 1). The respondents were satisfied with helping others (88.5%), task significance (81.1%), task identity (74.3%), and feedback (74.5%). The factors with the lowest satisfaction were income (6.83%), professional hazard (27%), availability of resources and supplies (27%), and workload (32.2%). The study revealed 206 health professionals satisfied and 166 dissatisfied in their job (Figure 2).
Further analysis was performed using crude odds ratios (CORs) with 95% confidence intervals in the bivariate analyses and adjusted odds ratios (AORs) with 95% confidence intervals obtained from the multivariate logistic regression model. In the model, the variables which specify the respondents' level of satisfaction with the job were considered as outcome variables, whereas gender, age group, marital status, level of education, professional qualification, and income level were categorized as explanatory variables.
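Given fitted models of this kind, the crude and adjusted odds ratios quoted below come from exponentiating the logistic coefficients; a minimal helper (assuming the statsmodels results object from the previous sketch) is:

```python
import numpy as np

def odds_ratios(fit, alpha=0.05):
    """Exponentiate logistic-regression coefficients to get ORs with CIs.
    Applied to a single-predictor model this yields the COR; applied to the
    joint model, the AOR."""
    return np.exp(fit.params), np.exp(fit.conf_int(alpha))
```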
In the multivariate logistic regression model, marital status and professional qualification were the potent predictors of job satisfaction. Those respondents who had never married were 1.65 times more likely to be satisfied in their job (AOR: 1.65 (95% CI: 1.02, 2.66)) than those who were married or divorced. Laboratory professionals and nurse professionals were 2.74 times (AOR: 2.74 (95% CI: 1.14, 6.59)) and 1.97 times (AOR: 1.97 (95% CI: 1.12, 3.48)), respectively, more likely to be satisfied in their job compared to health officers (Table 2).
Discussion
The overall level of job satisfaction among the health workers in Addis Ababa health centers was 53.8% satisfied and 46.2% dissatisfied. Marital status and professional qualification were found to be the potent predictors of job satisfaction. The major subscales which contributed most to the overall job satisfaction were helping others (88.5%), task significance (81.1%), task identity (74.3%), and feedback (74.5%). The study incorporated the five core job characteristics of Hackman and Oldham's job characteristics model into the tool [19]. This is very important, as policymakers can use the results as inputs while designing strategies to improve healthcare workers' satisfaction and increase the retention rate of health workers in different governmental healthcare facilities. The overall satisfaction level of the present study (53.8%) was not consistent with a previous study done in Jimma Specialized Hospital (41.4%) [4], whereas it was consistent with that of nurses working in Sidama Zone Public Health Facilities (52.5%) [3]. A similar finding on midwifery showed that 52.9% of workers were satisfied in their job [6]. Similar findings were also observed among fast food outlet managers in Malaysia, in a correlational study conducted to examine the contribution of Hackman and Oldham's core job characteristics theory, which states that when a job has a high score on the five core characteristics it is likely to generate three psychological states, which can lead to positive work outcomes such as high internal work motivation, high satisfaction with the work, high-quality work performance, and low absenteeism and turnover [19]. The result was also in line with the study conducted in Nigeria, in which the relationship between core job characteristics and job satisfaction influenced the overall satisfaction of employees on core job dimensions [12]. The job satisfaction level of the present study was high compared to a study done in Pakistan, in which 86% of nurses were dissatisfied in their job [20]. The major reason for dissatisfaction in this study was income (6.83%). Income has always been reported to be the usual predictor of job satisfaction of health workers [21]. It revealed that only 5.7% of nurses were satisfied in terms of basic salary. Other factors that contributed to the dissatisfaction of health workers were professional hazard (27%), availability of resources and supplies (27%), workload (32.2%), and physical working place conditions (39.1%).
Thus, these factors may be the reason for the differences in job satisfaction between health facilities.
Marital status and professional qualification were the potent predictors of job satisfaction. Regarding the relationship between different factors and job satisfaction, the finding was consistent with a study done in England in which married employees were less satisfied with their job than single ones [22]. This finding is supported by other similar studies in Ethiopia in which marital status and type of profession were the predictors of job satisfaction [6,7].
A similar study done among non-academic staff in the universities of Sri Lanka showed that the job satisfaction of unmarried staff was higher than that of married staff [18]. However, the result was not consistent with several studies conducted inside and outside the country [8,23]. In these studies, singles reported a higher level of job stress and a lower level of job satisfaction than their married counterparts [3]. This is justified by the support from the spouse, which may lower job tension after the day's work and which is not available to single workers [4].
In the study, laboratory professionals and nurses were more likely to be satisfied in their job than health officers. This professional category difference was also observed in different research studies [5,8,23]. This professional variation may reflect differences in salary and profession type that lead to differing job satisfaction status [6,9,24]. The paper is not free from limitations: the analysis was conducted after converting the Likert-scale responses into a binary scale based on the mean score, which collapses the responses from five categories into two, while the multivariable logistic regression we used clearly indicated the predictor variables, marital status and type of health profession.
Conclusion
The study indicated a low level of job satisfaction in Addis Ababa health centers, which can greatly affect the quality of health services provided. Marital status and type of profession were predictors of the job satisfaction level of healthcare workers. Further investigation is needed to explain the controversial finding of the present study that singles were more satisfied than married workers, unlike other studies in similar settings.
Recommendations
Among the major subscales in the present study, three factors adopted from the job characteristics model, besides helping others, contributed most to the overall job satisfaction score:
(1) Skill variety: the degree to which a job requires a variety of different activities in carrying out the work, involving the use of a number of skills and talents of the employee.
(2) Task identity: encourages the feeling that the job is meaningful and worthwhile, thus motivating the employee to work smart.
(3) Feedback: the degree to which carrying out the work activities required by the job results in the employee obtaining information about the effectiveness of his or her performance.
Motivation can be enhanced by making the job so interesting and the worker so responsible that he or she is motivated simply by performing the job. Specifically, enriching jobs with skill variety, task identity, and feedback gives employees tasks requiring higher levels of skill and responsibility and greater control over how to perform their jobs.
Data Availability
The data used to support the findings of the study are available from the corresponding author at any time on request through mesfinaklilu@yahoo.com.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 2020-10-19T18:09:12.173Z | 2020-09-29T00:00:00.000 | {
"year": 2020,
"sha1": "dbd73a30909fb7d716fdee5769cf745f1de52c43",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/aph/2020/1085029.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2cc41cbb1e0b619db0ab15172c901872fbc3f55d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233835358 | pes2o/s2orc | v3-fos-license | Strength and fire resistance characteristics of geopolymers synthesized from volcanic ash, red clay and waste pen shells
This study utilized volcanic ash and red clay, as well as calcined waste pen shell (Baluko), in the production of geopolymer-based materials. The geopolymers were formed by activating the mixture of these raw materials (as the alumina-silica-rich materials) with an activating solution of 12M NaOH/Na2SiO3 (w/w: 2.5:1). Two sample types, a cube type and a slab type, were used in the study in order to conform to the test standards for compressive strength and fire resistance. The cube-type molds were for the compressive strength tests while the slab type was used for the fire resistance tests. Material testing such as Fourier Transform Infrared (FTIR) spectroscopy was used to analyze the chemical characteristics of both the raw materials and the geopolymer specimens. The mixture containing 45% volcanic ash, 45% red clay and 10% calcined waste pen shell powder (by weight) was observed to have the highest compressive strength out of all the samples tested. The fire resistance of the geopolymers formed from a ternary mixture of 16% volcanic ash, 66.67% red clay and 16% calcined waste pen shell powder (by weight) was also observed to be comparable to that of ordinary Portland cement (OPC). Furthermore, the FTIR results of both the raw materials and the geopolymers showed evidence that geopolymerization occurred in the samples, indicating that the selected precursors are viable for use in the formation of geopolymers.
Introduction
Geopolymers, also referred to as alkali-activated cement, geocement, hydroceramic, etc., are inorganic polymers synthesized by the addition of an alkali solution to an aluminosilicate-rich precursor material. They are dubbed a next-generation cement, but with a lower carbon footprint and embodied energy compared to ordinary Portland cement (OPC) [1][2][3]. Some of their properties are high compressive strength, good chemical resistance and high temperature resistance, attributed to their three-dimensional structure of aluminate and silicate tetrahedra joined at oxygen corners [4,5]. Some recent studies show that geopolymers could also be utilized for wastewater treatment, as a heavy metal adsorbent, and as a heterogeneous catalyst [6][7][8][9].
Geopolymer technology allows for waste valorization of industrial and agro-industrial by-products and waste materials that have high concentration of alumina and silica. These include coal ash, rice hull ash, red mud, blast furnace slags, among others [10][11][12].
Volcanic ashes, natural pozzolans rich in alumina and silica, have been used in geopolymer studies [13][14]. In this study, red clay, which is also naturally rich in minerals, is combined with volcanic ash as the aluminosilicate raw material mixture in the geopolymer synthesis. The addition of calcined Baluko shells, a type of pen seashell rich in calcium, is done in consideration of the calcium-silicate-hydrate, CaO-SiO2-H2O or C-S-H, reaction path in the presence of water [15,16]. C-S-H is the primary product of the hydration of silica and calcium and is the major factor in the strength of OPC-based concrete.
Thus, this study explores the locally available Philippine materials of volcanic ash, waste pen shells, and red clay to form geopolymer-based materials. These three materials were sourced from the Bicol Region, where Mayon Volcano is located, red clay is abundant and Baluko (Pinnidae) pen shell meat is a local delicacy. Seashells, such as Baluko shells, are good sources of calcium, and calcined shells have been utilized to make a composite geopolymer with fly ash [17]. The use of such indigenous resources and waste (as are the Baluko pen shells) as raw materials for geopolymer precursors not only mitigates the waste's environmental footprint but also contributes towards producing eco-friendly materials.
Preparation of raw materials
The three materials, volcanic ash, red clay and waste pen shells (Baluko), sourced from the Bicol region in the Philippines, are shown in Figure 1. The volcanic ash and red clay, being initially damp, rocky, and clumped together, underwent drying and grinding to achieve a particle size of not greater than 150 μm. The waste pen shells had to be washed first with sodium hypochlorite to remove contaminants clinging onto the shells before oven drying at 110 °C. The dried shells were then calcined at 700 °C in a muffle furnace.
Synthesis and testing of geopolymer samples
The activating solution was prepared by mixing 12M sodium hydroxide (NaOH) solution with water glass solution (Na2SiO3) in a ratio of 2.5:1. A solid-to-liquid ratio of 0.25 by mass was maintained in all mixtures following the adopted mix design.

Figure 2 shows the infrared spectrum of the red clay. The values indicated in the waveforms are the peaks at which there are bonds and minerals essential to geopolymerization. The peaks from 3624-3693 cm^-1 indicate that kaolinite is present. Kaolinite, Al2Si2O5(OH)4, is a major constituent of red clay which gives sharp absorption bands in the 3600-3700 cm^-1 region [18]. The band at 1040 cm^-1 is due to asymmetric stretching vibrations of the silicate tetrahedron [18]. The peak at 793 cm^-1 indicates the presence of quartz, which is explained by the Si-O stretching vibration. On the other hand, the peak at 677 cm^-1 indicates a Si-O symmetric bending vibration due to a low level of Al-for-Si substitution. The band at 538 cm^-1 is due to the presence of hematite. It overlaps into one broad absorption band centered at 535 cm^-1, assignable to the Fe-O present in kaolinite [18]. The region between 2850-3000 cm^-1 indicates the presence of organic matter. More specifically, the peaks at 2860 and 2927 cm^-1 correspond to the C-H stretching vibrations of some organic contribution [19]. The FTIR pattern after the calcination process indicates a new peak which appears at 3620 cm^-1. This indicates the formation of basic OH groups attached to the calcium atoms, which could make the calcined material more reactive in forming a cementitious-like structure [20].

Figure 5 shows the spectra of some of the geopolymer specimens produced in this experiment. The peak at around 1600-1630 cm^-1 indicates an H-OH bending vibration and is typical for polymeric structures including aluminosilicates [22]. At around 1400 cm^-1, O-C-O stretching of carbonates occurs. The peaks ranging from 950-1200 cm^-1 indicate that there is a T-O-Si asymmetric stretching, in which T can be either Al or Si. The key feature of the spectra is the set of bands around 1000-900 cm^-1, which indicates the presence of a geopolymeric structure. As for evidence of the C-S-H reaction, the presence of the absorption hump at 970-1100 cm^-1 due to polymeric silica and at 800-970 cm^-1 due to the dissolution of calcium silicate is correlated with the development of water bending vibration bands (1500-1700 cm^-1) due to the formation of calcium silicate hydrate, C-S-H [23,24].
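To make the band assignments above easy to apply to a new spectrum, a small helper can flag which diagnostic regions carry appreciable absorbance. The wavenumber windows are the approximate ones quoted in this section, and the absorbance threshold is an arbitrary placeholder.

```python
import numpy as np

BANDS_CM1 = {  # approximate diagnostic windows taken from the discussion above
    "kaolinite O-H stretch": (3600, 3700),
    "organic C-H stretch": (2850, 3000),
    "H-O-H bending / C-S-H related water": (1500, 1700),
    "carbonate O-C-O stretch": (1380, 1450),
    "T-O-Si asymmetric stretch (T = Al or Si)": (950, 1200),
}

def flag_bands(wavenumber, absorbance, threshold=0.1):
    """Return, for each diagnostic window, whether the spectrum shows a band
    at or above the chosen absorbance threshold."""
    wn, ab = np.asarray(wavenumber, float), np.asarray(absorbance, float)
    report = {}
    for name, (lo, hi) in BANDS_CM1.items():
        window = (wn >= lo) & (wn <= hi)
        report[name] = bool(window.any()) and float(ab[window].max()) >= threshold
    return report
```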
Thermogravimetric analysis (TGA) of Baluko seashells
As shown in Figure 6, the Baluko shells experienced a minimal loss of mass from 0 °C to 420 °C of the test. However, within the 500 °C range, the weight of the sample began to decrease. At about 690 °C, a dramatic reduction of weight can be observed. The decrease in mass was due to the thermal decomposition reaction that occurred when CaCO3 was exposed to extreme temperatures, which resulted in the production of the compounds calcium oxide (CaO) and carbon dioxide (CO2). At about 700 °C, a chemical reaction occurred converting CaCO3 into CaO.

Figure 6. Thermograph of Baluko seashell.
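The observed mass drop near 700 °C is consistent with the CaCO3 -> CaO + CO2 stoichiometry; a back-of-the-envelope check (assuming pure calcite, which real shells are not) looks like this:

```python
M_CACO3, M_CO2 = 100.09, 44.01  # molar masses in g/mol

def theoretical_mass_loss_fraction():
    """CO2 released per unit CaCO3: about 0.44, i.e. ~44 % of the initial mass."""
    return M_CO2 / M_CACO3

def cao_yield_g(shell_mass_g, caco3_fraction=1.0):
    """CaO obtained from a shell batch; caco3_fraction < 1 accounts for the
    organic matrix and other phases present in real shells (assumed value)."""
    return shell_mass_g * caco3_fraction * (M_CACO3 - M_CO2) / M_CACO3
```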
Unconfined compressive strength
The mean compressive strengths of the geopolymer samples produced varied over a range of 0.06 MPa to 3.61 MPa for the samples cured at ambient conditions, whereas the mean compressive strengths of the samples pre-cured at 80 °C varied over a range of 0.25 MPa to 7.19 MPa. These results indicate that the pre-curing of the samples at an elevated temperature resulted in an overall gain in the mean compressive strength of the geopolymers [22]. The results of the compressive strength tests are summarized in Figure 7. Moreover, the samples formed from ternary mixtures of volcanic ash, red clay, and calcined Baluko seashell were observed to have higher mean compressive strengths when compared to the samples formed from binary mixtures. It can be observed that the mean compressive strength of the ternary geopolymer binder mixtures ranges from 4.3 MPa to 7.19 MPa, attaining the highest recorded value when equal parts of volcanic ash (45%) and red clay (45%) are incorporated into the mixture, with an amount of calcined Baluko seashell not exceeding 10% of the whole mixture. This implies that the addition of calcined Baluko shells contributes to increasing the strength.
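Selecting the best mix from replicate cube tests is a one-line aggregation; the records below are illustrative placeholders echoing the reported extremes, not the raw data of this study.

```python
import pandas as pd

# (ash %, clay %, shell %, pre-cure, strength in MPa) -- illustrative values only
rows = [(45, 45, 10, "80C", 7.19), (45, 45, 10, "ambient", 3.61),
        (40, 40, 20, "80C", 4.30)]
df = pd.DataFrame(rows, columns=["ash", "clay", "shell", "cure", "mpa"])
# mean strength per mix/curing condition, strongest first
print(df.groupby(["ash", "clay", "shell", "cure"])["mpa"].mean()
        .sort_values(ascending=False).head(1))
```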
Fire resistance tests
The fire resistance tests were conducted following the fire test standard ASTM E119, which exposes one side of the samples to a prescribed temperature vs. time profile, as shown in Figure 8. According to ASTM E119, the material is considered to have failed if the temperature of the unexposed surface has risen 140 °C above its initial temperature. Material failure is also considered if visual cracking takes place. The time at which material failure occurs is taken as the material's fire resistance rating (FRR) [25,26]. The results are summarized in Table 3. It is seen there that the samples with higher calcined seashell content produce geopolymers with a fire resistance rating comparable to that of OPC concrete and also with a higher percentage of residual strength, as seen in Table 4. Figure 9 seems to indicate that the samples with higher volcanic ash content have a higher fire resistance than those with higher red clay content, based on the actual temperature vs. time profile of the unexposed side of the samples and on the physical appearance of the unexposed surfaces before and after fire testing, as shown in Figure 10, which shows more visible cracks for the sample with higher red clay content. | 2021-05-07T00:04:11.334Z | 2021-03-01T00:00:00.000 | {
"year": 2021,
"sha1": "5ef2efd9c4c882f1a2295a1b8994ba219750e8f4",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/1109/1/012068",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "63598772b181517a609c4984e17a909deb0f6662",
"s2fieldsofstudy": [
"Materials Science",
"Environmental Science"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
91935119 | pes2o/s2orc | v3-fos-license | Variation between some apricot varieties in regard to flowering phenology in Boldogkôváralja ,
The blooming periods of apricot varieties of different origin were observed in three subsequent years in Boldogkőváralja, with the aim of helping growers to select varieties appropriate to their weather conditions. There was no large difference in the beginning of blooming in the different years: the greatest variation between the start dates of flowering was about 1 to 3 days, as the experimental site lies near the northern border of apricot cultivation. The length of the flowering period of apricot trees is also inversely related to the date when blooming starts. The small differences in flowering dates and flowering periods are due to the high temperatures during the three seasons of study.

Key words: apricot phenology, blooming and flowering period
Introduction
Phenological data are sensitive indicators of how plants are adapted to the local climate and how they respond to climatic changes. Modeling flowering phenology allows us to identify the meteorological variables determining the reproductive cycle. The phenology of temperate woody plants is assumed to be locally adapted to climate. Nevertheless, recent research shows that local adaptation may not be an important constraint in predicting phenological responses.
Plant phenology is very sensitive to climatic conditions and plays an important role in plant adaptation to climate (Gebler et al. 2007). Modeling plant phenology using climatic variables has become an important issue in recent years, given the impacts of climate change on plant phenology (Hanninen 1990; Chuine et al. 2000; Galán et al. 2005). Many studies have shown that temperature increase is responsible for important changes in plant and animal phenology, such as the advance of spring events and the delay of autumn events (Menzel et al. 2006). Although it is generally assumed that the phenology of woody plants growing in a temperate climate may adapt to different local climate conditions, local adaptation depends on a balance between natural selection and gene flow and will occur if the natural selection processes prevail (Lenormand 2002). When local adaptation exists, phenological models should be fitted for each population in order to obtain more accurate predictions, since the phenological response of various plant populations to climate differs. Recent studies on deciduous trees from northern Europe showed that local adaptation may not be an important constraint in predicting phenology at the species level (Chuine et al. 2000). However, the role of genetic differentiation in plant phenological response should be determined for each species whenever possible, in order to ensure accurate model predictions at the species level, especially if the species is a cultivated one and local varieties have been selected.
Temperatures during winter and early spring are the major factors affecting the blooming of apricot trees (Szalay and Szabó 1999). Flower buds can often be frozen during winter frost periods; flowers and young fruit can also be injured on frost days in April and, as a result, no yield will be obtained (Jakubowski 1988, Krska 1993). The problem of how to avoid or minimise the risk connected with weather conditions remains important in many countries, because that is the only way to increase interest in apricot production.
The aim of this study was the evaluation of the blooming of 14 apricot cultivars in Boldogkőváralja in the 2009, 2010 and 2011 seasons. This will help growers to select varieties appropriate to their weather conditions.
Materials and methods
Trees of several apricot cultivars, about 10 years old and grafted on Myrobalan seedling rootstock, were used; the experiment was established in Boldogkőváralja. The first part of the experiment was laid out in a randomized block design, in five replications with four trees on each plot. All agrotechnical methods used in the whole experiment were carried out as in a commercial apricot orchard, and plant protection followed the current recommendations of the Orchard Protection Programme.
The beginning of the blooming period of the 14 apricot cultivars was observed on each tree separately, from 2009 to 2011. The estimation was based on visual observations made on average three times a year.
Results and discussion
The bloom period comprises the phases of start, main and end of blooming. The easiest to determine is the start. According to the data in Figs. 1, 2 and 3, we can divide all tested varieties according to their phenological parameters into two groups. First: varieties which flower early (Goldbar, Goldstrike, Sweet Cot, Bergarouge, Jumbo Cot, Lilly Cot and Chrisgold). Second: varieties which flower late (Flavor Cot, Zebra, Tom Cot, Robada, Bergeron, Late Cot, Yellow Cot). We can also notice that the difference between all varieties in the start of flowering is about 1 to 3 days; this may be because, on the northern border of apricot cultivation, there is but little difference in the blooming dates of varieties. Actually, the delay between the earliest and latest varieties grown shows differences of 4-5 days. A higher number of varieties and cold spring weather may produce differences of 8-12 days (Nyuitó, 1980; Pedric, 1992). The delay between the first and last blooming varieties depends on the date of the beginning of blooming: the earlier the start, the longer the delay between varieties. This could not be noticed in this experiment, as the differences between the dates of the beginning of flowering were about 1 to 3 days. Within the variety group of Magyar kajszi, Nyuitó (1980) observed differences of 3 to 12 days in the start of flowering, whereas Vachun (1983) stated 1-3 day differences between clones of the Velkopavlovicka variety.
Figure 4 shows that the length of the flowering period of apricot trees is also inversely related to the date when blooming started. As an example, there were 6 to 9 days between the start and the end of blooming, which is in line with the findings of Nyuitó (1980). Also, Suranyi and Molnar (1981) found that Magyar kajszi finished blooming within 6-7 days at 20 °C, whereas it took 8-11 days at 15-20 °C, and 11-16 days at 12-17 °C. This may help us to understand the differing trends of one variety through the 3 seasons of study, as they depended on the temperature: the flowering period will be longer in cool weather than at higher temperatures. Szalay and Szabo (1999) registered the blooming period of 20 apricot varieties: at higher temperatures the period lasted 5 to 7 days, while it lasted 14 to 21 days in cool weather. The sequence of varieties may change according to the season. Also, in our data there are no big differences in the flowering period, as there are no obvious differences in the times of start, main and end of flowering.
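The claimed inverse relation between blooming start date and flowering period length can be checked with a rank correlation; the numbers below are toy values for illustration, not the observed dataset.

```python
import numpy as np
from scipy.stats import spearmanr

start_doy = np.array([95, 96, 97, 98, 99, 100])  # blooming start, day of year (toy)
bloom_days = np.array([9, 9, 8, 7, 7, 6])        # flowering period length in days (toy)
rho, p = spearmanr(start_doy, bloom_days)
print(f"Spearman rho = {rho:.2f} (negative: later start, shorter bloom), p = {p:.3f}")
```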
Conclusion
This study indicates that the varieties follow different trends in flowering dates depending on location and weather; therefore, long-term research is needed to obtain a reliable classification of the varieties.
Fig 1: Time of start, main and end of blooming of 14 apricot varieties in the 2009 season.
Fig 2: Time of start, main and end of blooming of 14 apricot varieties in the 2010 season.
Fig 3: Time of start, main and end of blooming of 14 apricot varieties in the 2011 season.
Fig 4: Length of the flowering period of the apricot varieties. | 2019-04-03T13:08:52.265Z | 2012-04-25T00:00:00.000 | {
"year": 2012,
"sha1": "5a417240ebf152a5aaf6e5ac7cbf459340f209e3",
"oa_license": "CCBY",
"oa_url": "https://ojs.lib.unideb.hu/IJHS/article/download/985/983",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "5a417240ebf152a5aaf6e5ac7cbf459340f209e3",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
231918749 | pes2o/s2orc | v3-fos-license | A Multi-View Approach To Audio-Visual Speaker Verification
Although speaker verification has conventionally been an audio-only task, some practical applications provide both audio and visual streams of input. In these cases, the visual stream provides complementary information and can often be leveraged in conjunction with the acoustics of speech to improve verification performance. In this study, we explore audio-visual approaches to speaker verification, starting with standard fusion techniques to learn joint audio-visual (AV) embeddings, and then propose a novel approach to handle cross-modal verification at test time. Specifically, we investigate unimodal and concatenation-based AV fusion and report the lowest AV equal error rate (EER) of 0.7% on the VoxCeleb1 dataset using our best system. As these methods lack the ability to do cross-modal verification, we introduce a multi-view model which uses a shared classifier to map audio and video into the same space. This new approach achieves 28% EER on VoxCeleb1 in the challenging testing condition of cross-modal verification.
INTRODUCTION
Speaker recognition and verification systems are conventionally based on the speech component as speech is a medium that partially represents the identity of the speaker. However, in a noisy acoustic environment, it can become harder to distinguish different speakers based only on speech signals. In such cases, humans often rely on other signals for the identity which are not affected by acoustic noise, such as facial features. Because of the complementary nature of audio and video, several audio-visual (AV) systems have been proposed for speaker identification [1,2,3].
Such AV identification systems vary depending on their fusion strategies and modeling approaches. As described in other multimodal studies, e.g. [4], fusion methods include early, mid-level and late fusion. Early fusion concatenates inputs and learns joint features of both modalities, mid-level fusion combines information after some independent processing of the two modalities, and late fusion mainly consists of score fusion from unimodal systems. As for modeling approaches, earlier systems make use of probabilistic models such as dynamic Bayesian networks [1], whereas recent studies focus on neural network based modeling [3].
A common approach to achieve speaker verification is to extract a speaker representative embedding from the given utterance and compare a pair of embeddings using a distance measure to determine if the given utterances belong to the same person. Earlier studies have used i-vectors [5] as speaker representations and probabilistic linear discriminant analysis (PLDA) scoring for verification. In recent studies, neural network based speaker embeddings, such as x-vectors [6], are used. These systems usually process the input speech by a network that generates a sequence of features for the utterance, which are then aggregated into a single vector to represent the speaker embedding [7,8]. These aggregation or summarization methods range from temporal pooling to cluster based approaches which are also used in computer vision studies such as NetVLAD [9] and GhostVLAD [10,7].
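To make the embedding-comparison step concrete, the following minimal Python sketch scores a verification trial with cosine similarity and thresholds it; the function names and the fixed default threshold are illustrative assumptions, not part of any of the systems cited above.

import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two speaker embeddings.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_speaker(emb1, emb2, threshold=0.5):
    # Accept the trial when the similarity exceeds a threshold tuned on
    # a development set (e.g., at the equal error rate operating point).
    return cosine_similarity(emb1, emb2) > threshold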
In this study, we first investigate AV speaker verification by learning AV speaker embeddings, which assumes that audio and video data are simultaneously available at test time. To our knowledge, we achieve the best reported AV speaker verification performance on the VoxCeleb1 [11] and VoxCeleb2 [12] datasets. These AV systems assume the availability of both modalities during test time. However, in many practical settings, one of the modalities is degraded or may be missing on one side of the verification pair. For instance, the speaker may be off-screen or have their camera disabled while they are actively speaking, in which case only the audio stream is usable for verification. The audio may be missing or corrupted in some other scenarios. There may also be some verification pairs where audio is the usable modality on one side and video on the other. Here we need to do cross-modal matching, i.e. verifying if a video and an audio signal represent the same person. Late fusion or mid-level fusion cannot handle such cases. Therefore, we propose a multi-view approach that allows us to perform verification in the case where a pair does not have matching modalities available, either audio or video. The proposed multi-view approach achieves this by mapping audio and video into the same space by using a shared classifier on top of the unimodal encoders.
RELATED WORK
There have been several studies on the VoxCeleb1 and VoxCeleb2 datasets, and several benchmarks have been published. The typical setup is to train the system on the VoxCeleb2-dev set and test on the VoxCeleb1-test verification pairs. These setups differ in network architecture and embedding aggregation steps. For example, in [12], a ResNet-50 architecture and time average pooling are used, achieving 4.2% EER. In [7], a thin ResNet-34 architecture is used along with GhostVLAD pooling, resulting in 3.2% EER. In [8], a convolutional attention model over the time and frequency dimensions is proposed, GhostVLAD based aggregation is applied, and the model achieves 2.0% EER. The lowest EER on the VoxCeleb1 test set is reported in [13], where the best system makes use of data augmentation and system combination.
Although the VoxCeleb2 dataset comes with videos, there are only a few studies on audio-visual approaches for speaker verification. In [3], pretrained face and voice embeddings are fused using a cross-modal attention mechanism on short speech segments (0.1s or 1s). In their tests on VoxCeleb2, they obtain an EER of 5.3%. They also analyze the performance in the case of noisy or missing modality and their performance degrades to 7.7% and 12.2% when voice and face embeddings are omitted, respectively. In [14], an audio-visual self-supervised approach is used to train a system that learns identity and context embeddings separately. As a comparison, they also report audio-only fully-supervised training results on the VoxCeleb1 test set which achieves 7.3% EER. Since they use only 20% of the VoxCeleb2 dataset for training and there is not a standard set of verification pairs for VoxCeleb2, their AV results are not directly comparable to the previous study.
Cross-modal processing has recently been used in different combinations such as audio-video [15,14,16,17] and speech-text [18]. The common approach in these studies is to map inputs from different modalities into a shared space to achieve cross-modal retrieval. For example, in [15], a contrastive loss is used to learn to map matching face and voice embeddings to the same space. In [16], same-different classification is performed on the cosine scores between face and voice embeddings to train the system. In [17], a novel loss function is proposed to learn the embeddings in a shared space. Their loss function tries to preserve neighborhood constraints within and across modalities.
UNIMODAL AND MULTIMODAL MODELS
In the training stage, our verification models are optimized to learn speaker discriminative embeddings. The cosine similarity between the embeddings coming from two videos is then used for verification at test time. In this section, we describe the unimodal and multimodal systems that allow us to generate these embeddings.
Our unimodal systems consist of an encoder (F) followed by a nonlinear classifier (C), as shown in Fig. 1a and 1b. Therefore, the final network output is represented by y_i = C(F(x_i)), where the subscript i in x_i denotes the modality of the input. In order to achieve AV speaker verification using unimodal systems, we use late score fusion. In this fusion, we separately compute the cosine similarity using each unimodal system and then average the similarities to get the final verification scores.
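A hedged sketch of this late score fusion, assuming per-modality embeddings have already been extracted (cosine_similarity as in the earlier sketch); the equal weighting simply averages the two similarities as described above.

def late_fusion_score(audio_embs, video_embs):
    # audio_embs / video_embs: (embedding_1, embedding_2) for the trial,
    # produced by the audio-only and video-only systems respectively.
    s_audio = cosine_similarity(audio_embs[0], audio_embs[1])
    s_video = cosine_similarity(video_embs[0], video_embs[1])
    return 0.5 * (s_audio + s_video)  # average of the unimodal scores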
For joint AV training, we investigate a naive mid-level fusion approach. This is shown in Fig. 1c. In this model, we have separate encoders for audio and video, whose outputs are concatenated along the feature dimension before being fed into a nonlinear classifier. The network output is represented as y_AV = C([F_A(x_A); F_V(x_V)]), where [· ; ·] denotes concatenation along the feature dimension. In this case, a single loss function is applied to the joint output y_AV during training. During verification, AV embeddings from two AV inputs are compared using cosine similarity.
THE MULTIVIEW MODEL
We propose a model that is trained to generate high level representations for audio and video modalities in a space shared across the two modalities. Such a system enables us to use the learned embeddings in a cross-modal testing scheme. We achieve this by using a shared classifier for audio and video encoder outputs; hence, when optimized jointly, the encoder outputs are mapped to a shared space. We call this system a multi-view system since the classifier sees different views of the same input, i.e. the audio component and the visual component of the video input. As shown in Fig. 1d, in the multi-view model, we still have two separate encoders for audio and video (F_A and F_V). If we denote the multi-view classifier by C_M, then the network will have two outputs, one for each modality: y_{M,A} = C_M(F_A(x_A)) and y_{M,V} = C_M(F_V(x_V)). In this study, we jointly train the whole network with a multi-task objective. The total loss L is calculated as L = L_A + L_V, where the unimodal losses L_A and L_V are computed based on y_{M,A} and y_{M,V}, respectively. Note that here we jointly optimize two encoders and a single shared classifier.
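A minimal PyTorch sketch of this multi-view setup, assuming encoder modules that already project both modalities to a common embedding dimension; cross-entropy stands in for the arc-margin loss used later in the paper, and all names are illustrative.

import torch.nn as nn

class MultiViewModel(nn.Module):
    # Two modality-specific encoders share one classifier C_M, so the
    # audio and video embeddings are mapped into the same space.
    def __init__(self, audio_encoder, video_encoder, embed_dim, num_speakers):
        super().__init__()
        self.f_a = audio_encoder  # F_A
        self.f_v = video_encoder  # F_V
        self.c_m = nn.Sequential(  # shared classifier C_M
            nn.Linear(embed_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, num_speakers))

    def forward(self, x_a, x_v):
        y_a = self.c_m(self.f_a(x_a))  # y_{M,A} = C_M(F_A(x_A))
        y_v = self.c_m(self.f_v(x_v))  # y_{M,V} = C_M(F_V(x_V))
        return y_a, y_v

criterion = nn.CrossEntropyLoss()  # stand-in for the arc-margin loss

def multi_task_loss(y_a, y_v, labels):
    # Total loss L = L_A + L_V, one term per modality.
    return criterion(y_a, labels) + criterion(y_v, labels)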
EXPERIMENTS AND RESULTS
We perform our training on the VoxCeleb2 (VC2) dev set and test on both the VoxCeleb1 (VC1) and VC2 test sets. For the VC1 case, we use the verification pairs provided as part of the dataset. For VC2, we sample one positive and one negative video for each test set video, as there is no official set of pairs provided with the dataset. The positive video is chosen uniformly at random from the utterances of the same speaker, and the negative video is sampled from a different speaker by first choosing a random speaker and then selecting a random utterance of that sampled negative speaker. To get the training and validation splits, we set aside one video (comprising several utterances) of each speaker for validation and use the rest for training. This gives us roughly 995k training utterances and 97k validation utterances with their corresponding visuals.
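The pairing protocol described above can be sketched as follows; the data layout (a dict mapping speaker IDs to utterance IDs) and the seeding are assumptions for illustration.

import random

def sample_vc2_trials(utts_by_speaker, seed=0):
    # One positive and one negative trial per test utterance.
    rng = random.Random(seed)
    speakers = list(utts_by_speaker)
    trials = []
    for spk, utts in utts_by_speaker.items():
        for utt in utts:
            others = [u for u in utts if u != utt]
            if not others:
                continue  # need a second utterance for a positive trial
            trials.append((utt, rng.choice(others), 1))       # same speaker
            neg_spk = rng.choice([s for s in speakers if s != spk])
            trials.append((utt, rng.choice(utts_by_speaker[neg_spk]), 0))
    return trials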
Implementation Details
We use 64-dimensional logmel features to represent the audio. For the video component, we downsample the data to 2 frames per second and apply face detection to each frame using Detectron2 [19], then resize the face crops to 112x112 pixels. We skip the frames for which we do not have a face detection output. If we cannot detect any of the faces in a video, we use a single zero frame to represent it. In our unimodal and mid-level AV fusion systems, we make use of variants of convolutional neural networks (CNNs) for modeling the encoders. For the audio-only network and for the audio branch of the AV network, we use MobileNetV2 [20]. For the video-only network and the video branch of the AV model, we use a ResNet architecture [21]. At the end of the encoders, we have a sequence of features on both branches. In order to summarize the utterance into a single vector, we pool using a self-attention mechanism on the audio branch and temporal pooling on the video branch. These result in 356-dimensional audio encodings and 2048-dimensional video encodings. In the multi-view system, in order to bring them to the same dimension, in our case 256, we apply a linear projection layer on both branches before feeding them into the classifier. The classifiers consist of a fully connected layer, followed by a ReLU nonlinearity, batch normalization [22], dropout [23], and a linear fully connected layer.
We train all the networks with the arc-margin loss [24]. We use a learning rate of 0.001, which is reduced by a factor of 0.95 when the loss value plateaus, and the batch size is 128.
Experiments on Unimodal and Multimodal Models
In Table 1, we present the EERs on the VC1 and VC2 datasets. The upper part of the table includes the audio-only (A-only) EERs of [8,13] and the attention based fusion proposed in [3]. In the lower part of the table, the first two rows show the unimodal performance of our systems. In the A-only case, we achieve comparable results to the current best performance on VC1. Although [13] reports a lower EER, they make use of either heavy data augmentation or system fusion. The third row reports the EER for our mid-level fusion approach. Since it makes use of both modalities, the EER is lower than either of the unimodal systems. We also experimented with score fusion by averaging the cosine similarity scores from various systems before making the verification decision. The late fusion of the unimodal systems achieves an even lower EER than the naive AV fusion, possibly because the separately optimized A-only and V-only systems learn to capture the best representation of their respective inputs, and their late fusion combines the best decisions from each modality. Furthermore, if we combine all three systems, then we achieve the lowest EERs on both VC1 and VC2. Note that our VC2 EER is not directly comparable to the one reported in [3] as we are not using the same verification pairs. Still, we achieve the lowest EER that has been reported on the VC2 test set so far.
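Since all comparisons above are reported as EER, a brute-force sketch of how the metric can be computed from trial scores and same/different labels may be useful; this is a simple approximation over observed thresholds, not the evaluation code used in the paper.

import numpy as np

def equal_error_rate(scores, labels):
    # Sweep thresholds over the observed scores and return the point where
    # false-acceptance and false-rejection rates are (approximately) equal.
    scores, labels = np.asarray(scores), np.asarray(labels)
    best_gap, best_eer = 1.0, None
    for thr in np.unique(scores):
        accept = scores >= thr
        far = np.mean(accept[labels == 0])   # false acceptance rate
        frr = np.mean(~accept[labels == 1])  # false rejection rate
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), 0.5 * (far + frr)
    return best_eer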
On manually cross-checking the results of our system against the labels in the VC2 test set, we observe some interesting errors.
Experiments on the Multi-view Model
The multi-view model allows for greater flexibility during testing as compared to the other models. With a multi-view model, in addition to unimodal testing, we can apply late fusion to the audio and video similarity scores, as well as average the audio and video embeddings to compute the similarity scores.
In Table 2, we show the unimodal performance of the multi-view model as well as their score fusion. When we compare the A-only performance of the multi-view model with that of the unimodal model, we see some degradation in performance. However, the V-only performance of the multi-view model is comparable to the unimodal system; the differences are less than 0.2%. We think there are a couple of reasons for the reduction in the A-only performance: a) the multi-view model requires the intermediate dimensions of the audio and video embeddings to be the same, which differs from our unimodal systems and reduces the total number of trainable parameters; b) especially at the beginning of training, the statistics of audio and video embeddings differ, so we had to remove the shared batch normalization layer from the classifier part of the network, which might have affected the performance as well. On the other hand, when we look at the score fusion of the multi-view system given in the last row of Table 2, we see that its EER is lower than either of the unimodal systems reported in Table 1 (A-only, V-only). This also shows that score fusion is a simple but effective mechanism to reduce the EER. Another observation is that it is harder to optimize both embeddings simultaneously compared to the separate training case, which causes the EER difference between the A-only and V-only test cases relative to Table 1.

Table 2: Audio-only, video-only and score fusion results from the multi-view system on both VC1 and VC2.
As described in Section 4, the main goal of the multi-view model is to do cross-modal testing. We simulate this A vs. V cross-modal setting. This makes the problem challenging as we try to match the face of a previously unseen person to the voice of a previously unheard person. It has been shown that even human performance on this task is low (more than 20% error [25]). Since the A vs. V cross-modal verification setting is the most difficult situation, it has a higher EER than the results in Table 2, but it is still better than the 50% chance level. Table 3 shows the cross-modal EER of our system and other published systems on the VC1 and VC2 test sets.

Table 3: Cross-modal (A vs. V) EER on the VC1 and VC2 test sets.
Test pairs        VC2 EER    VC1 EER
A vs V of [15]    NA         29.5
A vs V of [17]    NA         29.6
A vs V of [16]    22.5       NA
A vs V (ours)     29.5       28.0

Here we observe that our system's performance is comparable to that of previously published systems. However, we cannot claim that our system is better or worse, as the other works do not use the VoxCeleb dev/test splits in a similar manner. We also performed A vs. AV and V vs. AV verification tests, which lack one modality on only one side. Our experiments show that in such scenarios it is better to use the matched data (A vs. A, or V vs. V) rather than fusing audio and video embeddings linearly, i.e., taking the average of the audio and video embeddings. This is probably because the shared space is not a linear space and does not necessarily cover the linear combination of two embeddings.
CONCLUSIONS
In this work, we first investigated AV speaker verification on the VoxCeleb datasets. We learned AV embeddings from the VC2 dataset and then applied cosine similarity based verification on both the VC2 and VC1 test sets. We showed that with score fusion of the unimodal and mid-level AV fusion models, we achieve the lowest EER reported on the VC1 test set in the AV testing condition. We also proposed a multi-view system that maps audio and video to a shared space and enables the cross-modal verification scenarios of real verification systems. | 2021-02-15T02:15:42.147Z | 2021-02-11T00:00:00.000 | {
"year": 2021,
"sha1": "5eaa4562fbb703cc12e44ca4514520f86b583e26",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2102.06291",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "5eaa4562fbb703cc12e44ca4514520f86b583e26",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
} |
17708014 | pes2o/s2orc | v3-fos-license | The Excitement Open Platform for Textual Inferences
This paper presents the Excitement Open Platform (EOP), a generic architecture and a comprehensive implementation for textual inference in multiple languages. The platform includes state-of-art algorithms, a large number of knowledge resources, and facilities for experimenting and testing innovative approaches. The EOP is distributed as an open source software.
Introduction
In the last decade textual entailment has been a very active topic in Computational Linguistics, providing a unifying framework for textual inference. Several evaluation exercises have been organized around Recognizing Textual Entailment (RTE) challenges and many methodologies, algorithms and knowledge resources have been proposed to address the task. However, research in textual entailment is still fragmented and there is no unifying algorithmic framework nor software architecture.
In this paper, we present the Excitement Open Platform (EOP), a generic architecture and a comprehensive implementation for multilingual textual inference which we make available to the scientific and technological communities. To a large extent, the idea is to follow the successful experience of the Moses open source platform (Koehn et al., 2007) in Machine Translation, which has made a substantial impact on research in that field. The EOP is the result of a two-year coordinated work under the international project EXCITEMENT. 1 A consortium of four academic partners has defined the EOP architectural specifications, implemented the functional interfaces of the EOP components, imported existing entailment engines into the EOP 1 http://www.excitement-project.eu and finally designed and implemented a rich environment to support open source distribution.
The goal of the platform is to provide functionality for the automatic identification of entailment relations among texts. The EOP is based on a modular architecture with a particular focus on language-independent algorithms. It allows developers and users to combine linguistic pipelines, entailment algorithms and linguistic resources within and across languages with as little effort as possible. For example, different entailment decision approaches can share the same resources and the same subcomponents in the platform. A classification-based algorithm can use the distance component of an edit-distance based entailment decision approach, and two different approaches can use the same set of knowledge resources. Moreover, the platform has various multilingual components for languages like English, German and Italian. The result is an ideal software environment for experimenting and testing innovative approaches for textual inferences. The EOP is distributed as open source software, and its use is open both to users interested in using inference in applications and to developers willing to extend the current functionalities.
The paper is structured as follows. Section 2 presents the platform architecture, highlighting how the EOP component-based approach favors interoperability. Section 3 provides a picture of the current population of the EOP in terms of both entailment algorithms and knowledge resources. Section 4 introduces expected use cases of the platform. Finally, Section 5 presents the main features of the open source package.
Architecture
The EOP platform takes as input two text portions, the first called the Text (abbreviated with T), the second called the Hypothesis (abbreviated with H).
Figure 1: EOP architecture
The output is an entailment judgement, either "Entailment" if T entails H, or "NonEntailment" if the relation does not hold. A confidence score for the decision is also returned in both cases.
The EOP architecture (Padó et al., 2014) is based on the concept of modularization with pluggable and replaceable components to enable extension and customization. The overall structure is shown in Figure 1 and consists of two main parts. The Linguistic Analysis Pipeline (LAP) is a series of linguistic annotation components. The Entailment Core (EC) performs the actual entailment recognition. This separation ensures that (a) the components in the EC only rely on linguistic analysis in well-defined ways and (b) the LAP and EC can be run independently of each other. Configuration files are the principal means of configuring the EOP. In the rest of this section we first provide an introduction to the LAP, then we move to the EC and finally describe the configuration files.
Linguistic Analysis Pipeline (LAP)
The Linguistic Analysis Pipeline is a collection of annotation components for Natural Language Processing (NLP) based on the Apache UIMA framework. Annotations range from tokenization to part of speech tagging, chunking, Named Entity Recognition and parsing. The adoption of UIMA enables interoperability among components (e.g., substitution of one parser by another one) while ensuring language independence. Input and output of the components are represented in an extended version of the DKPro type system based on UIMA Common Analysis Structure (CAS) (Gurevych et al., 2007; Noh and Padó, 2013).
Entailment Core (EC)
The Entailment Core performs the actual entailment recognition based on the preprocessed text made by the Linguistic Analysis Pipeline. It consists of one or more Entailment Decision Algorithms (EDAs) and zero or more subordinate components. An EDA takes an entailment decision (i.e., "entailment" or "no entailment") while components provide static and dynamic information for the EDA.
Entailment Decision Algorithms are at the top level in the EC. They compute an entailment decision for a given Text/Hypothesis (T/H) pair, and can use components that provide standardized algorithms or knowledge resources. The EOP ships with several EDAs (cf. Section 3).
Scoring Components accept a Text/Hypothesis pair as an input, and return a vector of scores. Their output can be used directly to build minimal classifier-based EDAs forming complete RTE systems. An extended version of these components are the Distance Components that can produce normalized and unnormalized distance/similarity values in addition to the score vector.
Annotation Components can be used to add different annotations to the Text/Hypothesis pairs. An example of such a type of component is one that produces word or phrase alignments between the Text and the Hypothesis.
Lexical Knowledge Components describe semantic relationships between words. In the EOP, this knowledge is represented as directed rules made up of two word-POS pairs, where the LHS (left-hand side) entails the RHS (right-hand side), e.g., (shooting star, Noun) =⇒ (meteorite, Noun). Lexical Knowledge Components provide an interface that allows for (a) listing all RHS for a given LHS; (b) listing all LHS for a given RHS; and (c) checking for an entailment relation for a given LHS-RHS pair. The interface also wraps all major lexical knowledge sources currently used in RTE research, including manually constructed ontologies like WordNet, and encyclopedic resources like Wikipedia.
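The EOP itself is implemented in Java, but the interface just described can be illustrated with a small language-agnostic sketch (shown here in Python); the class and method names are invented for illustration and are not the EOP's actual API.

from collections import defaultdict

class LexicalResource:
    # Directed lexical rules: (lhs_word, lhs_pos) entails (rhs_word, rhs_pos).
    def __init__(self, rules):
        self._by_lhs = defaultdict(set)
        self._by_rhs = defaultdict(set)
        for lhs, rhs in rules:
            self._by_lhs[lhs].add(rhs)
            self._by_rhs[rhs].add(lhs)

    def rhs_for(self, lhs):       # (a) list all RHS for a given LHS
        return self._by_lhs[lhs]

    def lhs_for(self, rhs):       # (b) list all LHS for a given RHS
        return self._by_rhs[rhs]

    def entails(self, lhs, rhs):  # (c) check a single LHS-RHS pair
        return rhs in self._by_lhs[lhs]

res = LexicalResource([(("shooting star", "Noun"), ("meteorite", "Noun"))])
assert res.entails(("shooting star", "Noun"), ("meteorite", "Noun"))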
Syntactic Knowledge Components capture entailment relationships between syntactic and lexical-syntactic expressions. We represent such relationships by entailment rules that link (optionally lexicalized) dependency tree fragments that can contain variables as nodes. For example, the rules fall of X =⇒ X falls and X sells Y to Z =⇒ Z buys Y from X express general paraphrasing patterns at the predicate-argument level that cannot be captured by purely lexical rules. Formally, each syntactic rule consists of two dependency tree fragments plus a mapping from the variables of the LHS tree to the variables of the RHS tree.
Configuration Files
The EC components can be combined into actual inference engines through configuration files which contain information to build a complete inference engine. A configuration file completely describes an experiment. For example, it specifies the resources that the selected EDA has to use and the data set to be analysed. The LAP needed for data set preprocessing is another parameter that can be configured too. The platform ships with a set of predefined configuration files accompanied by supporting documentation.
Entailment Algorithms and Resources
This section provides a description of the Entailment Algorithms and Knowledge Resources that are distributed with the EOP.
Entailment Algorithms
The current version of the EOP platform ships with three EDAs corresponding to three different approaches to RTE: an EDA based on transformations between T and H, an EDA based on edit distance algorithms, and a classification based EDA using features extracted from T and H.
Transformation-based EDA applies a sequence of transformations on T with the goal of making it identical to H. If each transformation preserves (fully or partially) the meaning of the original text, then it can be concluded that the modified text (which is actually the Hypothesis) can be inferred from the original one. Consider the following simple example where the text is "The boy was located by the police" and the Hypothesis is "The child was found by the police". Two transformations for "boy" → "child" and "located" → "found" do the job.
In the EOP we include a transformation based inference system that adopts the knowledge based transformations of Bar-Haim et al. (2007), while incorporating a probabilistic model to estimate transformation confidences. In addition, it includes a search algorithm which finds an optimal sequence of transformations for any given T/H pair (Stern et al., 2012).
Edit distance EDA involves using algorithms casting textual entailment as the problem of mapping the whole content of T into the content of H. Mappings are performed as sequences of editing operations (i.e., insertion, deletion and substitution) on text portions needed to transform T into H, where each edit operation has a cost associated with it. The underlying intuition is that the probability of an entailment relation between T and H is related to the distance between them; see Kouylekov and Magnini (2005) for a comprehensive experimental study.
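The core of such an approach is a weighted edit distance between the token sequences of T and H; the sketch below uses fixed unit costs purely for illustration, whereas real systems learn or tune the per-operation costs.

def edit_distance(t_tokens, h_tokens, ins=1.0, dele=1.0, sub=1.0):
    # Dynamic program mapping the Text tokens into the Hypothesis tokens
    # via insertion, deletion and substitution operations with fixed costs.
    n, m = len(t_tokens), len(h_tokens)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * dele
    for j in range(1, m + 1):
        d[0][j] = j * ins
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0.0 if t_tokens[i - 1] == h_tokens[j - 1] else sub
            d[i][j] = min(d[i - 1][j] + dele,       # delete a T token
                          d[i][j - 1] + ins,        # insert an H token
                          d[i - 1][j - 1] + cost)   # substitute or match
    return d[n][m]

A lower (length-normalized) distance is then taken as evidence for an entailment relation.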
Classification based EDA uses a Maximum Entropy classifier to combine the outcomes of several scoring functions and to learn a classification model for recognizing entailment. The scoring functions extract a number of features at various linguistic levels (bag-of-words, syntactic dependencies, semantic dependencies, named entities). The approach was thoroughly described in Wang and Neumann (2007).
Knowledge Resources
As described in Section 2.2, knowledge resources are crucial to recognize cases where T and H use different textual expressions (words, phrases) while preserving entailment. The EOP platform includes a wide range of knowledge resources, including lexical and syntactic resources, where some of them are grabbed from manual resources, like dictionaries, while others are learned automatically. Many EOP resources are inherited from pre-existing RTE systems migrated into the EOP platform, but now use the same interfaces, which makes them accessible in a uniform fashion.
There are about two dozen lexical (e.g. wordnets) and syntactic resources for three languages (i.e. English, Italian and German). However, since there is still a clear predominance of English resources, the platform includes lexical and syntactic knowledge mining tools to bootstrap resources from corpora, both for other languages and for specific domains. Particularly, the EOP platform includes a language independent tool to build Wikipedia resources (Shnarch et al., 2009), as well as a language-independent framework for building distributional similarity resources like DIRT (Lin and Pantel, 2002) and Lin similarity (Lin, 1998).
EOP Evaluation
Results for the three EDAs included in the EOP platform are reported in Table 1. Each line represents an EDA, the language and the dataset on which the EDA was evaluated. For brevity, we omit here the knowledge resources used for each EDA, even though the knowledge configuration clearly affects performance. The evaluations were performed on the RTE-3 dataset (Giampiccolo et al., 2007), where the goal is to maximize accuracy. We (manually) translated it to German and Italian for evaluation: in both cases the results establish a reference for the two languages. The two new datasets for German and Italian are available both as part of the EOP distribution and independently. The transformation-based EDA was also evaluated on the RTE-6 dataset (Bentivogli et al., 2010), in which the goal is to maximize the F1 measure. The results of the included EDAs are higher than the median values of the systems that participated in RTE-3, and they are competitive with the state of the art on RTE-6. To the best of our knowledge, the results of the EDAs as provided in the platform are the highest among those available as open source systems for the community.
Use Cases
We see four primary use cases for the EOP. Their requirements were reflected in our design choices.
Use Case 1: Applied Textual Entailment. This category covers users who are not interested in the details of RTE but who are interested in an NLP task in which textual entailment can take over part of or all of the semantic processing, such as Question Answering or Intelligent Tutoring. Such users require a system that is as easy to deploy as possible, which motivates our offer of the EOP platform as a library. They also require a system that provides good quality at a reasonable efficiency as well as guidance as to the best choice of parameters. The latter point is realized through our results archive in the official EOP Wiki on the EOP site.
Use Case 2: Textual Entailment Development. This category covers researchers who are interested in Recognizing Textual Entailment itself, for example with the goal of developing novel algorithms for detecting entailment. In contrast to the first category, this group need to look "under the hood" of the EOP platform and access the source code of the EOP. For this reason, we have spent substantial effort to provide the code in a well-structured and well-documented form.
A subclass of this group is formed by researchers who want to set up an RTE infrastructure for languages in which it does not yet exist (that is, almost all languages). The requirements of this class of users comprise clearly specified procedures to replace the Linguistic Analysis Pipeline, which are covered in our documentation, and simple methods to acquire knowledge resources for these languages (assuming that the EDAs themselves are largely language-independent). These are provided by the language-independent knowledge acquisition tools which we offer alongside the platform (cf. Section 3.2).
Use Case 3: Lexical Semantics Evaluation. A third category consists of researchers whose primary interest is in (lexical) semantics.
As long as their scientific results can be phrased in terms of semantic similarities or inference rules, the EOP platform can be used as a simple and standardized workbench for these results that indicates the impact that the semantic knowledge under consideration has on deciding textual entailment. The main requirement for this user group is the simple integration of new knowledge resources into the EOP platform. This is catered for through the definition of the generic knowledge component interfaces (cf. Section 2.2) and detailed documentation on how to implement these interfaces.
Use Case 4: Educational Use. The fourth and final use case is as an educational tool to support academic courses and projects on Recognizing Textual Entailment and inference more generally. This use case calls, in common with the others, for easy usability and flexibility. Specifically for this use case, we have also developed a series of tutorials aimed at acquainting new users with the EOP platform through a series of exercises of increasing complexity that cover all areas of the EOP. We are also posting proposals for projects to extend the EOP on the EOP Wiki.
EOP Distribution
The EOP infrastructure follows state-of-the-art software engineering standards to support both users and developers with a flexible, scalable and easy to use software environment. In addition to communication channels, like the mailing list and the issue tracking system, the EOP infrastructure comprises the following set of facilities.
Version Control System: We use GitHub (https://github.com/), a web-based hosting service for code and documentation storage, development, and issue tracking.
Web Site: The GitHub Automatic Page Generator was used to build the EOP web site and Wiki, containing a general introduction to the software platform, the terms of its license, mailing lists to contact the EOP members and links to the code releases.
Documentation: Both user and developer documentation is available from Wiki pages; the pages are written with the GitHub Wiki Editor and hosted on the GitHub repository. The documentation includes a Quick Start guide to start using the EOP platform right away, and a detailed step by step tutorial.
Results Archive: As a new feature for community building, EOP users can, and are encouraged to, share their results: the platform configuration files used to produce results, as well as contact information, can be saved and archived into a dedicated page on the EOP GitHub repository. That allows other EOP users to replicate experiments under the same conditions and/or avoid running experiments that have already been done.
Build Automation Tool: The EOP has been developed as a Maven multi-module project, with all modules sharing the same standard Maven structure, making it easier to find files in the project once one is used to Maven.
Maven Artifacts Repository: Using a Maven repository has a twofold goal: (i) to serve as an internal private repository of all software libraries used within the project (libraries are binary files and should not be stored under version control systems, which are intended to be used with text files); (ii) to make the produced EOP Maven artifacts available (i.e., for users who want to use the EOP as a library in their own code). We use the Artifactory repository manager to store the produced artifacts.
Continuous Integration: The EOP uses Jenkins for Continuous Integration, a software development practice where developers of a team integrate their work frequently (e.g., daily).
Code Quality Tool: Ensuring the quality of the produced software is one of the most important aspects of software engineering. The EOP uses tools like PMD that can be run automatically during development to help developers check the quality of their software.
Project Repository
The EOP Java source code is hosted on the EOP Github repository and managed using Git. The repository consists of three main branches: the release branch contains the code that is supposed to be in a production-ready state, whereas the master branch contains the code to be incorporated into the next release. When the source code in the master branch reaches a stable point and is ready to be released, all of the changes are merged back into release. Finally, the gh-pages branch contains the web site pages.
Licensing
The software of the platform is released under the terms of the General Public License (GPL) version 3. The platform contains both components and resources designed by the EOP developers, as well as others that are well known and freely available in the NLP research community. Additional components and resources whose license is not compatible with the EOP license have to be downloaded and installed separately by the user.
Conclusion
This paper has presented the main characteristics of the Excitement Open Platform, a rich environment for experimenting and evaluating textual entailment systems. On the software side, the EOP is a complex endeavor to integrate tools and resources in Computational Linguistics, including pipelines for three languages, three pre-existing entailment engines, and about two dozen lexical and syntactic resources. The EOP assumes a clear and modular separation between linguistic annotations, entailment algorithms and the knowledge resources which are used by the algorithms. A relevant benefit of the architectural design is that a high level of interoperability is reached, providing a stimulating environment for new research in textual inferences.
The EOP platform has been already tested in several pilot research projects and educational courses, and it is currently distributed as open source software under the GPL-3 license. To the best of our knowledge, the entailment systems and their configurations provided in the platform are the best systems available as open source for the community. As for the future, we are planning several initiatives for the promotion of the platform in the research community, as well as its active experimentation in real application scenarios. | 2015-03-27T18:11:09.000Z | 2014-06-01T00:00:00.000 | {
"year": 2014,
"sha1": "a53815cf0c9cbb54a23b11f73c7d532910afa750",
"oa_license": null,
"oa_url": "http://www.dfki.de/~neumann/publications/new-ps/EOP_Demo_ACL2014.pdf",
"oa_status": "GREEN",
"pdf_src": "ACL",
"pdf_hash": "a53815cf0c9cbb54a23b11f73c7d532910afa750",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
213942003 | pes2o/s2orc | v3-fos-license | Uncertainty-Aware Search Framework for Multi-Objective Bayesian Optimization
We consider the problem of multi-objective (MO) blackbox optimization using expensive function evaluations, where the goal is to approximate the true Pareto set of solutions while minimizing the number of function evaluations. For example, in hardware design optimization, we need to find the designs that trade-off performance, energy, and area overhead using expensive simulations. We propose a novel uncertainty-aware search framework referred to as USeMO to efficiently select the sequence of inputs for evaluation to solve this problem. The selection method of USeMO consists of solving a cheap MO optimization problem via surrogate models of the true functions to identify the most promising candidates and picking the best candidate based on a measure of uncertainty. We also provide theoretical analysis to characterize the efficacy of our approach. Our experiments on several synthetic and six diverse real-world benchmark problems show that USeMO consistently outperforms the state-of-the-art algorithms.
Introduction
Many engineering and scientific applications involve making design choices to optimize multiple objectives. Some examples include tuning the knobs of a compiler to optimize performance and efficiency of a set of software programs; and designing new materials to optimize strength, elasticity, and durability. There are two challenges in solving these kind of optimization problems: 1) The objective functions are unknown and we need to perform expensive experiments to evaluate each candidate design. For example, performing computational simulations and physical lab experiments for compiler optimization and material design applications respectively.
2) The objectives are conflicting in nature and all of them cannot be optimized simultaneously. Therefore, we need to find the Pareto optimal set of solutions. A solution is called Pareto optimal if it cannot be improved in any of the objectives without compromising some other objective. The overall goal is to approximate the true Pareto set while minimizing the number of function evaluations.
Bayesian Optimization (BO) (Shahriari et al. 2016) is an effective framework to solve blackbox optimization problems with expensive function evaluations. The key idea behind BO is to build a cheap surrogate model (e.g., Gaussian Process (Williams and Rasmussen 2006)) using the real experimental evaluations, and employ it to intelligently select the sequence of function evaluations using an acquisition function, e.g., expected improvement (EI). There is a large body of literature on single-objective BO algorithms (Shahriari et al. 2016) and their applications, including hyper-parameter tuning of machine learning methods (Snoek, Larochelle, and Adams 2012; Kotthoff et al. 2017). However, there is relatively less work on the more challenging problem of BO for multiple objectives.
Prior work on multi-objective BO is lacking in the following ways. Many algorithms reduce the problem to singleobjective optimization by designing appropriate acquisition functions, e.g., expected improvement in Pareto hypervolume (Knowles 2006;Emmerich and Klinkenberg 2008). Unfortunately, this choice is sub-optimal as it is hard to capture the trade-off between multiple objectives and can potentially lead to aggressive exploitation behavior. Additionally, algorithms to optimize Pareto Hypervolume (PHV) based acquisition functions scale poorly as the number of objectives and dimensionality of input space grows. PESMO is a stateof-the-art information-theoretic approach that relies on the principle of input space entropy search (Hernández-Lobato et al. 2016). However, it is computationally expensive to optimize the acquisition function behind PESMO. A series of approximations are performed to improve the efficiency potentially at the expense of accuracy.
In this paper, we propose a novel Uncertainty-aware Search framework for optimizing Multiple Objectives (USeMO) to overcome the drawbacks of prior methods. The key insight behind USeMO is a two-stage search procedure to improve the accuracy and computational-efficiency of sequential decision-making under uncertainty for selecting candidate inputs for evaluation. USeMO selects the inputs for evaluation as follows. First, it solves a cheap MO optimization problem defined in terms of the acquisition functions (one for each unknown objective) to identify a list of promising candidates. Second, it selects the best candidate from this list based on a measure of uncertainty. Unlike prior methods, USeMO has several advantages: a) Does not reduce to single objective optimization problem; b) Allows to leverage a variety of acquisition functions designed for single objective BO; c) Computationally-efficient to solve MO problems with many objectives; and d) Improved uncertainty management via two-stage search procedure to select the candidate inputs for evaluation. Contributions. The main contributions of this paper are: • Developing a principled search-based BO framework referred as USeMO to solve multi-objective blackbox optimization problems.
• Theoretical analysis of the USeMO framework in terms of asymptotic regret bounds.
• Comprehensive experiments over synthetic and six diverse real-world benchmark problems to show the accuracy and efficiency improvements over existing methods.
Background and Problem Setup
Bayesian Optimization Framework. Let X ⊆ ℝ^d be an input space. We assume an unknown real-valued objective function F : X → ℝ, which can evaluate each input x ∈ X to produce an evaluation y = F(x). Each evaluation F(x) is expensive in terms of the consumed resources. The main goal is to find an input x* ∈ X that approximately optimizes F via a limited number of function evaluations. BO algorithms learn a cheap surrogate model from training data obtained from past function evaluations. They intelligently select the next input for evaluation by trading off exploration and exploitation to quickly direct the search towards optimal inputs. The three key elements of the BO framework are: 1) Statistical Model of F(x). Gaussian Process (GP) (Williams and Rasmussen 2006) is the most commonly used model. A GP over a space X is a random process from X to ℝ. It is characterized by a mean function µ : X → ℝ and a covariance or kernel function κ : X × X → ℝ. If a function F is sampled from GP(µ, κ), then F(x) is distributed normally N(µ(x), κ(x, x)) for a finite set of inputs x ∈ X.
2) Acquisition Function (AF) to score the utility of evaluating a candidate input x ∈ X based on the statistical model. Some popular acquisition functions include expected improvement (EI), upper confidence bound (UCB), lower confidence bound (LCB), and Thompson sampling (TS). For the sake of completeness, we formally define the acquisition functions employed in this work noting that any other acquisition function can be employed within USeMO.
EI(M, x) = σ(x) [γ Φ(γ) + φ(γ)], with γ = (τ − µ(x)) / σ(x)   (1)
LCB(M, x) = µ(x) − β^(1/2) σ(x)   (2)
UCB(M, x) = µ(x) + β^(1/2) σ(x)   (3)
TS(M, x) = g(x), where g is a function sampled from GP   (4)
where µ(x) and σ(x) correspond to the mean and standard deviation of the prediction from the statistical model, and represent exploitation and exploration scores respectively; β is a parameter that balances exploration and exploitation; GP is the statistical model learned from past observations; τ is the best uncovered input; and Φ and φ are the CDF and PDF of the normal distribution respectively.
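Assuming a GP surrogate that provides a posterior mean and standard deviation at a point, the EI and LCB scores of equations (1) and (2) can be computed as in the sketch below; the function names are illustrative.

import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, tau):
    # EI as in equation (1); tau is the incumbent best value.
    gamma = (tau - mu) / sigma
    return sigma * (gamma * norm.cdf(gamma) + norm.pdf(gamma))

def lower_confidence_bound(mu, sigma, beta):
    # LCB as in equation (2).
    return mu - np.sqrt(beta) * sigma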
3) Optimization Procedure to select the best scoring candidate input according to AF via statistical model, e.g., DIRECT (Jones, Perttunen, and Stuckman 1993).
Multi-Objective Optimization (MOO) Problem. Without loss of generality, our goal is to minimize k ≥ 2 real-valued objective functions F_1(x), F_2(x), · · · , F_k(x) over a continuous space X ⊆ ℝ^d. Each evaluation of an input x ∈ X produces a vector of objective values Y = (y_1, y_2, · · · , y_k) where y_i = F_i(x) for all i ∈ {1, 2, · · · , k}. We say that a point x Pareto-dominates another point x′ if F_i(x) ≤ F_i(x′) ∀i and there exists some j ∈ {1, 2, · · · , k} such that F_j(x) < F_j(x′). The optimal solution of the MOO problem is a set of points X* ⊂ X such that no point x′ ∈ X \ X* Pareto-dominates a point x ∈ X*. The solution set X* is called the Pareto set and the corresponding set of function values is called the Pareto front. Our goal is to approximate X* while minimizing the number of function evaluations.
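The Pareto-dominance relation just defined translates directly into code; the sketch below (minimization convention, as in the text) filters a finite set of evaluated points down to its non-dominated subset.

def dominates(y1, y2):
    # y1 Pareto-dominates y2: no worse in every objective,
    # strictly better in at least one.
    return all(a <= b for a, b in zip(y1, y2)) and \
           any(a < b for a, b in zip(y1, y2))

def pareto_front(points):
    # points: list of (x, (y_1, ..., y_k)) pairs; keep non-dominated entries.
    return [(x, y) for x, y in points
            if not any(dominates(y2, y) for _, y2 in points if y2 is not y)]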
Related work
There is a family of model-based MO optimization algorithms that reduce the problem to single-objective optimization. ParEGO method (Knowles 2006) employs random scalarization for this purpose: scalar weights of k objective functions are sampled from a uniform distribution to construct a single-objective function and expected improvement is employed as the acquisition function to select the next input for evaluation. ParEGO is simple and fast, but more advanced approaches often outperform it. Recently, (Paria, Kandasamy, and Póczos 2019) proposed a scalarization based method focusing on a specialized setting, where preference over objective functions is specified as input. The preference is expressed in terms of the values of the scalars. Many methods optimize the Pareto hypervolume (PHV) metric (Emmerich and Klinkenberg 2008) that captures the quality of a candidate Pareto set. This is done by extending the standard acquisition functions to PHV objective, e.g., expected improvement in PHV (Emmerich and Klinkenberg 2008) and probability of improvement in PHV (Picheny 2015). Unfortunately, algorithms to optimize PHV based acquisition functions scale very poorly and are not feasible for more than two objectives. To improve scalability, methods to reduce the search space are also explored (Ponweiser et al. 2008). A common drawback of this family of algorithms is that reduction to single-objective optimization can be sub-optimal: it is hard to capture the trade-off between multiple objectives and can potentially lead to more exploitation behavior.
PAL (Zuluaga et al. 2013) and PESMO (Hernández-Lobato et al. 2016) are principled algorithms based on information theory. PAL tries to classify the input points based on the learned models into three categories: Pareto optimal, non-Pareto optimal, and uncertain. In each iteration, it selects the candidate input for evaluation towards the goal of minimizing the size of the uncertain set. PAL provides theoretical guarantees, but it is only applicable for input space X with finite set of discrete points. PESMO is a state-of-the-art method based on entropy optimization. It iteratively selects the input that maximizes the information gained about the true Pareto set. Unfortunately, it is computationally expensive to optimize the acquisition function employed in PESMO. Some approximations are performed to improve the efficiency of acquisition function optimization, but can potentially degrade accuracy and result in loss of information-theoretical advantage. MESMO (Belakaria, Deshwal, and Doppa 2019) is a concurrent work based on output space entropy search that improves over PESMO.
In the domain of analog circuit design optimization, (Lyu et al. 2018) developed a technique that conducts an optimization over the posterior means of the GPs using LCB acquisition function. It is an application-specific solution, whereas we show that USeMO generalizes for six diverse application domains including hyper-parameter tuning in neural networks, compiler settings, network-on-chip, and materials design. Additionally, we show consistently better performance using multiple acquisition functions.
Uncertainty-Aware Search Framework
In this section, we provide the details of USeMO framework for solving multi-objective optimization problems. First, we provide an overview of USeMO followed by the details of its two main components. Subsequently, we provide theoretical analysis of USeMO in terms of asymptotic regret bounds.
Overview of USeMO Framework
As shown in Figure 1, USeMO is an iterative algorithm that involves four key steps. First, we build statistical models M_1, M_2, · · · , M_k for each of the k objective functions from the training data in the form of past function evaluations. Second, we select a set of promising candidate inputs X_p by solving a cheap MO optimization problem defined using the statistical models. Specifically, the multiple objectives of the cheap MO problem correspond to AF(M_1, x), AF(M_2, x), · · · , AF(M_k, x) respectively. Any standard acquisition function AF from single-objective BO (e.g., EI, TS) can be used for this purpose. The Pareto set X_p corresponds to the inputs with different trade-offs in the utility space for the k unknown functions. Third, we select the best candidate input x_s ∈ X_p from the Pareto set that maximizes some form of uncertainty measure. Fourth, the selected input x_s is evaluated to get the corresponding function values y_1 = F_1(x_s), · · · , y_k = F_k(x_s). The next iteration starts after the statistical models M_1, M_2, · · · , M_k are updated using the new training example: the input is x_s and the output is (y_1, y_2, · · · , y_k). Algorithm 1 provides the algorithmic pseudocode for USeMO.

Advantages. USeMO has many advantages over prior methods. 1) It provides the flexibility to plug in any acquisition function for single-objective BO. This allows us to leverage existing acquisition functions including EI, TS, and LCB. 2) Unlike methods that reduce to single-objective optimization, USeMO has a better mechanism to handle uncertainty via a two-stage procedure to select the next candidate for evaluation: the Pareto set obtained by solving the cheap MO problem contains all promising candidates with varying trade-offs in the utility space, and the candidate with maximum uncertainty from this list is selected. 3) It is computationally efficient for solving MO problems with many objectives.

Figure 1: Overview of the USeMO framework for two objective functions (k=2). We build statistical models M_1, M_2 for the two objective functions F_1(x) and F_2(x). In each iteration, we perform the following steps. First, we construct a cheap MO problem using the statistical models M_1 and M_2 and an input acquisition function AF: min_{x∈X} (AF(M_1, x), AF(M_2, x)), and employ a cheap MO solver to find the promising candidate inputs in the form of a Pareto set. Second, we select the best candidate input x_s from the Pareto set based on a measure of uncertainty. Finally, we evaluate the functions for x_s to get Y_s = (y_1, y_2) and update the statistical models using the new training example.
Key Algorithmic Components of USeMO
The two main algorithmic components of the USeMO framework are: selecting the most promising candidate inputs by solving a cheap MO problem, and picking the best candidate via uncertainty maximization. We describe their details below.

Selection of promising candidate inputs. We employ the statistical models M_1, M_2, · · · , M_k towards the goal of selecting promising candidate inputs as follows. Given an acquisition function AF (e.g., EI), we construct a cheap multi-objective optimization problem whose objectives are the acquisition functions:

X_p ← min_{x∈X} (AF(M_1, x), AF(M_2, x), · · · , AF(M_k, x))   (5)

Since we present the framework as minimization for the sake of technical exposition, all AFs will be minimized. The Pareto set X_p obtained by solving this cheap MO problem represents the most promising candidate inputs for evaluation.
Each acquisition function AF(M_i, x) is dependent on the corresponding surrogate model M_i of the unknown objective function F_i. Hence, each acquisition function will carry the information of its associated objective function. As iterations progress, using more training data, the models M_1, M_2, · · · , M_k will better mimic the true objective functions F_1, F_2, · · · , F_k. Therefore, the Pareto set of the acquisition function space (solution of Equation 5) becomes closer to the Pareto set of the true functions X* with increasing iterations. Intuitively, the acquisition function AF(M_i, x) corresponding to unknown objective function F_i tells us the utility of a point x for optimizing F_i. The input minimizing AF(M_i, x) has the highest utility for F_i, but may have a lower utility for a different function F_j (j ≠ i). The utility of inputs for evaluation of F_j is captured by its own acquisition function AF(M_j, x). Therefore, there is a trade-off in the utility space for all k different functions. The Pareto set X_p obtained by simultaneously optimizing the acquisition functions for all k unknown functions will capture this utility trade-off. As a result, each input x ∈ X_p is a promising candidate for evaluation towards the goal of solving the MOO problem. USeMO employs the same acquisition function for all k objectives. The main reason is to give equivalent evaluation to all functions in the Pareto front (PF) at each iteration. If we used different AFs for different objectives, the sampling procedure would be different. Additionally, the values of various AFs can have considerably different ranges. Thus, this can result in an unbalanced trade-off between functions in the cheap PF, leading to the same unbalance in our final PF.

Algorithm 1 USeMO Framework
Input: X, input space; F_1(x), F_2(x), · · · , F_k(x), k blackbox objective functions; AF, acquisition function; and T_max, maximum no. of iterations
1: Initialize training data of function evaluations D
2: Initialize statistical models M_1, M_2, · · · , M_k from D
3: for each iteration t = 1 to T_max do
4:   // Solve cheap MO problem with objectives AF(M_1, x), · · · , AF(M_k, x) to get candidate inputs
5:   X_p ← min_{x∈X} (AF(M_1, x), · · · , AF(M_k, x))
6:   // Pick the candidate input with maximum uncertainty
7:   x_s ← arg max_{x∈X_p} U_{β_t}(x)
8:   Evaluate the objective functions at x_s: Y_s ← (F_1(x_s), · · · , F_k(x_s))
9:   Aggregate data: D ← D ∪ {(x_s, Y_s)}
10:  Update models M_1, M_2, · · · , M_k using D
11:  t ← t + 1
12: end for
13: return Pareto set and Pareto front of D
Cheap MO solver. We employ the popular NSGA-II algorithm (Deb et al. 2002) to solve the MO problem with cheap objective functions noting that any other algorithm can be used to similar effect. NSGA-II evaluates the cheap objective functions at several inputs and sorts them into a hierarchy of sub-groups based on the ordering of Pareto dominance. The similarity between members of each sub-group and their Pareto dominance is used by the algorithm to move towards more promising parts of the input space.
Picking the best candidate input. We need to select the best input from the Pareto set X_p obtained by solving the cheap MO problem. All inputs in X_p are promising in the sense that they represent the trade-offs in the utility space corresponding to the different unknown functions. It is critical to select the input that will guide the overall search towards the goal of quickly approximating the true Pareto set X*. We employ an uncertainty measure defined in terms of the statistical models M_1, M_2, · · · , M_k to select the most promising candidate input for evaluation. In the single-objective optimization case, the learned model's uncertainty for an input can be defined in terms of the variance of the statistical model. For the multi-objective optimization case, we define the uncertainty measure as the volume of the uncertainty hyper-rectangle:
U_{β_t}(x) = ∏_{i=1}^{k} (UCB(M_i, x) − LCB(M_i, x)),
where LCB(M_i, x) and UCB(M_i, x) represent the lower confidence bound and upper confidence bound of the statistical model M_i for an input x as defined in equations 2 and 3; and β_t is the parameter value to trade off exploitation and exploration at iteration t. We employ the adaptive rate recommended by (Srinivas et al. 2009) to set the β_t value depending on the iteration number t. We measure the uncertainty volume for all inputs x ∈ X_p and select the input with maximum uncertainty for function evaluation.
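A sketch of this selection step, assuming each surrogate model exposes a predict(x) method returning the posterior mean and standard deviation (an assumed interface): with GP surrogates, each side of the uncertainty hyper-rectangle reduces to 2 sqrt(β_t) σ_i(x).

import numpy as np

def uncertainty_volume(models, x, beta_t):
    # U_{beta_t}(x): product over objectives of (UCB - LCB),
    # i.e., 2 * sqrt(beta_t) * sigma_i(x) per side of the hyper-rectangle.
    vol = 1.0
    for m in models:
        _, sigma = m.predict(x)
        vol *= 2.0 * np.sqrt(beta_t) * sigma
    return vol

def select_candidate(pareto_candidates, models, beta_t):
    # Line 7 of Algorithm 1: pick the candidate with maximum uncertainty.
    return max(pareto_candidates,
               key=lambda x: uncertainty_volume(models, x, beta_t))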
Theoretical Analysis
In this section, we provide a theoretical analysis of the behavior of the USeMO approach. The MOO literature has multiple metrics to assess the quality of a Pareto front approximation. The most commonly employed metrics include the Pareto Hypervolume (PHV) indicator (Zitzler 1999), the R2 indicator, and the epsilon indicator (Picheny 2015). Both the epsilon and R2 metrics are instances of distance-based regret, a natural generalization of the regret measure for single-objective problems. We consider the case of the LCB acquisition function and extend the cumulative regret measure for single-objective BO proposed in the well-known work of Srinivas et al. (2009) to prove convergence results. However, our experimental results show the generality of USeMO with different acquisition functions including TS and EI. Prior work (Picheny 2015) has shown that the R2, epsilon, and PHV indicators show similar behavior. Indeed, our experiments validate this claim for USeMO. We present the theoretical analysis of USeMO in terms of asymptotic regret bounds. Since the point selected in the proof is arbitrary, it holds for all points. Hence, the regret bound can be easily adapted for both the epsilon and R2 metrics. Let x* be a point in the optimal Pareto set X*. Let x_t be a point in the Pareto set X_t estimated by the USeMO approach by solving the cheap MO problem at the t-th iteration. Let R(x*) = ||(R_1(x*), · · · , R_k(x*))|| with R_i(x*) = Σ_{t=1}^{T_max} (F_i(x_t) − F_i(x*)), where || · || is the norm of the k-vector and T_max is the maximum number of iterations. We discuss asymptotic bounds for this measure using GP-LCB as the acquisition function over the input set X. We provide proof details in Appendix 1.

Lemma 1 Given δ ∈ (0, 1) and β_t = 2 log(|X| π² t² / 6δ), the following holds with probability 1 − δ:
|F_i(x) − µ_{t−1,i}(x)| ≤ β_t^{1/2} σ_{t−1,i}(x)   ∀x ∈ X, ∀t ≥ 1, ∀i ∈ {1, · · · , k}

Theorem 1 If X_t is the Pareto set obtained by solving the cheap multi-objective optimization problem at the t-th iteration, then the following holds with probability 1 − δ:
R_i(x*) ≤ √(C T_max β_{T_max} γ^i_{T_max})   ∀i ∈ {1, · · · , k}
where C is a constant and γ^i_{T_max} is the maximum information gain about function F_i after T_max iterations.

Essentially, this theorem suggests that since each term R_i in R(x*) grows sub-linearly in the asymptotic sense, R(x*), which is defined as the norm, also grows sub-linearly. To the best of our knowledge, this is the first work to prove a sub-linear regret for the multi-objective BO setting. We proved this result using the same AF for all objectives, which provides strong theoretical support for this setting and is one of the main reasons that justifies the use of a single AF within the USeMO framework.
Experiments and Results
In this section, we describe our experimental setup and present results of USeMO on diverse benchmarks.
Experimental Setup
Multi-objective BO algorithms. We compare USeMO with existing methods including ParEGO (Knowles 2006), PESMO (Hernández-Lobato et al. 2016), and SMSego. We employ the code for these methods from the BO library Spearmint (https://github.com/HIPS/Spearmint/tree/PESM). We present the results of USeMO with EI and TS acquisition functions (USeMO-TS and USeMO-EI), noting that results show a similar trend with other acquisition functions. We did not include PAL (Zuluaga et al. 2013) as it is known to have similar performance to SMSego (Hernández-Lobato et al. 2016) and works only for finite discrete input spaces. The code for our method is available at github.com/belakaria/USeMO.
Statistical models. We use a GP-based statistical model with a squared exponential (SE) kernel in all our experiments. The hyper-parameters are re-estimated after every 10 function evaluations. We initialize the GP models for all functions by sampling initial points at random from a Sobol grid using the built-in procedure in the Spearmint library. GPs are fitted using normalized objective function values to guarantee that all objectives are within the same range.
Cheap MO solver. We employ the popular NSGA-II algorithm to solve the cheap MO problem, noting that other solvers can be used to similar effect. For NSGA-II, the most important parameter is the number of function calls. We experimented with values varying from 1,000 to 20,000 and noticed that increasing this number does not result in any performance improvement for USeMO. Therefore, we fixed it to 1,500 for all our experiments (a sketch of this step is given below).
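A minimal sketch of the cheap MO solve, assuming the pymoo implementation of NSGA-II (the paper specifies NSGA-II but not a particular library); acq_fns, xl, and xu are placeholders for the k per-objective acquisition functions and the box bounds:

```python
import numpy as np
from pymoo.core.problem import Problem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

class CheapMOProblem(Problem):
    """Wraps the k cheap acquisition functions as one MO problem."""
    def __init__(self, acq_fns, xl, xu):
        super().__init__(n_var=len(xl), n_obj=len(acq_fns), xl=xl, xu=xu)
        self.acq_fns = acq_fns

    def _evaluate(self, X, out, *args, **kwargs):
        # Each acquisition function maps an (n, d) batch to n values.
        out["F"] = np.column_stack([f(X) for f in self.acq_fns])

# X_pareto = minimize(CheapMOProblem(acq_fns, xl, xu),
#                     NSGA2(pop_size=100),
#                     ("n_eval", 1500),  # cap on cheap calls, as in the paper
#                     verbose=False).X
```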
Table 1: Details of synthetic benchmarks: name, benchmark functions, no. of objectives k, and input dimension d.
Synthetic benchmarks. We construct several synthetic multi-objective (MO) benchmark problems using a combination of commonly employed benchmark functions for single-objective optimization and two of the known general MO benchmarks. We provide the complete details of these MO benchmarks in Table 1. Due to space constraints, we present some of the results in the appendix.
Real-world benchmarks. We employed six diverse real-world benchmarks for our experiments.
1) Hyper-parameter tuning of neural networks. Our goal is to find a neural network with high accuracy and low prediction time. We optimize a dense neural network over the MNIST dataset (LeCun et al. 1998). Hyper-parameters include the number of hidden layers, the number of neurons per layer, the dropout probability, the learning rate, and the regularization weight penalties l 1 and l 2. We employ 10K instances for validation and 50K instances for training. We train the network for 100 epochs to evaluate each candidate hyper-parameter setting on the validation set. We apply a logarithm to the error rates due to their small values.
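A hedged sketch of the two expensive objectives for this benchmark; the paper does not include its training code, so train_model and the validation arrays below are placeholders we introduce for illustration:

```python
import math
import time

def objectives(config, train_model, X_val, y_val):
    """Return (log validation error, prediction time) for one configuration."""
    model = train_model(config, epochs=100)      # the expensive step
    error = 1.0 - model.score(X_val, y_val)      # validation error rate
    t0 = time.perf_counter()
    model.predict(X_val)                         # measure prediction time
    pred_time = time.perf_counter() - t0
    # The paper applies a logarithm to the (small) error rates.
    return math.log(error), pred_time
```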
2) SW-LLVM compiler settings optimization. SW-LLVM is a data set with 1024 compiler settings (Siegmund et al. 2012) determined by d=10 binary inputs. The goal of this experiment is to find a setting of the LLVM compiler that optimizes the memory footprint and performance on a given set of software programs. Evaluating these objectives is very costly and testing all the settings takes over 20 days.
3) SNW sorting network optimization. The data set SNW was first introduced by (Zuluaga, Milder, and Püschel 2012). The goal is to optimize the area and throughput for the synthesis of a field-programmable gate array (FPGA) platform. The input space consists of 206 different hardware design implementations of a sorting network. Each design is defined by d = 4 input variables.
4) Network-on-chip (NOC) optimization. The design space of the NoC dataset (Almer, Topham, and Franke 2011) consists of 259 implementations of a tree-based network-on-chip. Each configuration is defined by d = 4 variables: width, complexity, FIFO, and multiplier. We optimize the energy and runtime of application-specific integrated circuits (ASICs) on the Coremark benchmark workload.
5) Shape-memory alloy optimization. The goal is to optimize the thermal hysteresis and transition temperature of alloys. Each design is defined by d = 6 input variables (e.g., atomic size of the alloying elements including metallic radius and valence electron number).
6) Piezo-electric materials (PEM) optimization. PEM is a materials dataset consisting of 704 configurations of Piezoelectric materials (Gopakumar et al. 2018). The goal is to optimize piezoelectric modulus and bandgap of these material designs. Each design configuration is defined by d = 7 input variables (e.g., ionic radii, volume, and density).
Evaluation metrics. We employ two common metrics. The Pareto hypervolume (PHV) metric is commonly employed to measure the quality of a given Pareto front (Zitzler 1999). PHV is defined as the volume between a reference point and the given Pareto front (set of non-dominated points). After each iteration t, we report the difference between the hypervolume of the ideal Pareto front (Y * ) and the hypervolume of the estimated Pareto front (Y t ) for a given algorithm. The R 2 indicator is the average distance between the ideal Pareto front (Y * ) and the estimated Pareto front (Y t ) of a given algorithm (Picheny 2015). The R 2 metric degenerates to the regret metric presented in our theoretical analysis.
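A minimal sketch of the PHV-difference metric, assuming the pymoo hypervolume indicator; Y_star and Y_t are arrays of (minimization) objective vectors for the ideal and estimated Pareto fronts:

```python
import numpy as np
from pymoo.indicators.hv import HV

def phv_difference(Y_star, Y_t, ref_point):
    """PHV(ideal front) - PHV(estimated front), reported after iteration t."""
    hv = HV(ref_point=np.asarray(ref_point, dtype=float))
    return hv(np.asarray(Y_star, dtype=float)) - hv(np.asarray(Y_t, dtype=float))
```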
Results and Discussion
USeMO vs. State-of-the-art. We evaluate the performance of USeMO with different acquisition functions including TS, EI, and LCB. Due to space constraints, we show the results for USeMO with TS and EI, two very different acquisition functions, to demonstrate the generality and robustness of our approach. We also provide more results with the LCB acquisition function in the Appendix. Figure 2 and Figure 3 show the results of all multi-objective BO algorithms including USeMO for synthetic and real-world benchmarks, respectively. We make the following empirical observations: 1) USeMO consistently performs better than all baselines and also converges much faster. For blackbox optimization problems with expensive function evaluations, faster convergence has practical benefits as it allows the end-user or decision-maker to stop early. 2) The rate of convergence of USeMO varies with different acquisition functions (i.e., TS and EI), but both cases perform better than the baseline methods.
3) The convergence rate of PESMO becomes slower as the dimensionality of the input space grows for a fixed number of objectives, whereas USeMO maintains consistent convergence behavior. 4) The performance of ParEGO is very inconsistent. In some cases, it is comparable to USeMO, but it performs poorly in many other cases. This is expected due to random scalarization.
Uncertainty maximization vs. random selection. Recall that USeMO needs to select one input for evaluation from the promising candidates obtained by solving a cheap MO problem. We compare uncertainty maximization and a random selection policy in Figure 4. We observe that uncertainty maximization performs better than the random policy. However, in some cases, the random policy is competitive, which shows that all candidates from the solution of the cheap MO problem are promising and improve efficiency.
Comparison of acquisition function optimization time.
We compare the runtime of acquisition function optimization for different multi-objective BO algorithms including USeMO. We do not account for the time to fit GP models since it is the same for all the algorithms. We measure the average acquisition function optimization time across all iterations. We ran all experiments on a machine with the following configuration: Intel i7-7700K CPU @ 4.20 GHz with 8 cores and 32 GB memory. Table 2 shows the time in seconds for the synthetic benchmarks. We can see that USeMO scales significantly better than the state-of-the-art method PESMO. USeMO is comparable to ParEGO, which relies on scalarization to reduce acquisition optimization to the single-objective BO setting. The time for PESMO and SMSego increases significantly as the number of objectives grows beyond two.
Summary and Future Work
We introduced a novel framework referred to as USeMO to solve multi-objective Bayesian optimization problems. The key idea is a two-stage search procedure to improve the accuracy and efficiency of sequential decision-making under uncertainty for selecting inputs for evaluation. Our experimental results on diverse benchmarks showed that USeMO yields consistently better results than state-of-the-art methods and scales gracefully to large-scale MO problems. Future work includes using USeMO to solve novel engineering and scientific applications (Belakaria et al. 2020).
Theorem 1 If X t is the Pareto set generated by the cheap multi-objective optimization at the t-th iteration, then the following holds with probability 1 − δ:
R(x * ) ≤ ||( √(C T max β Tmax γ¹ Tmax ), · · · , √(C T max β Tmax γ^k Tmax ) )||,
where C is a constant and γ i Tmax is the maximum information gain about F i after T max iterations.
Proof. For the sake of completeness, the cheap multi-objective optimization problem for GP-LCB becomes min x∈X (LCB 1,t (x), LCB 2,t (x), · · · , LCB k,t (x)). Assuming optimality of X t , either there exists an x t ∈ X t such that LCB i,t (x t ) ≤ LCB i,t (x * ), ∀i ∈ {1, · · · , k}, or x * is in the optimal Pareto set X t generated by the cheap MO solver (i.e., x t = x * ). Now, using Lemma 1 for any function F j ,
F j (x t ) − F j (x * ) ≤ 2 β t^{1/2} σ j,t−1 (x t ).   (22)
Inequality (22) is similar to the result of Lemma 5.2 from (Srinivas et al. 2009) in the single-objective BO case. Since j is arbitrary, this is true for each function F j , for all j ∈ {1, 2, · · · , k}.
Further, using Lemma 5.4 from (Srinivas et al. 2009), R j (x * ) ≤ √(C T max β Tmax γ j Tmax ) with probability ≥ 1 − δ. Consequently, the bound for R(x * ) becomes
R(x * ) = ||(R 1 (x * ), · · · , R k (x * ))|| ≤ ||( √(C T max β Tmax γ¹ Tmax ), · · · , √(C T max β Tmax γ^k Tmax ) )||.
The quantity γ i Tmax is employed in many theoretical studies of GP-based optimization, including the well-known work of Srinivas et al. (Srinivas et al. 2009).
Figure 5: Illustration of the Pareto hypervolume difference. The blue points correspond to the Pareto front estimated by a given algorithm, and the gray volume is its corresponding Pareto hypervolume. The red points correspond to the optimal Pareto front. The blue area represents the Pareto hypervolume difference metric for this example.
Figure 6 shows the results of USeMO with the LCB acquisition function and a comparison with the baseline multi-objective BO algorithms. | 2020-03-19T19:49:39.670Z | 2020-04-03T00:00:00.000 | {
"year": 2022,
"sha1": "37fd449778c6596de69a4a483c4a34b17fb1cbda",
"oa_license": null,
"oa_url": "https://ojs.aaai.org/index.php/AAAI/article/download/6561/6417",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "cf85afbcd8086f6368634e220faadb2245703bd0",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
17436164 | pes2o/s2orc | v3-fos-license | Long-term changes in renal function and perfusion in heart failure patients with reduced ejection fraction
Introduction Little is known about the natural course of renal function and renal hemodynamics in heart failure patients with reduced ejection fraction (HFREF). Methods and results We prospectively studied effective renal plasma flow (ERPF) and glomerular filtration rate (GFR) in 73 HFREF patients with 125I-iothalamate/131I-hippuran clearances with a mean follow-up of 34.6 ± 4.4 months. Fifteen percent were female, with age 58 ± 12 years and left ventricular ejection fraction (LVEF) 29 ± 10 %. Baseline GFR was 81 ± 23 mL/min/1.73 m2 and declined 0.6 ± 4.7 mL/min/1.73 m2 per year. Baseline ERPF was 292 ± 83 mL/min/1.73 m2 and declined 4.3 ± 19 mL/min/1.73 m2 per year. Of the baseline variables, older age and high urinary kidney injury molecule-1 were the only variables associated with GFR decline (p < 0.05). Following stepwise backward analysis, only age (p < 0.001) remained significant. In addition, we found an association between change in GFR and changes in ERPF, N-terminal pro-brain natriuretic peptide and renovascular resistance. In the multivariable analysis, only the change in ERPF remained significantly associated with a change in GFR (p < 0.001). Conclusion In this cohort of stable chronic HFREF patients, the average decline in GFR over time was small. The decline of GFR was associated with a higher age and a lower baseline GFR, and was strongly related to changes in renal perfusion. Electronic supplementary material The online version of this article (doi:10.1007/s00392-015-0881-9) contains supplementary material, which is available to authorized users.
Introduction
Both chronic kidney disease (CKD) and worsening renal function are common in heart failure patients [1][2][3] and among the most powerful predictors of morbidity and mortality in this population [4]. However, little is known about the natural course of renal function in heart failure patients and determinants of long-term renal function decline. The cause of renal dysfunction in HFREF is thought to be multifactorial [5,6]. It has been attributed to medication [7], renin-angiotensin-aldosterone system (RAAS) activation [8], sympathetic nervous system (SNS) activation and inflammation. Decreased renal perfusion is likely the key determinant [9], via decreased renal perfusion pressure, an increase in renovascular resistance (RVR), increase in renal venous pressure or all of the above [10]. However, these associations have mostly been described in cross-sectional studies. The limited number of longitudinal studies has mostly focused on acute worsening of renal function, and few data are available on predictors of long-term estimated glomerular filtration rate (GFR) changes in heart failure patients with reduced ejection fraction (HFREF) [11][12][13][14]. All these studies used changes in serum creatinine to estimate GFR, which is considered a surrogate for the functioning kidney tissue. However, creatinine-based renal function estimates are not always accurate in estimating kidney function decline [15] and provide no information on renal hemodynamics.
Using gold standard techniques for measuring renal function, we studied the change in renal function over time and its clinical, biochemical and hemodynamic predictors in patients with heart failure. We previously described the cross-sectional associations. Renal blood flow showed the strongest association with GFR. In turn, N-terminal pro-brain natriuretic peptide (NT-proBNP), plasma renin activity, soluble vascular cell adhesion molecule-1 (sVCAM-1) levels and urinary albumin excretion (UAE) showed the strongest associations with renal blood flow [9]. In the current analysis, we investigated whether these parameters are also associated with long-term renal function decline, measured using radioactive labeled specific renal function tracers.
Patient population
Details on the study design and patient population have been published previously [9]. In brief, 120 clinically stable HFREF patients, with left ventricular ejection fraction (LVEF) <45 % and stable heart failure medication for at least 1 month, underwent renal function measurements using 125 I-iothalamate and 131 I-hippuran clearance techniques at the University Medical Center Groningen, The Netherlands. Blood and urine samples were collected, a physical examination was performed and the patient's history documented. Patients were contacted after 3 years and all investigations were repeated. The study was approved by the ethics committee of the study center, and all subjects gave written informed consent. The study was conducted in accordance with Declaration of Helsinki guidelines.
Renal and cardiac function measurements
Renal function measurements were performed using radioactive labeled tracers, 125 I-iothalamate and 131 I-hippuran, as described previously [16]. This method has an intra- and inter-test variation of 1.9 and 2.9 %, respectively, for GFR. The intra-subject day-to-day CV of effective renal plasma flow (ERPF) is 5.0 % [17]. The filtration fraction was calculated as GFR/ERPF. RVR was calculated as (mean arterial pressure/ERPF) × (1 − hematocrit) and expressed in mmHg/mL/min. GFR and ERPF were corrected for 1.73 m 2 of body surface area, calculated using the Dubois formula. LVEF was determined by nuclear ventriculography.
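The two derived quantities translate directly into code; a simple sketch (our helper names), assuming MAP in mmHg, GFR and ERPF in mL/min, and hematocrit as a fraction:

```python
def filtration_fraction(gfr, erpf):
    """Filtration fraction = GFR / ERPF (dimensionless)."""
    return gfr / erpf

def renovascular_resistance(map_mmhg, erpf, hematocrit):
    """RVR = (MAP / ERPF) * (1 - hematocrit), in mmHg/mL/min."""
    return (map_mmhg / erpf) * (1.0 - hematocrit)

# Illustrative values only (not patient data):
print(filtration_fraction(81.0, 292.0))             # ~0.28
print(renovascular_resistance(90.0, 292.0, 0.40))   # ~0.18 mmHg/mL/min
```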
Laboratory methods
Patients were all in the supine position during renal measurements, and a venous blood sample was drawn 2 h after the start of the measurements. Routine hematology, blood chemistry and urinalysis were performed within an hour of collection. Additional blood and urine samples were immediately centrifuged and stored at -80°C. Urinary markers of renal damage were measured in 24 h urine collections and corrected for urinary creatinine as described previously [18]. A detailed description of the methods and analytical variation is provided in supplement 1.
Follow-up
All patients were asked to return for a follow-up visit between 24 and 36 months after baseline renal function measurements. All measurements performed at baseline were repeated including laboratory analyses, renal function measurements using radioactive labeled tracers and nuclear ventriculography. Adverse events during follow-up were determined via interview and case record extraction. Adverse events included death from any cause, heart transplantation, cardiovascular event (myocardial infarction or primary percutaneous coronary intervention or primary coronary artery bypass grafting) and first hospitalization for worsening heart failure.
Statistical analyses
Continuous data are presented as mean ± standard deviation (SD) when normally distributed, as median and interquartile range when non-normally distributed, and as frequencies and percentages for categorical variables. Differences between groups were tested using Student's t test, Kruskal-Wallis or Chi-square test as appropriate. Linear regression analysis was carried out to determine the association of baseline variables with change in GFR and to test the association of changes in hemodynamic parameters with changes in GFR. Linear regression models with delta variables were corrected for baseline values of the variables of interest. Age and sex were included in all multivariable models. Skewed variables were log-transformed where appropriate. Variables associated in the univariable model at p < 0.1 were included in a stepwise, backward multivariable regression analysis, with a threshold for variable retention of p < 0.1. All reported probability values are two-tailed, and a p value of <0.05 was considered statistically significant. Statistical analyses were performed and graphics created using STATA version 11.0, College Station, TX, USA.
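A hedged sketch of the regression step described above, using Python/statsmodels rather than STATA; the data frame below is synthetic and the column names are ours, not the authors':

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the study data (n = 73).
rng = np.random.default_rng(0)
n = 73
df = pd.DataFrame({
    "age": rng.normal(58, 12, n),
    "sex": rng.integers(0, 2, n),
    "erpf_baseline": rng.normal(292, 83, n),
    "delta_erpf": rng.normal(-4.3, 19, n),
})
df["delta_gfr"] = -0.6 + 0.1 * df["delta_erpf"] + rng.normal(0, 4, n)

# Univariable screen (variables entering at p < 0.1) ...
uni = smf.ols("delta_gfr ~ age", data=df).fit()
# ... then a delta-variable model corrected for baseline values,
# with age and sex forced into the multivariable model.
multi = smf.ols("delta_gfr ~ delta_erpf + erpf_baseline + age + sex",
                data=df).fit()
print(multi.params)
```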
Results
Of the 120 patients included at baseline, 73 returned for follow-up measurements (Fig. 1). The baseline characteristics of the study population are presented in Table 1. In brief, 15 % were female, with a mean age of 58 ± 12 years. The left ventricular ejection fraction (LVEF) was 29 ± 10 %. Most patients had New York Heart Association (NYHA) class II or III heart failure symptoms. All patients were on an angiotensin-converting enzyme inhibitor and/or angiotensin receptor blocker and most were on beta-blocker therapy or aldosterone receptor antagonists.
Baseline GFR was 81 ± 23 mL/min/1.73 m 2 and baseline ERPF was 292 ± 83 mL/min/1.73 m 2 . The mean follow-up time was 34.6 ± 4.4 months. In patients with a complete follow-up, the mean decline in GFR was 0.6 ± 4.7 mL/min/1.73 m 2 per year and ERPF declined 4.3 ± 19 mL/min/1.73 m 2 per year. There was no significant difference in the rate of renal function decline between patients with a GFR below and above 60 mL/min/1.73 m 2 at baseline (p = 0.81). Patients who were lost to follow-up are also presented in Table 1. Patients who died or had a heart transplant during follow-up had a lower blood pressure, GFR, ERPF and filtration fraction, and a higher RVR, UAE and NT-proBNP, and were more often using angiotensin receptor blockers (ARB) or aldosterone receptor antagonists (ARA) compared with patients who completed follow-up. There were no significant differences between patients who completed follow-up and those who were lost to follow-up for other reasons.
Baseline variables
Associations of baseline characteristics and laboratory tests with change in GFR are shown in Table 2. Baseline age, sex, mean arterial pressure, neutrophil gelatinase-associated lipocalin (NGAL) and kidney injury molecule 1 (KIM-1) showed a relation with change in GFR at p < 0.1 (Table 2). Following stepwise backward analysis, only older age (p < 0.001) remained significantly associated with greater GFR decline in a multivariable model.
Changes in hemodynamics and renal perfusion
In general, patients who completed follow-up maintained a relatively stable hemodynamic profile. Changes in LVEF (+3.3 ± 11 %), mean arterial pressure (−0.13 ± 10 mmHg), NT-proBNP [−0.6 (−265 to +250.6) ng/L] and RVR (0.01 ± 0.05 mmHg/mL/min) were modest. A decrease in ERPF and NT-proBNP and an increase in RVR were associated with a decrease in GFR, while LVEF was not (Table 3; Fig. 2). In the multivariable analysis, only change in ERPF remained significantly associated with a change in GFR. In parallel to changes in GFR, an increase in RVR and a decrease in NT-proBNP and LVEF were associated with a decrease in ERPF. In multivariable analysis, only RVR and NT-proBNP remained significantly associated with changes in ERPF (results not shown). Change in mean arterial pressure was not associated with a change in either GFR or ERPF.
Discussion
In the present study of patients with stable HFREF, we found only a small decrease in GFR over a longer period of time, of the order of magnitude also reported as the age-related decline in the general population. Likewise, ERPF decline did not differ much from the age-related decline rate in the general population [19]. Change in GFR was strongly associated with a parallel change in ERPF. Only higher age and lower baseline GFR predicted a greater decline in GFR over time, and none of the tested urinary biomarkers of renal damage or hemodynamic parameters were associated with GFR decline.
Several studies have focused on markers predicting worsening renal function in chronic heart failure, with limited success. The identified risk factors include congestion [20], vascular disease, diuretics, advanced age, left ventricular ejection fraction and worse renal function at baseline [4,7,11]. Furthermore, NGAL and NT-proBNP have been linked to worsening renal function in acute heart failure [21][22][23] and chronic heart failure [24]. However, all these studies used plasma creatinine to estimate GFR and cannot differentiate between changes in hemodynamics and kidney damage. In a previous analysis we demonstrated a strong relation of renal blood flow with GFR in HFREF patients [9].
(Notes to Table 1: normally distributed data are presented as mean ± SD; * skewed data as median (p25-p75). Abbreviations: RR blood pressure, LVEF left ventricular ejection fraction, GFR glomerular filtration rate, ERPF effective renal plasma flow, RVR renovascular resistance, UAE urinary albumin excretion, NT-proBNP N-terminal pro-brain natriuretic peptide, NGAL neutrophil gelatinase-associated lipocalin, KIM-1 kidney injury molecule 1, NAG N-acetyl-b-D-glucosaminidase, ARB angiotensin receptor blocker, ACE angiotensin-converting enzyme. ** p < 0.05 and # p < 0.01 compared with patients with complete follow-up.)
In the current analysis, we found that none of the urinary biomarkers or hemodynamic parameters at baseline could predict renal function decline. Our study may have limited power because of the small change in GFR over time; however, most of the aforementioned studies also demonstrated a limited estimated GFR decline over time, and by using radioactive labeled tracers we can measure small changes in GFR more accurately. We cannot exclude that deceased subjects had a more rapid renal function decline. These subjects did have a lower GFR and ERPF and higher NT-proBNP at baseline; however, tubular damage markers were not elevated in these subjects. What is most remarkable is that they had a high RVR in combination with a low filtration fraction and low blood pressure. This may reflect the kidneys' inability to maintain glomerular perfusion pressure. They were more often on double renin-angiotensin-aldosterone system (RAAS) blockers, which may decrease the filtration fraction by vasodilation of the efferent glomerular arteriole; however, this should cause a decrease in RVR. The high RVR, therefore, must reflect a different mechanism, possibly compromised kidney perfusion by increased venous pressure, sympathetic nerve activation or a decreased number of functioning glomeruli.
In our study, we found that the change in ERPF was the strongest determinant of the change in GFR. In contrast, in healthy individuals, GFR remains relatively stable with moderate changes in renal blood flow [25]. It may be speculated that impaired systemic circulation causes decreased ERPF and, because of impaired intra-renal regulatory mechanisms, a parallel decline in GFR, but it may also imply that both ERPF and GFR are affected by intrarenal hemodynamic changes. Both congestion and reduced cardiac output are thought to influence renal function in heart failure patients. In our study, an increase in NT-proBNP was associated with an increase in ERPF and GFR. This is counterintuitive, since higher NT-proBNP is associated with worsening cardiac function [26]. However, changes in volume status also influence NT-proBNP levels, suggesting that not only congestion, but also hypovolemia causes renal function decline in these patients. Another explanation for the observed relationship is that kidney damage affects both ERPF and GFR. However, many patients showed an increase in ERPF and an associated increase in GFR, which suggests changes in hemodynamics rather than in viable kidney tissue.
This study has several limitations. First, not all patients were able to participate in the second measurement. The deceased patients had worse baseline renal function, lower blood pressure and higher NT-proBNP. Second, we only had two measurements; therefore, we cannot establish whether there is a linear trend over time and cannot account for fluctuations. Furthermore, our study has a modest sample size. The measurements performed, however, are the gold standard for measuring renal function, with a day-to-day variation coefficient of less than 3 % for GFR and 5 % for ERPF. Patients were mostly stable on medication; however, some patients had minor changes in dose or type of medication during follow-up.
Conclusion
In these stable chronic HFREF patients, long-term changes in GFR were small, but strongly related to changes in ERPF. None of the investigated urinary biomarkers and hemodynamic parameters other than baseline GFR and age could predict changes in GFR. This underlines the need for the development of new renal risk markers and demonstrates that changes in GFR are mostly driven by changes in renal hemodynamics in chronic HFREF patients. Intervention trials should investigate whether targeting ERPF may improve GFR and reduce cardiac events and mortality. | 2016-05-12T22:15:10.714Z | 2015-06-30T00:00:00.000 | {
"year": 2015,
"sha1": "98d9830e533d49bdc45a22f587c5e8992661e899",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00392-015-0881-9.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "cde7cd6dee2a1c338741952a45da072b54c25b40",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
36275090 | pes2o/s2orc | v3-fos-license | Integrated optical source of polarization entangled photons at 1310 nm
We report the realization of a new polarization entangled photon-pair source based on a titanium-indiffused waveguide integrated on periodically poled lithium niobate pumped by a CW laser at $655 nm$. The paired photons are emitted at the telecom wavelength of $1310 nm$ within a bandwidth of $0.7 nm$. The quantum properties of the pairs are measured using a two-photon coalescence experiment showing a visibility of 85%. The evaluated source brightness, on the order of $10^5$ pairs $s^{-1} GHz^{-1} mW^{-1}$, associated with its compactness and reliability, demonstrates the source's high potential for long-distance quantum communication.
Introduction
Quantum communication takes advantage of single quantum systems, such as photons, to carry the quantum analog of bits, usually called qubits. Quantum information is encoded on the photon's quantum properties, such as polarization or time-bins of emission [1]. Selecting two orthogonal states spanning the Hilbert space, for instance |H⟩ (horizontal) and |V⟩ (vertical) when polarization is concerned, allows encoding the |0⟩ and |1⟩ values of the qubit. Moreover, quantum superposition makes it possible to create any state |ψ⟩ = α|0⟩ + e^{iφ} β|1⟩, provided the normalization rule |α|² + |β|² = 1 is fulfilled.
Entanglement is a generalization of the superposition principle to multiparticle qubit systems. Pairs of polarization entangled photons (or qubits) can be described by states of the form
|ψ⟩ = (1/√2) (|H⟩₁|V⟩₂ + e^{iφ} |V⟩₁|H⟩₂),   (1)
where the indices 1 and 2 label the two involved photons, respectively. The interesting property is that neither of the two qubits has a definite value. But as soon as one of them is measured, the associated result being completely random, the state of EQ.1 indicates that the other is found to carry the opposite value. There is no classical analog to this purely quantum feature [2]. This particularity of quantum physics is a resource for quantum communication systems such as quantum key distribution [3], quantum teleportation [4], and entanglement swapping [5]. In today's quantum communication experiments, spontaneous parametric down-conversion (SPDC) in non-linear bulk crystals is the common way to produce polarization entangled photons [6,7]. However, since such experiments are getting more and more complicated, they require sources of higher efficiency together with narrower photon bandwidths [5,8]. In addition, as soon as long-distance quantum communication is concerned, the paired photons have to be emitted within one of the telecom windows, i.e. around 1310 nm or 1550 nm [9].
The aim of this work is to unite all of the above mentioned features in a single source based on a titanium (Ti) indiffused periodically poled lithium niobate (PPLN) waveguide. We report for the first time the efficient emission of narrowband polarization entangled photons at 1310 nm, showing the highest quality of two-photon interference (coalescence) ever reported in a similar configuration [10,11]. In the following, we will first describe the principle of the source. Then, we will detail the characterizations leading to the validation of the emitted photon wavelength and associated bandwidth, and to the estimation of the source brightness. Afterwards, we will move on to the interferometric setup designed to evaluate the quality of the quantum properties of the emitted pairs. This experiment amounts to a typical Hong-Ou-Mandel interference involving two photons [12]. We will finally discuss the results taking into account an additional observable, i.e. energy-time entanglement, which depends on the phase-matching condition.
Principle of the polarization entangled photon-pair source
To date, the creation of entangled photon-pairs is usually performed by exploiting spontaneous parametric down-conversion (SPDC) in non-linear bulk or waveguide crystals [1]. The interaction of a pump field (p) with a χ(2) non-linear medium leads indeed, with a small probability, to the conversion of a pump photon into so-called signal (s) and idler (i) photons. Naturally, this process is ruled by conservation of energy and momentum,
ω_p = ω_s + ω_i,   k_p = k_s + k_i + (2π/Λ) u,   (2)
where Λ and u represent, in the specific case of any periodically poled crystal, the poling period and a unit vector perpendicular to the domain grating, respectively. Note that the latter equation is also known as quasi-phase matching (QPM), which allows, compared to birefringent phase-matching in standard crystals, to compensate for dispersion using the associated grating-type k-vector ((2π/Λ) u). Then, by an appropriate choice of Λ, one can quasi-phase match practically any desired interaction within the transparency window of the material. In this work, we take advantage of a Ti-indiffusion waveguide integrated in PPLN, for which the QPM condition has been chosen such that we expect, starting with a pump laser at 655 nm, the generation of pairs of photons at the telecom wavelength of 1310 nm. This way, for single photon counting, we can take advantage of passively-quenched Germanium avalanche photodiodes (Ge-APDs) which do not require any additional gating signal, contrary to the experiments of Refs. [10,11].
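The energy-conservation condition fixes the idler wavelength once the pump and signal wavelengths are known; a one-line check that the 655 nm pump indeed yields degenerate pairs at 1310 nm:

```python
def idler_wavelength_nm(lambda_pump_nm, lambda_signal_nm):
    """Energy conservation: 1/lambda_p = 1/lambda_s + 1/lambda_i."""
    return 1.0 / (1.0 / lambda_pump_nm - 1.0 / lambda_signal_nm)

print(idler_wavelength_nm(655.0, 1310.0))  # -> 1310.0 (degenerate pair)
```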
From the quantum side, since the generation of cross-polarized photons is necessary, the waveguide device has to support both vertical and horizontal polarization modes. Therefore, the well-established Ti-indiffusion technology can be applied for waveguide fabrication and a type-II SPDC process, exploiting the d 24 non-linear coefficient of the material, can be used [10]. Starting from H-polarized pump photons, this process leads, at degeneracy, to the generation of paired photons having strictly identical properties, but with orthogonal polarizations. SPDC ensures a simultaneous emission of the paired photons. After filtering out the remaining pump photons, they can be separated using a 50/50 beam-splitter (BS) whose outputs are labelled a and b. On average, the separation occurs with a probability of 1/2, but when successful, the two possible output states, |H⟩_a|V⟩_b and |V⟩_a|H⟩_b, have equal probabilities, so that the related two-photon state corresponds to the entangled state of EQ.1. Two steps are therefore cascaded for obtaining such a configuration,
|H⟩_p --(η)--> |H⟩_s|V⟩_i --(1/2)--> (1/√2)(|H⟩_a|V⟩_b + e^{iφ}|V⟩_a|H⟩_b),   (3)
where η and η* stand for the efficiencies of the SPDC process and of the entire source, respectively.
Fabrication of the PPLN waveguide, and characterization of the source
To meet our goals, a 3.6 cm long sample with several Ti-indiffused waveguides of various widths (5, 6, and 7 µm) was prepared. The waveguides were fabricated by an indiffusion of 104 nm thick Ti-stripes into a 0.5 mm thick Z-cut LiNbO 3 substrate. Diffusion was performed at 1060 °C for 8.5 hrs. In this configuration, the required poling period for the generation of photon-pairs at the degenerate wavelength of 1310 nm was calculated to be around 6.6 µm. Subsequently, electric field-assisted periodic poling of the whole substrate was done with different periodicities (6.50 to 6.65 µm with steps of 0.05 µm).
The first characterization of the sample concerns SPDC spectra that we measured in the single photon counting regime. At an operating temperature of 70 °C and a pump wavelength of 655 nm, photon-pair emission from 7 µm wide waveguides with different poling periodicities was observed, as shown in Fig.2. Following this, a fine tuning of the temperature up to 72 °C allowed us to obtain both signal and idler photons exactly at the degenerate wavelength of 1310 nm out of a 6.6 µm-period waveguide, as depicted in Fig.3.
The measured bandwidth of those photons is very close to the resolution of our optical spectrum analyzer (0.6 nm). After deconvolution with the monochromator resolution, we estimated the full-width at half-maximum (FWHM) bandwidth to be approximately 0.7 nm. This result is in good agreement with the theoretical bandwidth calculated taking into account the 3.6 cm length of our sample. Moreover, it has already been reported that type-II phase-matching [10,11] leads to much narrower bandwidths than type-0 or type-I phase-matching. For instance, sources based on a proton-exchanged PPLN waveguide [13] and on a bulk KNbO 3 crystal [14] provided photon-pairs at 1310 nm within bandwidths of 40 and 70 nm, respectively. Our type-II PPLN waveguide therefore enables generating narrowband polarization photons. This is a clear advantage for long distance quantum communication since photons are less subject to both chromatic and polarization mode dispersions in optical fibers, preserving the purity of entanglement.
Another important figure of merit is the brightness of the source, i.e. the normalized rate at which the pairs are generated. The commonly accepted brightness unit (s −1 GHz −1 mW −1 ) is defined as the number of pairs produced per second, per GHz of bandwidth, and per mW of pump power. Having a high-brightness source is of particular interest for both laboratory and practical quantum communication experiments since low power, compact, and reliable pump diode lasers are sufficient for obtaining high counting rates. Furthermore, if additional ultra-narrow filtering is necessary for some applications, having a very bright source still enables using commercially available mid-power lasers as pumps [8]. In the configuration of Fig.1, the pair creation rate, N, has been estimated following the loss-independent method introduced in Ref. [13], where N = S a S b / (2 R c ). Here S a,b and R c stand for the single and coincidence counting rates, respectively, when two single photon detectors are placed after the BS in spatial modes a and b. As already mentioned, we employ two passively-quenched Ge-APDs featuring 4% detection efficiencies and 30 kHz dark count rates. These APDs are connected to an AND-gate for coincidence counting. Experimentally, we measured S a ≈ S b ≈ 100 ·10 3 s −1 and R c ≈ 330 s −1 , and we estimated N to be about 1.5 · 10 7 s −1 . Then, taking into account a pump power of 0.4 mW and a bandwidth of 0.7 nm, we conclude our source emits 3 · 10 5 pairs s −1 GHz −1 mW −1 . This high brightness result is mainly due to the waveguide configuration that permits confining the three waves, pump, signal, and idler, over longer distances than in bulk devices. Moreover, the reported brightness is of the same order as those reported in Refs. [10,11] for similar schemes.
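The brightness estimate can be reproduced directly from the quoted numbers; a short sketch converting the 0.7 nm bandwidth at 1310 nm into GHz and normalizing the pair rate:

```python
S_a = S_b = 100e3          # singles rates [1/s]
R_c = 330.0                # coincidence rate [1/s]
N = S_a * S_b / (2 * R_c)  # loss-independent pair rate, ~1.5e7 /s

c = 3.0e8                  # speed of light [m/s]
lam, dlam = 1310e-9, 0.7e-9            # center wavelength and FWHM [m]
dnu_GHz = c * dlam / lam**2 / 1e9      # ~122 GHz
P_mW = 0.4                             # pump power [mW]

brightness = N / (dnu_GHz * P_mW)      # ~3e5 pairs /s /GHz /mW
print(f"N = {N:.2e} /s, brightness = {brightness:.1e} /s/GHz/mW")
```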
Quantum characterization of the source
Obtaining polarization entangled photon-pairs (see EQ.3) requires these two photons to be indistinguishable for any degree of freedom but the polarization, before they reach the BS of Fig.1. Especially, they have to arrive at the BS exactly at the same time, with an accuracy better than their coherence time. However, since lithium niobate is birefringent, the two generated polarization modes do not travel at the same speed along their propagation in the waveguide. Knowing the length L WG and the birefringence of the waveguide, it is easy to calculate the average time delay between the two polarization modes:
Δt = L WG Δn LiNbO3 / c,   (4)
where Δn LiNbO3 corresponds to the difference of the group refractive indices for the two polarization modes, and c the speed of light. The spectrum analysis (see Fig.2 and related discussion) allows estimating a coherence time on the order of
τ_c ≈ 0.44 λ² / (c Δλ) ≈ 3.5 ps,   (5)
assuming a Gaussian spectral profile of FWHM Δλ = 0.7 nm. Comparing these two values indicates there is no temporal overlap at the beam-splitter for the generated photons, making them distinguishable. This leads to a separable two-photon state after the beam-splitter. In this context, a birefringent crystal placed between the waveguide and the beam-splitter would be necessary to compensate for the propagation time mismatch and to recover the desired entangled state at the output of the source. Without such a compensation crystal, a standard Bell test based on two polarization analyzers and a suitable coincidence detection apparatus cannot be employed to characterize entanglement [6,10]. Provided these two photons are turned indistinguishable, it is nevertheless possible to infer the potential amount of entanglement by making them coalesce at a BS using a HOM type setup [12]. Indistinguishability means that the photons have the same wavelength, bandwidth, polarization state, and spatial mode. In this case, if the two photons enter the BS through different inputs at the same time, the destructive interference makes them exit the device through the same output. Consequently, no coincidences are expected when two detectors are placed at the output of the BS. Our interferometric apparatus, made of both free space and fiber-optics components, is depicted in Fig.4. Contrary to Fig.1, a polarization beam splitter (PBS) is used to separate the paired photons into two spatial modes according to their polarization states (H,V). A motorized retroreflector and two polarization controllers are employed to erase any temporal and polarization distinguishability before the two photons reach the 50/50 BS. Two Ge-APDs connected to an AND-gate permit recording the single and coincidence rates as a function of the path length difference, which is adjusted thanks to the retroreflector. A so-called HOM dip in the coincidence rate is expected when the arrival times of the photons at the BS are identical. Here, two parameters are of interest. On the one hand, the visibility (or depth), which depends on any experimental distinguishability, is the figure of merit which is linked to the quality of the entangled state produced by the source of Fig.1. On the other hand, the width of the dip is directly related to the coherence time of the single photons [12]. Fig.5 exhibits the coincidence rate as a function of the path length difference between the two arms and clearly shows a HOM interference, while single photon detection remains constant in both APDs. The net visibility, i.e. when noise is discarded, is about 85% when the two photons' characteristics are carefully adjusted to be identical.
To our knowledge, this result is the best ever reported for similar configurations, i.e. waveguide-based sources emitting polarization entangled photons at telecom wavelengths [10,11]. Moreover, when the distortion of the dip is taken into account (see discussion in the next section), the full width at half maximum is estimated to be 1.5 mm. According to Ref. [12], this corresponds to a coherence time of 3.5 ps for the single photons, which is in good agreement with the value previously obtained in EQ.5.
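A simple Gaussian model of the dip (not the authors' fit) reproduces these two figures of merit; R0, the visibility V, and the FWHM are taken from the values quoted above:

```python
import numpy as np

R0, V, fwhm_mm = 330.0, 0.85, 1.5
w = fwhm_mm / (2.0 * np.sqrt(np.log(2)))   # Gaussian width from the FWHM

def coincidence_rate(dx_mm):
    """R_c(dx) = R0 * [1 - V * exp(-(dx/w)^2)] for path difference dx."""
    return R0 * (1.0 - V * np.exp(-(dx_mm / w) ** 2))

print(coincidence_rate(0.0))    # dip minimum: R0*(1-V) ~ 49.5 /s
print(coincidence_rate(10.0))   # far from overlap: ~ R0
```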
Discussion
As we can see in Fig.5, the HOM dip is noticeably distorted, and the bump on the right of the dip becomes more important as we detune the two photons' central wavelengths away from degeneracy. This effect is shown in Fig.6, where we observe an overall decreasing visibility as the two photons' wave-functions overlap less and less in terms of wavelength. Here we have to take into account another observable in the description of our two-photon state to explain the observed beating in the dips of Figs. 5 and 6. More precisely, it is worth noting that CW SPDC naturally provides, in addition to any other observable, energy-time entangled photons.
It is known that the singlet Bell state |ψ−⟩, i.e. a state of the form
|ψ−⟩ = (1/√2) (|H⟩₁|V⟩₂ − |V⟩₁|H⟩₂),
is the only Bell state that gives rise to a coincidence peak when submitted to a HOM experiment, due to symmetry considerations. Such a state plays a crucial role in teleportation-like experiments where the BS acts as a Bell state measurement apparatus [4,5]. Note that the three other Bell states that constitute the Bell basis lead to a HOM dip instead [15]. In our source, the polarization modes of signal and idler photons are always associated with their wavelengths, as shown by the quasi-phase matching curve of Fig.3. This no longer holds when the two photons are degenerate, since they are produced in an entangled state of the form
|ψ⟩ ∝ ∫ d(δω) [ |H⟩_{ωp/2+δω}|V⟩_{ωp/2−δω} + e^{i 2δω δx/c} |V⟩_{ωp/2+δω}|H⟩_{ωp/2−δω} ],   (6)
where the frequency δω spans a bandwidth corresponding to the 0.7 nm obtained in SEC.3, and δx/c is the relative difference in arrival time between signal and idler photons at the BS [16]. It therefore follows that, for a particular value δx−, e^{i 2δω δx−/c} = −1, making EQ.6 become a |ψ−⟩ state, which is not fully reachable in our specific configuration. As a result, we believe the beating in the coincidence rate can be seen as the sum of a bump signature and a dip signature as a function of δx for a given δω. The overlapping part of our produced two-photon state with |ψ−⟩ is responsible for the bump, the remaining non-overlapping part being responsible for the dip. We can therefore conclude that the phase-matching condition in our waveguide gives rise to a state that partially overlaps with the |ψ−⟩ state for the energy-time observable [17]. Fig.6 clearly indicates that tuning the emitted wavelengths does change the overlap of the actual created two-photon state with |ψ−⟩. This means that the wavelengths of the photons have to be perfectly controlled to avoid such an effect, which is moreover not an issue in typical quantum communication based on polarization entangled qubits. In any case, we have a clear signature that a high quality of polarization entanglement can be expected from the setup of Fig. 1. Performing a Bell test experiment, together with a compensation crystal, would be a next step to properly characterize the entanglement created by our source.
Finally, note that Okamoto and co-workers reported the engineering of a partial |ψ−⟩ state at degeneracy using high-order phase dispersion in a bandpass filter placed on the path of one of the two photons [18]. They observed results comparable to those of Fig.5, i.e. an asymmetry in their HOM dip. In our case, however, this possibility has to be excluded since both photons go through the bandpass filter, cancelling the dispersion effect. Moreover, a run without the filter led to a similarly shaped dip with a lower visibility due to higher background noise.
Work is currently in progress to address this interesting feature of the source.
Conclusion and prospects
Using a type-II PPLN waveguide, we have demonstrated a narrowband and bright source of cross-polarized paired photons emitted at 1310 nm within a bandwidth of 0.7 nm. We estimated the normalized production rate to be on the order of 10 5 pairs/s/GHz/mW, which is one of the best ever reported for similar configurations [10,11]. Furthermore, using a HOM-type setup, we obtained an anti-coincidence visibility of 85%, indicating a high level of photon indistinguishability. To our knowledge, this visibility is the best ever reported for similar configurations. These results, together with the compactness and reliability of the source, make it a high-quality generator of polarization entangled photon-pairs, for the first time at 1310 nm. This work clearly highlights the potential of integrated optics to serve as key elements for long-distance quantum communication protocols.
Fig. 3. QPM curve as a function of the temperature for Λ = 6.60 µm. The degeneracy point can be reached by fine tuning of the temperature up to 72 °C. Note that before degeneracy, the longest wavelength is associated with the V polarization mode and the shortest with the H polarization, and vice-versa beyond degeneracy. The straight line is a guide for the eye.
Fig. 4. Two-photon interference experiment. The two polarization modes are first separated using a polarization beam-splitter (PBS). A retroreflector (R) placed in one arm is employed to adjust the relative delay of the two photons. After being coupled into single mode optical fibers, these photons are recombined at a 50/50 coupler (BS) where quantum interference occurs. Note that both polarization modes are adjusted to be identical using fiber-optics polarization controllers (PC) in front of the coupler. The overall losses of the interferometer were estimated to be 5.5 dB.
Fig. 6. Coincidence rate at the output of the 50/50 beam-splitter as a function of the relative length of the two arms for various phase matching conditions leading to photons near degeneracy (Δλ = λ H − λ V ≤ 0.7 nm). It is then interesting to note the decrease of the overall visibility, from (a) to (d), as the single photon wavelengths are tuned away from degeneracy by an increase of the crystal temperature from 72 to 73 °C. | 2009-03-12T17:02:24.000Z | 2009-01-19T00:00:00.000 | {
"year": 2009,
"sha1": "a7b1eeb7e00b536ebdbee960cac5a0d7dc4a41de",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1364/oe.17.001033",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "79879c00fb6dcb8c3e8c77cc5ab11b36e86b45af",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
118982992 | pes2o/s2orc | v3-fos-license | Intermediate phase in the spiral antiferromagnet Ba_2CuGe_2O_7
The magnetic compound Ba_2CuGe_2O_7 has recently been shown to be an essentially two-dimensional spiral antiferromagnet that exhibits an incommensurate-to-commensurate phase transition when a magnetic field applied along the c-axis exceeds a certain critical value H_c. The T=0 dynamics is described here in terms of a continuum field theory in the form of a nonlinear sigma model. We are thus in a position to carry out a complete calculation of the low-energy magnon spectrum for any strength of the applied field throughout the phase transition. In particular, our spin-wave analysis reveals field-induced instabilities at two distinct critical fields H_1 and H_2 such that H_1<H_c<H_2. Hence we predict the existence of an intermediate phase whose detailed nature is also studied to some extent in the present paper.
I. INTRODUCTION
A recent experimental investigation 1-5 of the magnetic properties of Ba 2 CuGe 2 O 7 in its low-temperature phase (T < T N = 3.2 K) established the occurrence of spiral antiferromagnetic order due to a Dzyaloshinskii-Moriya (DM) anisotropy 6,7 . A schematic illustration of the spiral abstracted from experiment may be found in Fig. 5 of Ref. 1. It was further demonstrated that a Dzyaloshinskii-type 8 commensurate-incommensurate (CI) phase transition is induced by a magnetic field H applied along the c-axis. As the field approaches a critical value H c ≈ 2 T, the spiral is highly distorted while its period (pitch) grows to infinity. For H > H c the ground-state configuration is thought to degenerate into a uniform spin-flop state. This phase transition is similar to the cholesteric-nematic transition induced by an external magnetic field in liquid crystals [9][10][11] .
It is of obvious interest to describe theoretically the magnon excitations measured by inelastic neutron scattering 5 , but progress has been hindered by the great formal complexity of the calculation. Here we explore a new approach in which the original discrete system is replaced by a continuum field theory. We are thus able for the first time to carry out a complete calculation of the low-energy excitation spectrum for any strength of the applied field and any direction of spin-wave propagation. In addition, our analysis reveals the existence of a new intermediate phase whose properties we examine and compare with experiment.
In Sec. II the low-energy dynamics is described in terms of a nonlinear σ model that is compatible with symmetry. In Sec. III we present a brief demonstration of the conventional CI transition which will provide the basis for all subsequent work. The complete field theory is first applied in Sec. IV for an analytical calculation of the field dependence of the magnon spectrum in the high-field commensurate phase. Interestingly, the uniform spin-flop state is shown to be locally stable only for H > H 2 > H c where the new critical field H 2 is predicted to be equal to 2.9 T. A first contact with the measured spectrum is also made in Sec. IV.
The main thrust of our calculation is presented in Sec. V where the determination of the magnon spectrum in the low-field spiral phase is reduced to a quasi-one-dimensional band structure problem that is solved numerically. While an earlier calculation 5 of the spectrum at H = 0 is confirmed, we are also in a position to analyze existing experimental data at nonzero field and to predict the results of possible future experiments. A byproduct of this analysis is yet another critical field H 1 = 1.7 T < H c beyond which the flat spiral ceases to be locally stable. Therefore, the combined results of Secs. IV and V suggest the existence of an intermediate phase in the field region H 1 < H < H 2 whose nature is studied in Sec. VI where we show that a nonflat spiral becomes energetically favorable. The main results are summarized in the concluding Sec. VII, while discussion of some technical issues is relegated to two Appendices.
II. LOW-ENERGY DYNAMICS
The unit cell of Ba 2 CuGe 2 O 7 is partially illustrated in Fig. 1 where we display only the magnetic Cu sites. The lattice constants are a = b = 8.466 Å and c = 5.445 Å. Since the Cu atoms form a perfect square lattice within each plane, with lattice constant d = a/√2 ≈ 6 Å, it is also useful to consider the orthogonal axes x, y and z obtained from the original crystal axes a, b and c by a 45° azimuthal rotation. The complete magnetic lattice is formally divided into two sublattices labeled by A and B because the major spin interaction between nearest in-plane neighbors is antiferromagnetic. In contrast, the interaction between out-of-plane neighbors is ferromagnetic and weak 1 . Therefore, the interlayer coupling is not crucial for our purposes and is thus ignored in the following discussion, which concentrates on the 2D spin dynamics within each layer. The space group of this crystal is D^3_{2d} or P-42_1m and imposes significant restrictions on the possible types of spin interactions. Such symmetry constraints underlie most of the earlier work 1-5 but were not spelled out in sufficient detail. We have thus found it necessary to carry out afresh a complete symmetry analysis, including both nearest-neighbor (nn) and next-nearest-neighbor (nnn) couplings. For the moment, we restrict attention to nn interactions and write the 2D spin Hamiltonian as the sum of four terms:
W = W_0 + W_DM + W_A + W_Z.   (2.1)
Here
W_0 = Σ_{<kl>} J_kl S_k · S_l   (2.2)
describes the isotropic exchange over nn in-plane bonds, denoted by <kl>, with J_kl = J for all such bonds. Similarly,
W_DM = Σ_{<kl>} D_kl · (S_k × S_l)   (2.3)
stands for antisymmetric DM anisotropy, where the vectors D_kl assume four distinct values, D_I, D_II, D_III and D_IV (Eq. (2.4)), which are distributed over the 2D lattice as shown in Fig. 2, where nn bonds are accordingly labeled by I, II, III or IV. Here D and D′ are two independent scalar constants, while e_1, e_2 and e_3 are unit vectors along the x, y and z axes of Fig. 1. It should be noted that the z-components of the DM vectors alternate in sign on opposite bonds, a feature that could lead to weak ferromagnetism.
FIG. 2. Illustration of the dimerization process on a finite portion of the 2D lattice cut along the axes x and y. The indices α and β advance along the crystal axes a and b, not shown in this figure. The meaning of the Roman labels (I-IV) on bonds connecting nn sites is explained in the text.
No such alternation occurs for the in-plane components of the DM vectors (2.4), which are responsible for the observed spiral magnetic order or helimagnetism.
The third term in Eq. (2.1) contains all "symmetric" anisotropies. Since single-ion anisotropy is not possible in this spin s = 1/2 system, the most general form of W_A is
W_A = Σ_{<kl>} Σ_{ij} G^{ij}_{kl} S^i_k S^j_l,   (2.5)
where the indices i and j are summed over three values corresponding to the Cartesian components of the spin vectors along the axes x, y and z. Accordingly, G_kl = (G^{ij}_{kl}) are 3 × 3 symmetric matrices, one for each bond <kl>. Again, there exist four distinct such matrices, G_I, ..., G_IV (Eq. (2.6)), which are all expressed in terms of the four scalar parameters K_1, K_2, K_3 and K_4. The latter may be further restricted by the trace condition K_1 + K_2 + K_3 = 0 because the isotropic component of the exchange interaction is already accounted for by Eq. (2.2). Finally,
W_Z = − Σ_k H · S_k   (2.7)
describes the usual Zeeman interaction with an external field H (measured here in energy units). The discrete Hamiltonian could be employed to analyze this system by standard spin-wave techniques, but the calculational burden is rather significant and has so far prevented a complete determination of the magnon spectrum 5 . Nevertheless, the relevant low-energy dynamics can be efficiently calculated in terms of a continuum field theory which provides a reasonable approximation for Ba 2 CuGe 2 O 7 because the period of the observed spiral is equal to about 37 lattice constants along the x-direction. A similar approach is often invoked in the related subject of weak ferromagnetism 12,13 and can be implemented by a straightforward step-by-step procedure starting from the original discrete Hamiltonian 14,15 .
The first step is to group spins into dimers as shown in Fig. 2. Each dimer contains a pair of spins denoted by A and B and labeled by a common set of sublattice indices α and β that advance along the crystal axes a and b. A more convenient set of variables is given by the "magnetization" m and the "staggered magnetization" n, defined as
m = (S_A + S_B)/(2s),  n = (S_A − S_B)/(2s),
which satisfy the classical constraints m · n = 0 and m² + n² = 1. We also introduce rescaled space-time variables x, y and τ, where ε is a dimensionless scale whose significance will become apparent as the discussion progresses. The final result will be stated in terms of the coordinates along the x and y axes of Fig. 1. One should keep in mind that actual distances are given by xd/ε and yd/ε, where d = a/√2 is the lattice constant of the square lattice formed by the Cu atoms. Finally, we introduce rescaled anisotropy constants and magnetic field, where we display only those combinations of constants that survive in the effective low-energy dynamics. In particular, the constant K_4 does not appear to leading order. The further notational abbreviations will prove convenient in all subsequent calculations. Now, a consistent low-energy expansion is obtained by treating m as a quantity of order ε while n is of order unity. To leading order, the classical constraints reduce to
m · n = 0,  n² = 1,   (2.13)
m is expressed entirely in terms of n by Eq. (2.14), and the T = 0 dynamics of the staggered magnetization n is governed by the Lagrangian density L = L_0 − V, with L_0 and V given by Eq. (2.15). The dot denotes differentiation with respect to the time variable τ, ∂_1 and ∂_2 are partial derivatives with respect to x and y, and (n_1, n_2, n_3) are the Cartesian components of n along the axes xyz of Fig. 1. Consistency requires that all physical predictions derived from Eqs. (2.14) and (2.15) must be independent of the specific choice of the scale parameter ε. This fact will be explicitly demonstrated or used to advantage in the continuation of the paper.
We have further examined possible modifications of the low-energy dynamics due to nnn spin interactions along the diagonals of the Cu plaquettes. Our symmetry analysis revealed that both antisymmetric (DM) and symmetric anisotropies are present over nnn bonds and introduce a new set of parameters. Nevertheless, in the continuum limit, all new parameters merge with those already present in the Lagrangian (2.15). The implied remarkable rigidity of the effective low-energy spin dynamics is obviously due to the special crystal structure of Ba 2 CuGe 2 O 7 .
In the remainder of this section we make contact with the static energy functional derived by Zheludev et al. 5 , restricted to T = 0, which appears to differ in some respects from the potential V of Eq. (2.15). First, we note that we have omitted from the potential some additive field-dependent constants which play no role except to relate the energy to the magnetization. The latter will be obtained in Sec. III by a direct application of Eq. (2.14). A more interesting point concerns the special choice of exchange anisotropy made in Ref. 5, which was suggested by the work of Kaplan 16 and of Shekhtman, Entin-Wohlman and Aharony 17 , and is referred to as the KSEA anisotropy. If the original perturbative derivation of the antisymmetric DM interaction 7 is carried to second order 17 , a symmetric anisotropy results that is described by a special case of the matrices (2.6), in addition to a simple renormalization of the exchange constant J. The parameter κ_0 of Eq. (2.11) is then given by κ_0 = λ² − λ′², and the parameter κ of Eq. (2.12) vanishes. Since a nonzero κ is allowed by symmetry, we shall keep it throughout our theoretical development. However, our numerical demonstrations will also be restricted to the KSEA limit (κ = 0). Finally, the term (h × d_z) · n in the potential V of Eq. (2.15) is absent from the energy functional of Zheludev et al. 5 . A contribution of that nature is present in the early work of Andreev and Marchenko 12 and plays a significant role in various aspects of weak ferromagnetism 15 . This term vanishes when the field is applied along the c-axis (h × d_z = 0) and thus does not affect the analysis of the CI transition. However, such a term is important in the case of an in-plane magnetic field, which is also of experimental interest 3 and is briefly discussed in the concluding paragraph of Sec. III.
III. GROUND STATE
An important first step in the calculation of the T = 0 dynamics is the search for the classical spin configuration that minimizes the static energy

W = ∫ V dx dy,    (3.1)

where V is the potential of Eq. (2.15). For a field applied along the c-axis, h = (0, 0, h), the potential takes the form given in Eq. (3.2), which depends only on the parameter λ that measures the strength of the in-plane component of the DM anisotropy, and the combination of parameters

γ² = κ + λ² + h²    (3.3)

that includes the external field h. A notable feature of the potential (3.2) is its invariance under the simultaneous transformations of Eq. (3.4). This is a peculiar realization of U(1) symmetry in that the usual 2D rotation of spatial coordinates with an angle ψ_0 is followed by an azimuthal rotation of the staggered magnetization with an angle −ψ_0. The minimization problem was extensively studied in the earlier work 1-5 . Here we briefly describe a slightly simplified version of the obtained solution in order to establish convenient notation for our subsequent dynamical calculations. If we invoke the usual spherical parametrization of the unit vector n defined from

n_1 + i n_2 = sin Θ e^{iΦ},    n_3 = cos Θ,    (3.5)

the minimum of the energy is sought in the form of the one-dimensional (1D) Ansatz

Θ = θ(x),    Φ = 0,    (3.6)

which assumes that the staggered magnetization is confined in the xz-plane and depends only on the spatial coordinate x, modulo a U(1) transformation given by Eq. (3.4). The potential (3.2) then simplifies to

V = ½ (θ′ − λ)² + ½ γ² cos²θ,    (3.7)

where the prime denotes differentiation with respect to x, and stationary points of the energy (3.1) satisfy the ordinary differential equation θ″ + γ² cosθ sinθ = 0, whose distinct feature is that it does not depend on λ. A first integral of this equation is given by θ′² − γ² cos²θ = C = δ², where we anticipate the fact that the minimum of the energy is achieved at positive integration constant C. Thus the desired solution Θ = θ(x) is given by the implicit equation

x = ∫₀^θ dϑ / √(δ² + γ² cos²ϑ)    (3.8)

and is a monotonically increasing function of x. The corresponding spin structure repeats itself when θ is changed by an amount 2π; i.e., when x advances by a distance

L = 4 ∫₀^{π/2} dθ / √(δ² + γ² cos²θ),    (3.9)

which will be called the period of the spiral. The free parameter δ is determined by the requirement that the average energy density w = (1/L) ∫₀^L V dx is a minimum, where V is the potential (3.7) calculated for the specific configuration (3.8). A direct computation shows that δ must satisfy the algebraic equation

(2/π) ∫₀^{π/2} √(δ² + γ² cos²θ) dθ = λ,    (3.10)

and the corresponding energy density is

w = ½ (λ² − δ²).    (3.11)

The configuration described above will be referred to as the flat spiral because the staggered magnetization is confined in the xz-plane. It is clear that the root δ of Eq. (3.10) decreases with increasing γ. In fact, δ vanishes at a critical value of γ which is easily calculated by setting δ = 0 in Eq. (3.10) to obtain γ = γ_c = λπ/2. In view of Eq. (3.3), the corresponding critical field is given by

h_c = [(λπ/2)² − λ² − κ]^{1/2},    (3.12)

and a spiral state is possible only for h < h_c. At the critical point, the energy density (3.11) becomes w = λ²/2 and is equal to the energy of the uniform spin-flop state n = (1, 0, 0). The latter is a stationary point of the energy functional for any strength of the applied field and is thought to be the absolute minimum for h > h_c. The actual stability of the spin-flop state for h > h_c, and of the spiral state for h < h_c, will be addressed more carefully in Secs. IV and V. Next we calculate the T = 0 magnetization m = (m_1, m_2, m_3), which can be obtained from Eq. (2.14)
applied for the static configuration n = (sin θ, 0, cos θ) and averaged over the period L of the spiral. The only term that survives in the average can be expressed in terms of quantities already considered; the result is given in Eq. (3.14). For h > h_c, the spin-flop state n = (1, 0, 0) is inserted in Eq. (2.14) to yield, after a trivial computation, the magnetization of Eq. (3.15). The latter formula is the only place where the oscillating component of the DM anisotropy appears and produces a field-independent weak ferromagnetic moment along the y-axis.
In order to make definite quantitative predictions we use as input 5 the spin value s = 1/2, an exchange constant J = 0.96 meV, and a gyromagnetic ratio g = g_c = 2.474 for a field applied along the c-axis. Concerning anisotropy, we adopt the KSEA limit (κ = 0) and thus the only relevant parameter is λ, which may be estimated from the observed spin rotation by an angle Δθ ≡ 2πζ over a distance d = a/√2 along the x-axis. The incommensurability parameter ζ is related to the period L of Eq. (3.9) by ζ = ε/L, where ε is the scale parameter introduced in Eq. (2.9). One may actually choose the free parameter ε as ε = D/J, and thus λ ≡ 1. At zero field, Eq. (3.10) is applied for λ = 1 = γ to yield δ² = 0.53189772, and the period is calculated from Eq. (3.9) as L = 6.49945169. Hence, ε = ζL = 0.1774, where we have also used the value ζ = 0.0273 measured at zero field 5 . To summarize, our final choice of constants is

λ = 1,    κ = 0,    ε = 0.1774,    (3.16)

and should be completed with the stipulation that the unit of field (h = 1) corresponds to 2s√2 εJ/g_c μ_B = 1.682 T, while the unit of frequency (energy) is 2s√2 εJ = 0.241 meV. The constants (3.16) are inserted in Eq. (3.12) to yield a critical field h_c = 1.21 in rationalized units, or H_c = 2.04 T in physical units. This theoretical prediction is consistent with experiment and is thought to be a good indication that the KSEA limit (κ = 0) may provide an accurate description of anisotropy 5 . Now, Eqs. (3.10) and (3.9) are applied with λ = 1 and γ² = 1 + h² to yield the root δ = δ(h) and the period L = L(h) at field h. The field dependence of the energy density computed from Eq. (3.11) for h < h_c, and w = 1/2 for h > h_c, is depicted by a solid line in Fig. 3a. Similarly, the field dependence of the incommensurability parameter ζ = ζ(h) is calculated from

ζ(h) = ζ(0) L(0)/L(h),    (3.17)

where ζ(0) and L(0) are the zero-field parameters already discussed, and is depicted by a solid line in Fig. 3b. The results of Fig. 3 will be completed and further discussed in Sec. VI. The same numerical data may be employed in Eqs. (3.14) and (3.15) to calculate the field dependence of the magnetization and the corresponding susceptibility. Finally, we return to the U(1) transformation (3.4), which may be applied to the special solution (3.6) to yield the family of degenerate ground-state configurations of Eq. (3.18), where ψ_0 is an arbitrary angle. The propagation vector of the resulting spiral forms an angle ψ_0 with the x-axis, while the normal to the spin plane forms an angle π/2 − ψ_0 with the same axis. For the special rotation ψ_0 = π/4, the magnetic propagation vector and the normal to the spin plane are parallel (screw-type spiral). This symmetry operation is the basis for the bisection rule discovered by Zheludev et al. 3 when the external field is applied in a direction perpendicular to the c-axis, at an angle χ_0 with respect to the x-axis. The normal to the spin plane rotates almost freely to align with the external field, and thus χ_0 = π/2 − ψ_0, in order to minimize (eliminate) the positive term (n · h)² in the potential (2.15). The new term (h × d_z) · n in the above potential does not affect the bisection rule but it does modify the profile of the spiral. For example, when the field is applied along the y-axis, the potential takes the form of Eq. (3.19), where γ² = κ + λ² is now field independent. Nevertheless, the external field reappears in a different form and requires a new calculation of the spiral based on Eq. (3.19).
Such a calculation might actually explain the observed (weak) field dependence of the magnitude of the magnetic propagation vector 3 and provide an estimate for the strength λ ′ (or D ′ ) of the oscillating component of the DM anisotropy.
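For readers who wish to reproduce the flat-spiral numbers quoted above, the following short Python sketch solves Eq. (3.10) for δ and evaluates Eqs. (3.9) and (3.12). It is our own illustration, not code from the original work, and it assumes the reconstructed forms of these equations given earlier.

```python
# Minimal sketch (not from the original work) that reproduces the flat-spiral
# numbers of Sec. III, assuming the reconstructed Eqs. (3.9), (3.10) and (3.12).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

lam, kappa = 1.0, 0.0                      # KSEA limit in rationalized units

def eq_310(delta, gamma):
    # (2/pi) * int_0^{pi/2} sqrt(delta^2 + gamma^2 cos^2 t) dt - lam
    f = lambda t: np.sqrt(delta**2 + gamma**2 * np.cos(t)**2)
    return (2.0 / np.pi) * quad(f, 0.0, np.pi / 2)[0] - lam

def period_39(delta, gamma):
    # L = 4 * int_0^{pi/2} dt / sqrt(delta^2 + gamma^2 cos^2 t)
    f = lambda t: 1.0 / np.sqrt(delta**2 + gamma**2 * np.cos(t)**2)
    return 4.0 * quad(f, 0.0, np.pi / 2)[0]

gamma0 = np.sqrt(kappa + lam**2)           # zero field: gamma^2 = kappa + lam^2 + h^2
delta0 = brentq(eq_310, 1e-8, 5.0, args=(gamma0,))
print(delta0**2, period_39(delta0, gamma0))   # ~0.53190 and ~6.49945

h_c = np.sqrt((np.pi * lam / 2)**2 - lam**2 - kappa)  # Eq. (3.12)
print(h_c, 1.682 * h_c)                    # ~1.21, i.e. H_c ~ 2.04 T
```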
IV. SPIN-FLOP PHASE
We now begin to address questions of dynamics based on the complete Lagrangian L = L_0 − V of Eq. (2.15) applied for a field h = (0, 0, h). If we also insert the spherical parameters (3.5), we find the expressions for L_0 and V given in Eqs. (4.1) and (4.2), where ∇ = (∂_1, ∂_2) is the usual 2D gradient operator, while the Laplacian will be denoted in the following by Δ = ∂²_1 + ∂²_2. We first study the high-field commensurate phase (h > h_c), where the absolute minimum of the classical energy is thought to be the uniform spin-flop state n = (1, 0, 0), or Θ = π/2 and Φ = 0. Small fluctuations around this state are calculated by introducing Θ = π/2 + f and Φ = g in Eqs. (4.1) and (4.2) and keeping terms that are at most quadratic in the small amplitudes f = f(x, y, τ) and g = g(x, y, τ). Linear terms do not appear because we are expanding around a stationary point of the energy functional, whereas constants and total derivatives can be omitted because they do not contribute to the equations of motion. The corresponding linearized equations are given in Eq. (4.3). Performing the usual Fourier transformation with frequency ω and wave vector q = (q_1, q_2), one obtains a homogeneous system whose solution requires that the corresponding determinant vanish. This condition leads to the two branches of eigenfrequencies in Eq. (4.4), which will be referred to as the optical or acoustical mode, corresponding to the plus or minus sign, respectively.
A notable feature of the calculated dispersions is their strong anisotropy. In particular, the low-q acoustical branch, given in Eq. (4.5), demonstrates that the spin-wave velocity depends on the direction of propagation. It also makes it clear that an instability arises when γ² < 4λ². In fact, the complete acoustical frequency of Eq. (4.4) becomes purely imaginary over a nontrivial region in q-space when h < h_2, where

h_2 = (3λ² − κ)^{1/2}    (4.6)

is a new critical field. For our choice of parameters (3.16), h_2 = √3 ≈ 1.73, or H_2 ≈ 2.9 T in physical units. It is also interesting to examine the gap of the optical branch at q = 0, where ω_+(q = 0) = γ = (κ + λ² + h²)^{1/2}. This result may be used to illustrate our earlier claim concerning the role of the scale parameter ε. If we recall the definition of the rescaled parameters (2.11) and also include the factor 2s√2 εJ to account for the physical unit of frequency, the calculated gap is independent of ε and is expressed entirely in terms of constants that appear in the original discrete Hamiltonian of Sec. II. Hence, in the KSEA limit, we find a gap in agreement with the magnon gap given in Ref. 5. Incidentally, this special result is the only feature of the spectrum actually calculated in the above reference for nonzero field. The complete dispersions are illustrated in Fig. 4 for H = 3 T, and for spin-wave propagation along the x- or the y-axis. The anisotropy of the spectrum is made especially apparent by the fact that the dispersion of the acoustical mode is strictly linear in the x-direction, but almost ferromagnetic-like in the y-direction because the chosen field is only slightly greater than the critical field H_2 ≈ 2.9 T. The numerical data for Fig. 4 were obtained from Eq. (4.4) applied for our choice of units and constants given in Eq. (3.16). Thus we set λ = 1 and γ² = 1 + h², with h = 3/1.682 = 1.784, and also include an overall factor 0.241 meV to account for the physical unit of energy. Finally, Q = εq is the wave vector defined on the complete square lattice formed by the Cu atoms within each layer, while relative units are defined from Q [r.l.u.].
Unfortunately, there seem to exist no experimental data in the field region H ≳ 3 T. In fact, the only published data 5 were obtained for H = 2.5 T < H_2 and spin-wave propagation along the x-axis. For this special direction (q_2 = 0) the theoretical dispersions (4.4) do not "see" the instability. One may then deliberately apply them for H = 2.5 T and compare the results to the actual data, as is done in Fig. 5, where a systematic disagreement is apparent in both dispersions. In particular, the numerical fits to the data represented by dashed lines indicate a significant 20% reduction in the measured spin-wave velocity, as was already noted in Ref. 5.
Of course, our earlier discussion makes it clear that the dispersions (4.4) cannot be applied for H = 2.5 T because the corresponding ground state is predicted to be unstable. At best, the fully polarized spin-flop state n = (1, 0, 0) survives in the field region H < H_2 as a metastable state thanks to some small tetragonal anisotropy that may be present in the discrete system 3 but drops out of the leading continuum approximation. An appealing scenario suggested by our calculation is that the system actually enters a different (intermediate) phase for H < H_2, which consists of some sort of mixed domains with no definite axis of polarization. Such a picture could explain the effective reduction of the spin-wave velocity, also taking into account the anisotropy of the acoustical mode.
As mentioned already, the continuum model does not contain anisotropies that would necessarily polarize the staggered magnetization along the x (or the y) axis. Instead, there is a family of degenerate spin-flop states n = (cos Φ_0, sin Φ_0, 0) with the same energy for any constant angle Φ_0. The corresponding small fluctuations are now studied by introducing Θ = π/2 + f and Φ = Φ_0 + g in Eqs. (4.1) and (4.2). A short calculation similar to the one presented for Φ_0 = 0 leads to magnon dispersions expressed in terms of e = sin Φ_0 e_1 + cos Φ_0 e_2, the unit vector obtained by rotating e_2 by an angle −Φ_0. The emerging picture is yet another manifestation of the peculiar nature of the U(1) symmetry (3.4), in some respects similar to the bisection rule discussed in the concluding paragraph of Sec. III. In any case, the main conclusion of the present section persists; namely, the acoustical mode develops maximum instability along the direction e and leads to the same critical field given earlier in Eq. (4.6).
The nature of the intermediate phase will be discussed in Sec. VI. The present section is concluded with a word of caution concerning the validity of the continuum approximation at nonzero field, which roughly requires that g_c μ_B H ≪ J. This strong inequality becomes increasingly marginal for field strengths in the region H ≳ H_2.
V. SPIRAL PHASE
The calculation of the low-energy magnon spectrum in the spiral phase (h < h_c) is significantly more complicated, but the general strategy is identical to that followed in Sec. IV. Hence we introduce new fields according to

Θ = θ(x) + f,    Φ = g / sin θ,    (5.1)

where θ = θ(x) is the profile of the ground-state spiral given by Eq. (3.8), while f = f(x, y, τ) and g = g(x, y, τ) account for small fluctuations. The special rescaling chosen in the second equation is equivalent to working in a rotating frame 18 whose third axis is everywhere parallel to the direction of the background staggered magnetization n = (sin θ, 0, cos θ).
The new fields (5.1) are introduced in the complete Lagrangian given by Eqs. (4.1) and (4.2), which is then expanded to second order in f and g. The required algebra is lengthy, but the final result for the linearized equations, Eqs. (5.2), is sufficiently simple. Here U_1 = −γ² cos(2θ) and a second potential U_2 are effective potentials that can be calculated for any desired set of parameters, as explained in Sec. III. The general idea that the calculation of the spectrum in a spiral antiferromagnet can be reduced to a Schrödinger-like problem in a periodic potential is not new 19 , but the specific structure of Eqs. (5.2) requires special attention. We found it instructive to consider first the special case of spin-wave propagation along the x-axis (∂_2 f = 0 = ∂_2 g) at zero external field (h = 0). This is actually the only case for which the low-energy spectrum was previously calculated starting from the discrete Hamiltonian 5 . If we further perform the temporal Fourier transformation with frequency ω, Eqs. (5.2) reduce to

−f″ + U_1 f = ω² f,    −g″ + U_2 g = ω² g,    (5.4)

where the prime denotes differentiation with respect to x. Therefore, in this special case, the eigenvalue problem is reduced to two decoupled 1D Schrödinger equations of the standard type with potentials U_1 and U_2 calculated at zero field. Also note that both potentials are periodic functions of 2θ, and thus their period is actually L/2, where L is the period of the background spiral.
The eigenvalue problems (5.4) are solved in Appendix A. The numerical procedure yields eigenfrequencies ω = ω(q_1) as functions of Bloch momentum q_1. The latter can be restricted to the zone [−2π/L, 2π/L], because the period of the potentials is L/2, or to the zone [−ζ, ζ] in relative units defined as in Sec. IV. Several low-lying eigenvalues are illustrated in Fig. 6a using a reduced-zone scheme. Solid and dashed lines correspond to the first and second eigenvalue problem in Eq. (5.4) and are superimposed in the same graph for convenience. We also find it convenient to refer to the two types of modes as acoustical and optical. In either case, there is only one discernible gap, which occurs between the first and the second band at the zone boundary. The calculated boundary gaps are 0.123 meV and 0.049 meV, respectively, while the absolute gap of the optical mode at the zone center is 0.170 meV. All of the above theoretical predictions agree with those obtained in Ref. 5 by a different method. They also agree with experiment, except for the small (0.049 meV) gap that has not yet been resolved at zero field.
The same results are depicted in Fig. 6b using an extended-zone scheme. In fact, this figure displays two replicas of the acoustical mode centered at ±ζ. The need for two replicas follows from the structure of dynamic correlation functions in the laboratory frame, rather than in the rotating frame actually used in the calculation of the magnon spectrum 5 . Our results in Fig. 6b are obviously consistent with both the experimental and the theoretical results obtained in the above reference at zero field.
We are now in a position to extend the calculation to the general case of nonzero field and arbitrary direction of spin-wave propagation. The external field enters Eqs. (5.2) in two distinct ways. First, it affects the structure of the potentials U 1 and U 2 because the background spiral is further distorted. Second, the field induces first-order time derivatives which originate in the "nonrelativistic" term of Eq. (4.1) and couple the two linear equations (5.2). Additional coupling between the two equations appears in the case of arbitrary direction of propagation because ∂ 2 f and ∂ 2 g no longer vanish. Altogether we are faced with a nonstandard eigenvalue problem that is also solved in Appendix A.
Here we present explicit results for four typical values of the rationalized field h = 0, 0.3, 0.6 and 0.9 which will be quoted from now on by their rounded physical values H = 0, 0.5, 1 and 1.5 T. In Fig. 7 we illustrate the calculated spectrum for spin-wave propagation along the x-axis (q 2 = 0) using a highly reduced zone scheme. An important check of consistency is provided by the fact that the H = 0 results of Fig. 7 agree with those presented earlier in Fig. 6a, except that the zone is now reduced down to [−ζ/2, ζ/2] for reasons explained in Appendix A. Furthermore, we no longer employ solid and dashed lines to distinguish between acoustical and optical modes. Such a distinction is not a priori possible in the current algorithm because of the coupling (hybridization) of the two types of modes at nonzero field.
One should keep in mind that the extent of the zone [−ζ/2, ζ/2] slides with the applied field, a feature that is not apparent in Fig. 7 because the scale of the abscissa is adjusted accordingly. The incommensurability parameter ζ = 0.0273 measured at H = 0 is used as input in our calculation. The calculated values for H = 0.5, 1 and 1.5 T are ζ = 0.0271, 0.0264 and 0.0245.
At first sight, it would seem difficult to extract useful information from the highly convoluted spectra shown in Fig. 7. Nevertheless, the most vital information concerning the low-energy dynamics is easily abstracted from Fig. 7 because the low-lying bands are clearly segregated. In particular, it is still possible to distinguish between the acoustical and the optical mode, at least in an operational sense. Thus we unfold the first six branches back to the zone [−ζ, ζ] and then proceed to the extended-zone scheme of Fig. 6b including two replicas of the acoustical mode centered at ±ζ. The resulting low-energy spectra are shown in Fig. 8.
The H = 0 entry of Fig. 8 is but a magnified version of the lower-central portion of Fig. 6b, as expected. This version reveals a certain "anomaly" that is not conspicuous in Fig. 6b, namely, a relative crossing between the two modes in a narrow region around the zone center. The calculated maximum splitting of 0.005 meV is within the error margin of the continuum approximation and, in any case, beyond experimental detection. But the resolution of this theoretical curiosity is interesting: when the direction of spin-wave propagation departs slightly from the x-axis (q_2 = 0) and/or a finite field is turned on, the crossing points become avoided crossings. Therefore, strictly speaking, the solid and dashed lines must be interchanged in the narrow region between the two crossing points. This explains the apparent slight inconsistency in the labeling of the five characteristic points of the spectrum denoted by 1, 2, 3, 4 and 5 in Fig. 8. The calculated energies at these five points are listed in Table I.

[TABLE I. Energy in units of meV at the five characteristic points of the spectrum denoted by 1, 2, 3, 4 and 5 in Fig. 8.]

We now concentrate on the optical mode. The gap E_2 = 0.176 meV calculated at zero field agrees with the measured 0.18(1) meV. Our calculation further shows that the above gap evolves quickly with increasing field to reach the asymptotic value 0.26 meV around which it oscillates mildly. The complete optical mode evolves into a snake-like dispersion with energy values in the range 0.25 meV < E < 0.29 meV. These predictions are generally consistent with experiment 5 . However, some of the finer details deserve closer attention. The calculated energy at point 5 in the spectrum remains practically constant at E_5 ≈ 0.31 meV for H ≲ 1 T, while a steep crossover takes place for higher field values which leads to E_5 ≈ 0.35 meV for H = 1.5 T. These predictions are also in agreement with experiment 5 . But the calculated splittings of the optical dispersion E_5 − E_3 = 0.02 meV and 0.06 meV, for H = 1 and 1.5 T, disagree with the measured 0.05 meV and 0.11 meV. It appears that the observed splittings are better described by E_5 − E_2 = 0.04 meV and 0.09 meV. In fact, the above identification may not be completely arbitrary. For instance, the lowest branch in the optical dispersion measured for H = 1.5 T shows a clear local maximum of 0.28 meV at the zone center, which agrees with the calculated maximum E_3 = 0.285 meV at the zone boundaries ±ζ rather than the gap E_2 = 0.255 meV at the zone center. It seems that the lowest branch in the observed optical dispersion for H = 1.5 T is composed of two replicas of the calculated dispersion centered at ±ζ. On the other hand, experimental data 5 at higher energies not shown in Fig. 8 indicate the appearance of two replicas centered at ±2ζ. Unfortunately, we cannot resolve this issue of proper replication of the basic modes because our current formalism does not directly address the relevant dynamic correlation functions.
Next we discuss the acoustical mode. Our calculation shows that the energy at point 4 in the spectra of Fig. 8 remains remarkably stable at E 4 ≈ 0.30 meV for all field values considered. This feature is also in agreement with experiment which indicates only a mild decline from the above value with increasing field. Nevertheless, a clear disagreement occurs in the lowest branch of the acoustical mode. Although explicit data points are not given for this branch by Zheludev et al. 5 , the solid lines in their Figs. 6 and 7, and the corresponding wording in their text, suggest that the lowest branch in the measured spectrum is also largely insensitive to the applied field. In contrast, our calculation predicts a robust reduction of the energy gap E 1 with increasing field (see Fig. 8 and Table I). The calculated spin-wave velocity is also reduced, albeit at a slower rate.
The preceding apparent disagreement with experiment is especially important because it is directly related to the issue of local stability of the spiral phase. Indeed, a careful numerical investigation reveals that the gap E 1 vanishes at the critical field h 1 ≈ 1.01, or H 1 ≈ 1.70 T, while an unstable mode develops for H > H 1 . This mode is first detected by the appearance of a real eigenvalue in the matrix M of Eq. (A5), when H crosses H 1 , which corresponds to purely imaginary frequency. As the field increases beyond H 1 the instability occurs over a nontrivial region in q-space. Therefore, the flat spin spiral constructed in Sec. III is predicted to be locally stable only for H < H 1 < H c .
It is interesting that the experimental work 4,5 already provided evidence for the existence of a critical field H 1 = 1.7 T that coincides with our theoretical prediction. However, one should also contemplate the possibility that such a coincidence may be fortuitous, in view of the apparent contradiction between experimental and theoretical predictions for the gap E 1 . In any case, our current result together with the discussion of Sec. IV clearly suggest the existence of an intermediate phase in the field region 1.7 T < H < 2.9 T. The nature of the intermediate phase is discussed in Sec. VI.
In the remainder of this section we take a different view of the low-energy magnon spectrum by considering spin-wave propagation along the normal to the plane of the flat spiral. Our algorithm is adapted to this case simply by setting the Bloch wave number q 1 = 0 and calculating frequencies as functions of the wave number q 2 in the y-direction. It is interesting that no theoretical or experimental results exist in this case even at zero field. Our results are illustrated in Fig. 9 for the same set of field values employed in the preceding discussion.
The most stable feature of Fig. 9 is its lowest branch which exhibits quadratic dependence on q 2 near the origin. Clearly this branch is the extension of the acoustical dispersion in the y-direction originating at its points where E = 0. Therefore, the complete acoustical mode is Goldstone-like in the x-direction but ferromagnetic-like in the y-direction. Such a characteristic anisotropy is in some respects similar to the situation encountered in the spin-flop phase discussed in Sec. IV.
Higher branches labeled as 1, 2, 3, 4 and 5 in Fig. 9 also possess a simple interpretation, for they are the extensions in the y-direction of the special spectral points numbered accordingly in our earlier Fig. 8. In contrast to the fundamental ferromagnetic-like branch, higher branches evolve vigorously with the applied field. In particular, branch 1 in Fig. 9 is quickly depressed with increasing field to become degenerate with the fundamental branch at the critical field H_1 = 1.7 T. Of course, this is the instability described earlier in the text viewed from a different perspective.
We have thus provided a fairly complete theoretical picture of the low-energy magnon spectrum, including predictions for which there exist no experimental data at present. It is interesting to see whether or not future experiments could resolve the apparent discrepancy in the field dependence of the magnon gap E 1 , and thus illuminate the important issue of local stability of the spiral phase, as well as confirm the predicted characteristic anisotropy in the low-energy spectrum.
VI. INTERMEDIATE PHASE
We now focus on the predicted intermediate phase and examine its nature through a direct numerical minimization of the complete energy functional W of Eqs. (3.1) and (3.2). The method of calculation is a relaxation algorithm formulated on the basis of a discretized form of the energy functional defined on a square grid. After long experimentation with 2D simulations, it progressively became apparent that the optimal configuration for h > h 1 is actually a 1D nonflat spiral characterized by a staggered magnetization whose three components are all different than zero.
Therefore, an accurate calculation of the nonflat spiral was eventually obtained by a relaxation algorithm applied directly to a 1D restriction of the energy functional, whose variation leads to the coupled stationary equations (6.1). These are ordinary differential equations because both angular variables Θ and Φ are assumed to be functions of the single coordinate x, while the prime again denotes differentiation with respect to x. Nevertheless, it does not seem possible to obtain analytical solutions of Eqs. (6.1), except for the case of the flat spiral (Φ = 0) discussed in Sec. III. A significant obstacle is the fact that the period of the nonflat spiral is not known a priori. Hence our numerical solution was carried out on a periodic 1D grid with specified length L, until a relaxed configuration was obtained with energy density w = w(L). We then varied L to achieve the least possible energy for each field h, and the corresponding optimal period L = L(h). An important check of consistency is that the above algorithm reproduces the results for the flat spiral obtained more directly in Sec. III, but only when h < h_1 = 1.01. Instead, a nonflat spiral emerges as the optimal solution for h > h_1. The calculated configuration is illustrated in Fig. 10 for a field value h = 1.21, deliberately chosen to be equal to the critical field h_c of the conventional CI transition. The energy of the nonflat spiral, depicted by a dashed line in Fig. 3a, is smaller than the energy of both the flat spiral and the uniform spin-flop state throughout the intermediate region h_1 < h < h_2. One should also stress that the nonflat spiral is here predicted to occur for a field applied strictly along the c-axis, and is not due to sample misalignment 4 or the presence of a transverse magnetic field 20 .
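To make the period-scan strategy concrete, here is a hedged Python sketch of the relaxation driver. Since the full nonflat energy functional is not reproduced above, the sketch relaxes the flat-spiral energy density of Eq. (3.7) as reconstructed earlier, for which Sec. III provides the exact answer (L ≈ 6.50 and w ≈ 0.234 at zero field); replacing the energy with the complete functional of Eqs. (3.1)-(3.2) would give the nonflat spiral.

```python
# Hedged sketch of the period-scan relaxation; the flat-spiral energy density
# V = (theta' - lam)^2/2 + gamma^2 cos^2(theta)/2 stands in for the full functional.
import numpy as np

lam, gamma, N = 1.0, 1.0, 200              # zero field, N grid points per period

def relaxed_energy(L, steps=60000):
    dx = L / N
    dt = dx**2 / 4                          # explicit diffusion stability bound
    x = np.arange(N) * dx
    u = np.zeros(N)                         # theta = 2*pi*x/L + u, with u periodic
    for _ in range(steps):
        th = 2 * np.pi * x / L + u
        upp = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
        u += dt * (upp + gamma**2 * np.cos(th) * np.sin(th))   # gradient flow
    th = 2 * np.pi * x / L + u
    thp = 2 * np.pi / L + (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    return np.mean(0.5 * (thp - lam)**2 + 0.5 * gamma**2 * np.cos(th)**2)

Ls = np.linspace(5.5, 8.0, 11)
ws = [relaxed_energy(L) for L in Ls]
print(Ls[int(np.argmin(ws))], min(ws))      # expect L ~ 6.5 and w ~ 0.234
```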
In a sense, the predicted intermediate phase smooths out the original sharp CI transition. This smoothing is also apparent in the calculated field dependence of the period L = L(h) which is inserted in Eq. (3.17) to yield the results for the incommensurability parameter shown by a dashed line in Fig. 3b. The same figure displays experimental data taken from Ref. 4 where they were analyzed in terms of the conventional CI transition based solely on a flat spiral. It should be noted that both the measured zero-field incommensurability parameter ζ(0) = 0.0273 and an experimental critical field H c = 2.15 T were used as adjustable parameters in the theoretical analysis of Refs. 4 and 5 to obtain a reasonable overall fit. Yet the experimental data indicate some smoothing of the CI transition near the critical field. This fact is made apparent in our Fig. 3b where theoretical results for both the flat spiral (solid line) and the nonflat spiral (dashed line) are calculated using as input only the zero-field parameters given earlier in Eq. (3.16).
Nevertheless, the results of Fig. 3b cannot be interpreted as unambiguous evidence for the existence of an intermediate phase, especially because the experimental data were taken at the relatively high temperature T = 2.4 K. It is feasible that the T = 0 theoretical predictions could be further focused by invoking deviation from KSEA anisotropy that is allowed by symmetry; i.e., by repeating the calculation for nonzero values of the free parameter κ. One should also keep in mind that a completely accurate description of the CI transition may not be attainable within the classical approximation.
The nonflat spiral exists as a stationary point of the energy functional throughout the intermediate phase and degenerates into a uniform spin-flop state polarized along the y-axis near the upper critical field h 2 = 1.73. Actually, our calculation was not pushed all the way to the critical field h 2 because of numerical difficulties that occur as the period grows to infinity. The theoretical analysis should be completed with a detailed study of the stability and dynamics of the nonflat spiral within the full 2D context, in a manner analogous to our treatment of the flat spiral in Sec. V. The required computational effort is too great to be included in the present paper, especially because the profile of the nonflat spiral is obtained numerically through the relaxation algorithm. A future analysis could, in principle, reveal the existence of yet another critical field within the intermediate region, beyond which the nonflat spiral may cease to be locally stable. It is thus important to also examine the nature of instability at the upper critical field h 2 , as discussed further in Appendix B.
The configuration of Fig. 10 may be viewed as a conical spiral that nutates around the y-axis. It is interesting that a simple conical spiral without nutation had been discussed theoretically in connection with the cholesteric-nematic transition in liquid crystals 9,10 but has not yet been observed experimentally because its realization requires an anomalously small bend modulus 11 . In contrast, the parameters of Ba2CuGe2O7 favor the occurrence of the currently predicted intermediate phase.
VII. CONCLUSION
We have presented a field theoretical description of the low-energy dynamics in the spiral antiferromagnet Ba 2 CuGe 2 O 7 . We have thus been able to calculate the low-energy magnon spectrum for any strength of the applied field and any direction of spin-wave propagation. In this respect, the present work significantly extends the results of Ref. 5 where the spectrum was calculated only at zero field and for propagation along the direction of the spiral. Therefore, our theoretical results are relevant for the analysis of experimental data obtained for nonzero field, which were previously analyzed mostly in terms of empirical formulas.
An interesting byproduct of this detailed spin-wave analysis is the identification of the two new critical fields H 1 and H 2 , and a corresponding prediction of an intermediate phase that does not seem to be inconsistent with available experimental data. The apparent discrepancy in the field dependence of the magnon gap E 1 pointed out in Sec. V needs to be clarified, but could be due to poor experimental resolution at this rather low energy scale (0.1 meV or less). The field dependence of the incommensurability parameter discussed in Sec. VI could be rectified by invoking a slight deviation from the KSEA limit that is allowed by symmetry. Susceptibility data 4 taken at T = 2 K display a rounded maximum which could be explained as a finite-temperature effect but does not a priori exclude an intermediate phase. Furthermore, the set of data for the magnon dispersion discussed in connection with Fig. 5 is too limited to provide a clear picture. Therefore, a clear identification or disproof of the intermediate phase may require additional experimental work guided by the theoretical predictions of the present paper.
On the other hand, it is desirable to carry out a complete theoretical analysis of the stability and dynamics of the intermediate phase along the lines outlined in Sec. VI. A related project is to extend our approach to the case of a field applied in a direction perpendicular to the c-axis 3 . The field-dependent modifications of the spiral can be computed on the basis of Eq. (3.19), and a corresponding calculation of the low-energy magnon spectrum can be carried out by a straightforward extension of the methods developed in Sec. V.
Finally, we must comment on the two basic approximations made in the present work. The adopted classical approach is equivalent to the usual semiclassical approximation obtained by the 1/s expansion restricted to leading order. The omitted quantum (anharmonic) corrections are not negligible in this 2D problem but are offset in part by the fact that the input parameters are consistently estimated within the classical approximation 1-5 . One should also question the validity of the continuum approximation, whose relative accuracy can be roughly estimated from ε² ≈ 0.03 at zero field, but may deteriorate in the presence of a strong external magnetic field.
Incidentally, the corresponding parameter ε in a typical weak ferromagnet such as an orthoferrite (YFeO 3 ) or a high-T c superconductor (La 2 CuO 4 ) is at least one order of magnitude smaller. In any case, the physical picture derived is sufficiently complete to provide a basis for a meaningful discussion of further refinements.
ACKNOWLEDGMENTS
We thank A. Bogdanov for bringing Refs. 20 and 21 to our attention, S. Trachanas for valuable suggestions concerning the eigenvalue problems studied in the present paper, and M. Marder for a careful reading of the manuscript. The work was supported in part by a Marie Curie Fellowship (HPMT-GH-00-00177-03), by a TMR program (ERBFMRXCT-960085), and by VEGA 1/7473/20.
APPENDIX A: EIGENVALUE PROBLEMS
The eigenvalue problems (5.4) were solved numerically, as explained here for the first equation. Taking into account that the period of the potential is L/2, the Bloch representation of the wave function reads

f(x) = e^{iq_1 x} Σ_n c_n e^{i4πnx/L},    (A1)

and the wave equation becomes an algebraic eigenvalue problem for the coefficients c_n, Eq. (A2), where the Fourier coefficients of the potential are given by Eq. (A3). Here we use the fact that U_1 is an even function of θ or x, and x = x(θ) is given by the integral (3.8). The resulting Fourier coefficients are listed in Table II, using as input the zero-field parameters quoted in Sec. III. The numerical procedure just described yields eigenfrequencies ω = ω(q_1) as functions of Bloch momentum q_1 that can be restricted to the zone [−2π/L, 2π/L], or [−ζ, ζ] in relative units. We now return to the general case of nonzero field and arbitrary direction of spin-wave propagation. We first rewrite Eqs. (5.2) in a form that contains only first-order time derivatives. Hence we treat u = ḟ and v = ġ as independent fields and introduce the four-component spinor X defined from X^T = (u, v, f, g). Then Eqs. (5.2) assume the compact form

Ẋ = M X,    (A4)
where M is the differential operator given in Eq. (A5). Here D_1 = −Δ + U_1, D_2 = −Δ + U_2, D_3 = 2λ sin θ ∂_2, D_4 = 2h cos θ, and I is the unit operator. The chief advantage of M is that it does not contain time derivatives. A superficial disadvantage is that M is not a hermitian operator. In fact, Eq. (A4) suggests that the eigenvalues of M are purely imaginary and come in pairs ±iω, where ω is the desired physical frequency. A real eigenvalue in M would correspond to purely imaginary physical frequency and thus indicate instability of the ground-state spiral. All of these features are explicitly realized in the following numerical calculation. Our task is then to construct a matrix representation of the differential operator M. Attention should be paid to the fact that the Bloch theorem must now be applied with the full period L of the spiral because of those terms in Eq. (A5) that are proportional to cos θ and sin θ. Hence the operator D_1 = −Δ + U_1 is replaced by a matrix (D_{1,nm}) with the elements of Eq. (A6), where q_1 is now restricted to the zone [−π/L, π/L], or [−ζ/2, ζ/2] in relative units, while q_2 is unrestricted because the spiral depends only on x. Accordingly, the Fourier coefficients of the potential are given by Eq. (A7), which differs from Eq. (A3) only in that the full period L, instead of L/2, is employed. As a result, odd coefficients in Eq. (A7) vanish, while the collection of even coefficients coincides with that obtained from Eq. (A3). The operator D_2 is treated in exactly the same way, replacing U_1 with U_2. On the other hand, the operator D_3 = 2λ sin θ ∂_2 in Eq. (A5) is replaced by 2λq_2 S, where S is an antisymmetric matrix whose n-th codiagonal has all its elements equal to a constant S_n, and D_4 = 2h cos θ is replaced by 2hC, where C is a symmetric matrix whose n-th codiagonal has all its elements equal to a constant C_n. An interesting fact is that both S_n and C_n vanish for even n. The most important terms are those with n = ±1, whereas higher-order terms account for distortion of the spiral from its ideal shape θ = λx. Such a distortion occurs even at zero field in the presence of KSEA anisotropy.
A finite-matrix representation of the differential operator M is then obtained by restricting the indices m and n to the finite interval [−N, N], where N may again be chosen as low as 20. The resulting nonsymmetric 4(2N + 1) × 4(2N + 1) matrix is diagonalized numerically to yield eigenvalues that are indeed purely imaginary and come in pairs ±iω, where ω = ω(q_1, q_2) is the sought-after physical frequency. We have thus obtained a number of results using as input the spiral parameters λ = 1, γ² = 1 + h², δ = δ(h) and L = L(h) calculated for each field h as explained in Sec. III. The numerical burden is insignificant and can be carried out interactively. Explicit results are discussed in Sec. V.
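As an illustration of the plane-wave procedure, the following hedged Python sketch diagonalizes the first equation of (5.4) at zero field, using the potential U_1 = −γ² cos(2θ) quoted in Sec. V and θ(x) inverted from the integral (3.8); the N ≈ 20 truncation follows the remark above. This is our own reconstruction, not the authors' code, and the factor 0.241 meV from Sec. III converts the output to physical units.

```python
# Hedged reconstruction of the Bloch diagonalization for -f'' + U1 f = w^2 f,
# with U1 = -gamma^2 cos(2*theta(x)) and period P = L/2 (zero field).
import numpy as np
from scipy.integrate import quad
from scipy.interpolate import interp1d

gamma, d2, L = 1.0, 0.53190, 6.49945       # zero-field parameters from Sec. III

# theta(x) on one period, inverted from x(theta) = int_0^theta dt/sqrt(d2 + g^2 cos^2 t)
thetas = np.linspace(0.0, 2 * np.pi, 2001)
xs = [quad(lambda t: 1.0 / np.sqrt(d2 + gamma**2 * np.cos(t)**2), 0.0, th)[0]
      for th in thetas]
theta_of_x = interp1d(xs, thetas)

P = L / 2                                   # period of U1 is L/2
M = 4096
xg = np.arange(M) * P / M
U1 = -gamma**2 * np.cos(2.0 * theta_of_x(xg))
U1hat = np.fft.fft(U1) / M                  # Fourier coefficients of U1

def bands(q1, N=20):
    n = np.arange(-N, N + 1)
    H = np.diag((q1 + 2 * np.pi * n / P)**2).astype(complex)
    for i, ni in enumerate(n):
        for j, nj in enumerate(n):
            H[i, j] += U1hat[(ni - nj) % M]   # Hermitian since U1 is real
    return np.sort(np.linalg.eigvalsh(H))     # eigenvalues are w^2

w = np.sqrt(np.abs(bands(0.0)[:4]))
print(0.241 * w)                            # low-lying energies in meV at q1 = 0
```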
APPENDIX B: VORTEX STATES
In the original picture of the CI transition 8 the high-field commensurate phase is rendered unstable through domain-wall nucleation at the critical field h_c to become a spiral phase for h < h_c. The instability at the higher field h_2 > h_c suggested by the spin-wave analysis of Sec. IV is clearly caused by 2D fluctuations. Therefore, it is conceivable that the uniform spin-flop phase is actually destabilized by nucleation of 2D vortices rather than 1D domain walls, as advocated by Bogdanov et al. 21 in a number of related models.
We thus search for genuinely 2D stationary points of the static energy that are compatible with U(1) symmetry. First, we introduce the usual polar coordinates (r, ψ) from

x = (r/γ) cos ψ,    y = (r/γ) sin ψ,    (B1)

where the overall rescaling by the constant γ will simplify subsequent calculations. A configuration that is strictly invariant under the U(1) transformation (3.4) reads

Θ = θ(r),    Φ = −ψ,    (B2)

where the minus sign in the second equation is again due to the peculiar nature of U(1) symmetry in the present problem. Under normal circumstances, e.g., an isotropic antiferromagnet in an external field 14 , both choices Φ = ψ and Φ = −ψ are compatible with axial symmetry and are referred to as vortex and antivortex.
Here only antivortices are possible within the axially-symmetric Ansatz, but they will be called vortices for brevity. When the Ansatz (B2) is introduced in the potential V of Eq. (4.2), the corresponding total energy W = ∫ V dx dy reads

W = π ∫₀^∞ r dr [ (dθ/dr)² + sin²θ/r² + cos²θ − ν (dθ/dr + cosθ sinθ/r) ],    (B3)

where ν = 2λ/γ is the only relevant parameter in this static calculation. Also note that we have dropped the additive constant term λ²/2 from the potential (4.2), and thus the energy of the uniform spin-flop state is set equal to zero. Variation of the energy functional (B3) with respect to the unknown amplitude θ(r) leads to the ordinary differential equation

r θ″ + θ′ + (r − 1/r) cosθ sinθ = ν sin²θ,    (B4)

which reduces to the familiar equation for ordinary spin vortices in the extreme limit ν = 0. For ν ≠ 0, solutions of Eq. (B4) exhibit slow decay at large distances, namely

θ(r) ≈ π/2 − ν/r,    r → ∞,    (B5)

which turns into exponential decay for ν = 0. Explicit solutions were obtained by a straightforward relaxation algorithm and are illustrated in Fig. 11 for three characteristic values of the parameter ν = 0, 1 and 2.
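The profile equation can also be solved with a standard boundary-value solver instead of relaxation. The following hedged Python sketch uses Eq. (B4) as reconstructed above, with the boundary conditions θ(0) = 0 and θ(R) = π/2, and checks the asymptotics (B5).

```python
# Hedged sketch: vortex profiles from Eq. (B4) as reconstructed above,
#   r*theta'' + theta' + (r - 1/r)*cos(theta)*sin(theta) = nu*sin(theta)^2.
import numpy as np
from scipy.integrate import solve_bvp

def vortex_profile(nu, R=40.0):
    def rhs(r, y):
        th, dth = y
        return np.vstack([dth, (nu * np.sin(th)**2 - dth
                                - (r - 1.0 / r) * np.cos(th) * np.sin(th)) / r])
    def bc(ya, yb):
        return np.array([ya[0], yb[0] - np.pi / 2])   # theta(0)=0, theta(R)=pi/2
    r = np.linspace(1e-3, R, 2000)
    y0 = np.vstack([np.pi / 2 * np.tanh(r), np.pi / 2 / np.cosh(r)**2])
    return solve_bvp(rhs, bc, r, y0, max_nodes=100000)

for nu in (0.0, 1.0, 2.0):
    sol = vortex_profile(nu)
    # far field: pi/2 - theta ~ nu/r for nu != 0 (Eq. (B5)), exponential for nu = 0
    print(nu, np.pi / 2 - sol.sol(30.0)[0])
```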
One may restrict the integral in Eq. (B3) to the finite range 0 < r < R and examine its behavior for large R. A short calculation taking into account the asymptotic expansion (B5) leads to

W = π (1 − ν²) ln R + finite terms,    (B6)

and thus the energy exhibits the familiar logarithmic divergence. This asymptotic result demonstrates the crucial role played by the parameter ν. For ν < 1, the energy of a single vortex is greater than the energy of the uniform spin-flop state by a logarithmically divergent quantity. This is the usual situation encountered in the case of ordinary vortices (ν = 0). The vortex energy is finite for ν = 1 and becomes again logarithmically divergent but negative for ν > 1. The special point ν = 2λ/γ = 1 leads to the same critical field h_2 given earlier in Eq. (4.6). Therefore, for h < h_2, the energy of the uniform spin-flop state can be lowered by vortex nucleation. Because of the logarithmic dependence of the energy on the size of the system, it is clear that a single vortex cannot by itself produce a thermodynamically significant effect. Instead, one should expect that a large number of vortices is created for h < h_2, probably in the form of a vortex lattice 21 . We have actually performed several numerical experiments using the full 2D relaxation algorithm described in the beginning of Sec. VI. Although we have already obtained some "spectacular" pictures indicating the formation of a vortex lattice, we have not yet been able to lower its energy below that of the nonflat spiral. It appears that the complete (2D) energy functional displays glassy behavior in the intermediate region, which may lead to several nearly degenerate local minima.
"year": 2001,
"sha1": "e085787262d9f3676d4a21d321f871069bc650f6",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0103217",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "b89c831ecc298805022c4d020a01c9e5b3e52993",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Background Children with severe asthma suffer from recurrent symptoms and impaired quality of life despite advanced treatment. Underlying causes of severe asthma are not completely understood, although genetic mechanisms are known to be important. Objective The aim of this study was to identify gene regulatory enhancers in leukocytes, to describe the role of these enhancers in regulating genes related to severe and mild asthma in children, and to identify known asthma-related SNPs situated in proximity to enhancers. Methods Gene enhancers were identified and expression of enhancers and genes were measured by Cap Analysis Gene Expression (CAGE) data from peripheral blood leukocytes from children with severe asthma (n = 13), mild asthma (n = 15), and age-matched controls (n = 9). Results From a comprehensive set of 8,289 identified enhancers, we further defined a robust sub-set of the high-confidence and most highly expressed 4,738 enhancers. Known single nucleotide polymorphisms, SNPs, related to asthma coincided with enhancers in general as well as with specific enhancer-gene interactions. Blocks of enhancer clusters were associated with genes including TGF-beta, PPAR and IL-11 signaling as well as genes related to vitamin A and D metabolism. A signature of 91 enhancers distinguished between children with severe and mild asthma as well as controls. Conclusions Gene regulatory enhancers were identified in leukocytes with potential roles related to severe and mild asthma in children. Enhancers hosting known SNPs give the opportunity to formulate mechanistic hypotheses about the functions of these SNPs.
Enhancers regulate genes linked to severe and mild childhood asthma - Supplementary Material

Tables S1 and S2 | TCs annotation
Figure S2 | Flowchart: A computational workflow
Figure S3 | Genes, TCs and enhancers in expression signatures
Table S3 | Differentially expressed (DE) genes in TCs
Figure S4 | TCs linked to mild and severe asthma
Table S4 | WikiPathway analysis of DE TCs
Figure S5 | Test gene expression signatures in an independent dataset
Figure S6 and Table S5 | Compare TCs to FANTOM5 TCs
Table S6 | Classification of enhancers
Figure S7 | Identification of novel enhancers
Figure S8 | Conservation analysis of enhancer regions
Figure S9 | Identification of genomic enhancer cluster
Table S7 | Genomic clusters of densely positioned enhancers
Table S8 | WikiPathway analysis of genomic enhancers cluster
Table S9 | Genomic enhancers cluster on chromosome 2
Table S10 | GWAS asthma lead and LD associated SNPs (with gene associations) within enhancers region
Figure S10 | Genes are regulated by enhancers
Figure S11 | Enhancers linked to severe and mild asthma
Table S11 | Differentially expressed (DE) enhancers
Table S12 | WikiPathway analysis of DE enhancers
Figure S12 | Hierarchical clustering in severe and mild asthma
Figure S13 | Identify enhancer-TC interactions
Table S13 | Genes are regulated by enhancers

We identified 78,176 tag clusters (TCs) by using the PARACLU clustering method [3]. We then normalized the raw expression counts as tags per million (TPM). Based on the total set of 78,176 TCs as comprehensive TCs, a subset of 40,273 TCs was defined as robust TCs after applying an expression cutoff of 3 TPM per sample [4].

Tables S1 and S2 | TCs annotation

CAGE TCs are annotated using Ensembl designations to map their locations relative to annotated genes (download date 2020/06).
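As a concrete illustration of the TC definitions above, the following minimal Python sketch implements the TPM normalization and the robust-TC filter. The input file name is hypothetical, and the "at least one sample" reading of the 3 TPM cutoff is our assumption.

```python
# Hedged sketch of the TPM normalization and robust-TC filter described above.
import pandas as pd

counts = pd.read_csv("tc_counts.tsv", sep="\t", index_col=0)  # TCs x samples (hypothetical file)
tpm = counts / counts.sum(axis=0) * 1e6      # tags per million, per library
robust = tpm[(tpm >= 3).any(axis=1)]         # assumes: >= 3 TPM in at least one sample
print(len(counts), len(robust))              # comprehensive vs robust TC counts
```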
Table S1: TCs annotation; an Excel file containing all CAGE tag clusters (TCs) and their corresponding associations to ENSEMBL transcripts, gene symbols, and biotypes (protein-coding, non-coding, etc.). The distribution of the genomic locations shows ~78% within protein-coding genes and ~5-6% processed transcripts (lncRNA, antisense, non-coding transcripts, and so on), while ncRNAs and pseudogenes each contain ~2%, and ~10% remain unknown. The comprehensive and robust sets are comparable and consistent in terms of their genomic locations.
Figure S2 | Flowchart: A computational workflow
Graphical abstract of the computational workflow.
Figure S2: Computational workflow of an integrative analysis to detect asthma-linked enhancers, TCs (Genes), and their interactions from CAGE-seq data.
Figure S3 | Genes, TCs and enhancers in expression signatures
We counted the intersection of our differentially expressed TCs, genes and enhancers in any of the 3 pairwise comparisons.
Figure S3: Venn diagram showing the numbers of genes (blue), TCs (blue in brackets) and enhancers (green). Dark red represents genes connected to enhancers. Note: two TCs can be associated with the same gene, so the number of genes in the Venn diagram does not add up to the total number of genes considered.
The expression signature consists of 381 TCs corresponding to 321 unique genes.
Table S4 | WikiPathway analysis of DE TCs
To determine the biological pathways associated with the differentially expressed genes reported in Table S3, we performed enrichment analysis using Enrichr, an integrative web-based tool [6], through WikiPathways [7]. The enrichment results listed in Table S4, an Excel file, include all significant (p-value < 0.05) pathways for the three pairwise comparisons.
Figure S5 | Test gene expression signatures in an independent dataset
We applied our gene expression signatures to an independent, publicly available dataset of gene expression microarray data for CD4+ and CD8+ T cells [8]. These data were from adults with severe and non-severe (mild) asthma. This study showed a widespread change in the activation of CD8+ but not CD4+ T cells from patients with severe asthma. First, we selected all available microarray probes and converted them into gene symbols by using Ensembl BioMart. We found 761 probes (panels A and B) and 286 unique probes (panels C and D) for the 321 genes (381 TCs) from the expression signature. We performed unsupervised clustering (Pearson correlation, complete linkage) of the expression profiles of the CD4+ and CD8+ T-cell types separately.
Figure S5: Testing gene expression signatures in an independent dataset. Unsupervised expression clustering was performed for 761 probe sets (several probe sets for the same genes included) from [8] corresponding to the gene expression signature of 321 genes (A and B). Clustering was performed for 286 probes where one probe was selected to represent one gene (C and D).
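A minimal sketch of this clustering step, assuming a probes x samples expression matrix read from a hypothetical file:

```python
# Hedged sketch: Pearson-correlation distance with complete-linkage clustering,
# as used for the CD4+/CD8+ expression profiles above.
import pandas as pd
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

expr = pd.read_csv("cd8_probes.tsv", sep="\t", index_col=0)   # probes x samples (hypothetical)
dist = 1.0 - expr.corr(method="pearson")                      # sample-sample distance
Z = linkage(squareform(dist.values, checks=False), method="complete")
dendrogram(Z, labels=list(dist.columns), no_plot=True)        # or plot with matplotlib
```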
Figure S6 and Table S5 | Compare TCs to FANTOM5 TCs

We compared our TCs with the TCs reported in FANTOM5 [9]. First, FANTOM5 samples were selected by matching the terms "CD", "Basophil", "Eosinophil", "Neutrophil", "Macrophage", "Monocyte", "Peripheral", "Whole", and "blood". This resulted in 231 samples. We then extracted TCs for three different settings based on the total CAGE peak expression level of the human blood cell types: i) >= 4M, ii) >= 2M and iii) >= 1M tag counts. Figure S6 shows the distribution of tag counts of the FANTOM5 samples, and Table S5 shows the overlaps between our TCs and the FANTOM5 TCs. Interestingly, after the comparison, we noticed that the smaller set of samples with high expression contained more TCs.
Figure S7 | Identification of novel enhancers
All CAGE enhancers were classified according to how many samples they were observed in (Table S6). For further analysis, we followed previously used identification criteria [10] focused on two sets: i) comprehensive enhancers (n = 8,289) with at least 2 tags in 1 sample, and ii) robust enhancers (n = 4,738) with at least 2 tags in 6 samples.
To identify novel enhancers, we compared our CAGE-defined enhancers with the FANTOM5 enhancers [11]. Despite a high level of concordance, many enhancers discovered in our comprehensive and robust sets were not present in FANTOM5. Those enhancers are termed novel comprehensive and novel robust enhancers, respectively.
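A hedged sketch of these set definitions (the input files are hypothetical, and the FANTOM5 comparison is simplified to an identifier lookup rather than a genomic-interval overlap):

```python
# Sketch of the comprehensive/robust/novel enhancer definitions above.
import pandas as pd

tags = pd.read_csv("enhancer_tags.tsv", sep="\t", index_col=0)    # enhancers x samples
comprehensive = tags[(tags >= 2).sum(axis=1) >= 1]   # >= 2 tags in >= 1 sample
robust        = tags[(tags >= 2).sum(axis=1) >= 6]   # >= 2 tags in >= 6 samples

fantom5_ids = set(pd.read_csv("fantom5_enhancers.txt", header=None)[0])
novel_robust = robust[~robust.index.isin(fantom5_ids)]   # absent from FANTOM5
print(len(comprehensive), len(robust), len(novel_robust))
```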
Figure S8 | Conservation analysis of enhancer regions
Conservation analysis can be used to annotate the functionality of the genome across vertebrates [12]. To estimate transcriptional evolutionary conservation of our predicted enhancers, we used the UCSC [13] phyloP100wayAll track. We randomly sampled 500,000 genomic regions of 4,001 bp from the intergenic regions of chromosomes 1 to 22, X, and Y to assess the background level of conservation. Deeptools (computeMatrix command) was used to compare conservation across the CAGE-defined comprehensive and robust enhancer sets, the novel comprehensive and novel robust sets, and the random non-genic regions [14].
Figure S8: Conservation analysis of enhancer regions. The x-axis shows the distance from the center of the regions and the y-axis shows the coverage of the PhyloP100 vertebrate conservation score for robust enhancers (n = 4,738), comprehensive enhancers (n = 8,289), novel comprehensive enhancers (n = 3,277), novel robust enhancers (n = 1,439) and random non-genic regions (n = 500,000).
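A hedged sketch of the background sampling described above: draw random 4,001-bp windows from intergenic space. In practice the intergenic intervals would come from subtracting gene annotations from chromosome sizes; here they are placeholder coordinates.

```python
# Sample random fixed-size windows from placeholder intergenic intervals.
import random

random.seed(1)
intergenic = [("chr1", 1_000_000, 5_000_000), ("chr2", 2_000_000, 9_000_000)]  # placeholder

def sample_windows(intervals, n=500_000, size=4001):
    spans = [(c, s, e) for c, s, e in intervals if e - s >= size]
    out = []
    for _ in range(n):
        c, s, e = random.choice(spans)            # pick an interval, then a start within it
        start = random.randrange(s, e - size + 1)
        out.append((c, start, start + size))
    return out

windows = sample_windows(intergenic, n=10)        # small n for illustration
print(windows[:3])
```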
Figure S9 | Identification of genomic enhancer cluster
We identified clusters of densely positioned genomic enhancers, which were delineated based on linear nucleotide proximity using a sliding-window approach. A window size of 15 kb around the enhancers was selected, and the genomic enhancer clusters were determined using the BEDtools cluster command. Additional 10 kb windows were extended from both the beginning and the end of each genomic enhancer cluster to identify genes that coincide with the cluster.
Figure S9: Schematic drawing depicting how genomic enhancer clusters were identified using sliding windows.
Table S7 | Genomic clusters of densely positioned enhancers
The list of genomic enhancer clusters is sorted by cluster size, along with the corresponding genes and GWAS hit SNPs.
Table S8 | WikiPathway analysis of genomic enhancer clusters
We utilized genomic enhancer clusters (cluster size >= 8), expanded the window around these clusters by 10 kb, and examined the genes in their vicinity. Enrichment analysis of these genes was conducted using Enrichr, an integrative web-based tool (Chen et al., 2013), accessed through WikiPathways (Pico et al., 2008). Table S8 in the supplementary material details the significant pathways (p < 0.5) associated with genes from the top 26 clusters.
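The core statistic behind such enrichment tools is a Fisher/hypergeometric overlap test; a minimal sketch with hypothetical counts (not the actual cluster results):

from scipy.stats import hypergeom

N, K, n, k = 20_000, 200, 48, 6     # background genes, pathway size, gene list, overlap
p = hypergeom.sf(k - 1, N, K, n)    # P(overlap >= k)
print(f"enrichment p-value: {p:.2e}")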
Table S10 | GWAS asthma lead and LD-associated SNPs (with gene associations) within enhancer regions
Table S10, an Excel file, lists the GWAS asthma lead SNPs and LD-associated SNPs that were located within enhancer regions. To establish which SNPs fall within the enhancer regions, we considered the disease traits in Table 2 and chose a genomic region extending 500 bp upstream of the 5' end of each enhancer and 500 bp downstream of its 3' end. We explored 12 different traits in our search for SNPs associated with asthma; unfortunately, not all of these traits aligned with enhancer regions.

(CXCL1 figure, caption fragment) (D) The lower panel shows the expression analysis from our CAGE data. The lower-left plot demonstrates the correlation between the TC (CXCL1) and the enhancer. The lower-middle and right plots display the expression in TPM for control, MA, and SA in a box plot for the TC (CXCL1) and the enhancer, respectively.
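A minimal sketch of the 500 bp flank-and-overlap step described under Table S10 above; the enhancer and SNP coordinates below are illustrative assumptions, not entries of the table:

FLANK = 500
enhancers = [('chr4', 74_700_000, 74_700_400)]       # hypothetical enhancer
snps = [('chr4', 74_699_700, 'rs0000001')]           # hypothetical SNP

hits = [
    (snp_id, chrom, start, end)
    for chrom, start, end in enhancers
    for snp_chrom, pos, snp_id in snps
    if snp_chrom == chrom and start - FLANK <= pos <= end + FLANK
]
print(hits)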
The CXCL1 gene has been described in the context of asthma [15][16]. CXCL1 is expressed broadly in many cell types and tissues, including smooth muscle and lymphatic endothelial cells (Figure S14). CXCL1 expression was elevated in children with severe asthma compared to mild asthma. We identified two robust enhancers, 3.6 kbp and 5.4 kbp downstream of CXCL1, both of which display the distinct bimodal and bidirectional expression patterns characteristic of enhancers. Both enhancers were expressed specifically in neutrophils and CD14+ monocytes, and their expression was elevated in children with severe asthma. Moreover, the expression of these two enhancers correlated with the expression of CXCL1, indicating that the gene might be regulated by both enhancers in neutrophils and CD14+ monocytes.
The CXCL1 gene is possibly regulated by two enhancers. Two enhancers were identified downstream of CXCL1 (A), which were expressed in neutrophils and CD14+ monocytes (B). The expression of both enhancers correlated with CXCL1 expression (C). CXCL1 as well as both enhancers show elevated expression in children with severe asthma (D).
(A) The upper panel depicts the Zenbu genome browser with UCSC gene models. CAGE TSS expression is shown for both strands: green for the forward strand and purple for the reverse strand. All identified TCs and enhancers are visualized (red for novel, blue for robust, and black for comprehensive enhancers). Significantly correlated TCs and enhancers are connected by a light blue interaction line. The zoom-in box displays the expression of this enhancer, which might regulate VAV3. This enhancer is a robust enhancer. Asthma-related GWAS lead SNPs and LD-associated SNPs are shown in brown. Within a 10 kb window from the enhancer there are three GWAS SNPs, related to the response to bronchodilators, at distances of 8.4 kb, 5.2 kb, and 6.6 kb from the enhancer (brown colored). Promoter capture Hi-C data in different cell lines are also shown.
(B, C) The middle panel shows the expression in different cell types from FANTOM5, with VAV3 on the left side and the enhancer on the right.
(D) The lower panel shows the expression analysis from our CAGE data. The lower-left plot demonstrates the correlation between the TC (VAV3) and the enhancer. The lower-middle and right plots display the expression in TPM for control, MA, and SA in a box plot for the TC (VAV3) and the enhancer, respectively.
(E) PTEN: PTEN, an airway hyperresponsiveness-relevant gene, is linked to (regulated by) an enhancer.
(A) The upper panel shows the Zenbu genome browser with UCSC gene models. CAGE TSS expression is displayed for both strands: green for the forward strand and purple for the reverse strand. All identified TCs and enhancers are visible (red for novel, blue for robust, and black for comprehensive enhancers). Significantly correlated TCs and enhancers are linked by a light blue interaction line. The zoom-in box illustrates the expression of this enhancer, which may regulate PTEN. Several enhancers are robust and novel. The blue line indicates the range of this cluster of densely positioned enhancers, which contains GWAS asthma lead SNPs and LD-associated SNPs related to FEV1/FVC (brown colored). Promoter capture Hi-C data from CD4 cell lines are also included.

Table S11, an Excel file, provides only the significantly up- or down-regulated enhancers.
Table S13, an Excel file, lists 5,056 possible enhancer-TC interactions within TAD boundaries.
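A sketch of how such candidate pairs can be scored, assuming hypothetical expression matrices for enhancer-TC pairs within the same TAD; the Pearson correlation, Benjamini-Hochberg FDR, and R >= 0.50 cut-off mirror the criteria used here:

import numpy as np
from scipy.stats import pearsonr
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(2)
enh_expr = rng.normal(size=(50, 37))   # candidate enhancers across 37 samples
tc_expr = rng.normal(size=(50, 37))    # TCs paired with them (same TAD)

r, p = zip(*(pearsonr(e, t) for e, t in zip(enh_expr, tc_expr)))
reject, fdr, _, _ = multipletests(p, alpha=0.05, method='fdr_bh')
keep = [i for i in range(len(r)) if reject[i] and r[i] >= 0.5]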
Table S14 | Enhancers intersecting with genes
Table S14 shows our interactions confirmed by publicly available promoter capture Hi-C data from several populations of human leukocytes, providing experimental support for the enhancer-gene interactions. We extended the section "Enhancers possibly regulate expression of asthma genes" and the corresponding Table 3 by reporting the support of the identified enhancer-gene interactions by promoter capture Hi-C data [20]. We also looked at mRNA differential expression in normal tissues according to GTEx [21] for the RUNX3 gene; this gene is overexpressed mostly in whole blood and lung.
Figure S1: TCs selection. Distribution of TCs based on their expression in log2TPM. The x-axis represents log2TPM and the y-axis shows the TC density. The blue dotted line corresponds to an average expression cut-off of 3 TPM per sample (37*3 = 111 TPM in total). TCs on the right side of the dotted line are defined as the robust set.
Figure S4 | TCs linked to mild and severe asthma
Volcano plot of all differentially expressed CAGE TCs colored by significance.
Figure S4: Volcano plot of pairwise differential expression analysis of TCs. The x-axis shows the expression change in log2 scale and the y-axis the FDR in -log10 scale. TCs (genes) that passed the given FDR significance level are highlighted in blue.
Figure S6: The distribution of tag counts of FANTOM5 samples. Expression cut-offs of 4M, 2M, and 1M raw read counts per library resulted in 45, 142, and 172 highly expressed libraries, respectively.
Figure S7: Expression of comprehensive enhancers (n = 8,289) and robust enhancers (n = 4,738) in all individuals/samples (n = 37). Red represents comprehensive enhancers (n = 5,012) and pink robust enhancers (n = 3,299) found by the FANTOM5 project; cadet blue and bright blue represent the novel comprehensive (n = 3,277) and novel robust enhancers (n = 1,439) discovered in this work.
(B, C) The middle panel displays the expression in various cell types from FANTOM5, with SBNO2 on the left side and the enhancer on the right. (D, E) The lower panel presents the expression analysis from our CAGE data. The lower-left plot demonstrates the correlation between the TC (SBNO2) and the enhancer. The lower-middle and right plots depict the expression in TPM for control, MA, and SA in a box plot for the TC (SBNO2) and the enhancer, respectively.

(B) SLC19A1: SLC19A1 is an example of a lung function (FEV1/FVC)-related gene linked with (possibly regulated by) our CAGE TCs and enhancers. (A) The upper panel depicts the Zenbu genome browser with UCSC gene models, showing CAGE TSS expression for both strands: green for the forward strand and purple for the reverse strand. All identified TCs and enhancers are visualized. Significantly correlated TCs and enhancers are connected by a pink interaction line. The zoom-in box displays the expression of this enhancer, which might regulate SLC19A1. Asthma-related GWAS lead SNPs and LD-associated SNPs are shown in brown. This enhancer is a robust enhancer close to a GWAS SNP linked with lung function (FEV1/FVC). (B, C) The middle panel shows the expression in different cell types from FANTOM5, with SLC19A1 on the left side and the enhancer on the right. (D, E) The lower panel shows the expression analysis from our CAGE data. The lower-left plot demonstrates the correlation between the TC (SLC19A1) and the enhancer. The lower-middle and right plots display the expression in TPM for control, MA, and SA in a box plot for the TC (SLC19A1) and the enhancer, respectively.

(C) CXCL1: CXCL1 is regulated by two enhancers in neutrophils. (A) The upper panel depicts the Zenbu genome browser for gene models with UCSC genes and RefSeq genes. It shows CAGE TSS expression for both strands: green for the forward strand and purple for the reverse strand. All identified TCs and enhancers are visualized. Significantly correlated TCs and enhancers are connected by a pink interaction line. The zoom-in box displays the expression of this enhancer, which might regulate CXCL1. This enhancer is a robust enhancer. (B, C) The middle panel shows the expression in different cell types from FANTOM5, with CXCL1 on the left side and the enhancer on the right.
(B, C) The middle panel shows the expression in different cell types from FANTOM5, with PTEN on the left side and the enhancer on the right. (D) The lower panel shows the expression analysis from our CAGE data. The lower-left plot demonstrates the correlation between the TC (PTEN) and the enhancer. (E) The lower-middle and right plots show the expression in TPM for control, MA, and SA in a box plot for the TC (PTEN) and the enhancer, respectively.
Figure S11 | Enhancers linked to severe and mild asthma
Differentially expressed enhancers identified in three contrasts are shown in volcano plots.
Figure S11: Differential expression analysis of enhancers using glm. The volcano plots show differentially expressed enhancers. The x-axis depicts the expression change in log2 scale; the y-axis depicts the significance (FDR) of the fold changes in -log10 scale. Enhancers are named by the genes closest to them. Enhancers (closest genes) that passed the given significance levels are highlighted in blue. Severe vs. control: 39 enhancers; mild vs. control: 52 enhancers; severe vs. mild: 9 enhancers.
Figure S12 | Hierarchical clustering in severe and mild asthma
Figure S13 | Identifying enhancer-TC interactions
Flow chart illustrating how we identified possible interactions between enhancers and TCs.
Figure S13(a): Workflow to identify possible regulatory interactions between enhancers and tag clusters (TCs).
Figure S13(b): Relationship of FDR values and correlation coefficients for enhancer-gene interaction candidates. Enhancer-gene interactions with R >= 0.50 are provided in Table S13.
Table S14 | Enhancers intersecting with genes
Figure S14 | Expression of RUNX3 across tissues
Table S15 | Enhancers overlap with McErlean et al.'s asthma enhancers
Table S2: The genomic locations of the CAGE TCs, the comprehensive and the robust set.
Table S3 | Differentially expressed (DE) genes in TCs
DE TC raw count matrices were submitted to EdgeR version 3.28.0 [5] for differential expression analysis (dispersion estimation, glm model) by pairwise comparison among severe asthma, mild asthma, and control. Table S3: DE TCs; an Excel file containing only the significantly (after FDR correction) up- or down-regulated genes (CAGE TCs).
Table S5: Comparison of the identified TCs to the FANTOM5 TCs.
Table S6 | Classification of enhancers
Table S6: Classification of enhancers. The detected enhancers were classified in terms of how many samples they were observed in. The classification letters correspond to Table S6 in the GitLab repository (enhancer raw-count table).
Table S7: Genomic enhancer cluster coordinates are provided with associated genes and SNPs for clusters containing up to 7 enhancers.
Table S8: Pathway analysis (p < 0.5) was conducted for 48 genes in the vicinity of the top 26 genomic enhancer clusters.
Table S9 | Genomic enhancer cluster on chromosome 2
Table S9: Pathway analysis for the genomic enhancer cluster on chromosome 2.
The upper panel displays the Zenbu genome browser featuring UCSC gene models. CAGE TSS expression is depicted for both strands: green indicates the forward strand and purple the reverse strand. All identified TCs and enhancers are visualized, with red representing novel enhancers, blue robust enhancers, and black comprehensive enhancers. Pink interaction lines connect significantly correlated TCs and enhancers. The zoom-in box illustrates the expression of this enhancer, which potentially regulates SBNO2. This enhancer is not only novel but also part of a genomic cluster of densely positioned enhancers. The blue line delineates the range of this genomic enhancer cluster, which contains GWAS SNPs, including one childhood-onset asthma SNP (light blue colored). Asthma-related GWAS lead SNPs and LD-associated SNPs are shown in brown.
Table S11 | Differentially expressed (DE) enhancers
CAGE-defined enhancer raw count matrices were used as input for EdgeR to estimate differential expression by pairwise comparison among severe asthma, mild asthma, and control. Table S12 lists SNPs in enhancers (10 kb window) of interactions where both overlap: robust promoter with the bait end and robust enhancer with the other end (OE). *Outside gene.
Table S15 | Enhancers coincide with McErlean et al.'s asthma enhancers
Table S15: Identified enhancers are validated against McErlean asthma enhancers without flanking regions and with flanking regions of 1 kb. | 2024-07-14T15:54:21.234Z | 2024-07-01T00:00:00.000 | {
"year": 2024,
"sha1": "7601114be7e3033e3da00603639f2f57d6ea6ebe",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.heliyon.2024.e34386",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5e1e0cd80260d3e93a5516bdb7b5ca8378c86865",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14451760 | pes2o/s2orc | v3-fos-license | Evaluation of the effectiveness of a chemical dependency counseling course based on Patrick's model and the partners model.
Background: The twelve-step program is one of the programs administered to help overcome drug abuse. In this study, the effectiveness of a chemical dependency counseling course was investigated using a hybrid model. Methods: In a survey with a sample size of 243, participants were selected using a stratified random sampling method. A questionnaire was used for collecting data, and a one-sample t-test was employed for data analysis. Findings: The chemical dependency counseling course was effective from the point of view of graduates, chiefs of rehabilitation centers, people in recovery and their families, and managers of the Rebirth Society, but not from the point of view of professors and lecturers; this last group judged the course effective only at the performance level. Conclusion: It seems that the chemical dependency counseling course had appropriate effectiveness and led to changes in attitudes, increased awareness, a combination of knowledge and experience, and ultimately increased counseling efficiency.
Introduction
Today, definitions of addiction place more emphasis on its social and psychological aspects. For example, addiction has been defined as "a mental, social and economic illness that arises from use of unnatural and illegal substances such as alcohol, opium, hashish, etc., leads to physiological dependence of the (addicted) person on these materials, and has adverse effects on the person's physical, psychological and social performance". 1 Following this emphasis on the social aspects of addiction, the role of peer groups is considered important. A peer group is a primary, informal group of people who share a similar status or interact within the same social group; its members have similar interests, ties and backgrounds and are linked by a common assumption. 2 One of the programs based on the importance and role of peer groups is the twelve-step program, which links anonymous addicts with their sponsors in order to overcome drug abuse. 3 In the process of support, an addict who continuously participates in addiction programs and makes good progress helps other addicts achieve their own maintenance and improvement. 4 Empowerment of the members of the peer group guides individuals and peer communities toward personal, social and relational change. 5 Considering the importance of the peer-group role and its social, mental and cultural dimensions, and in parallel with macro policies in the areas of prevention, treatment and demand reduction, the Charitable Society of Rebirth drew on the potential of peer groups in Iran and held chemical dependency counseling courses in Tehran and some other cities, based on the principles of the twelve-step program and on scientific foundations. In other words, in an innovative executive course, the Charitable Society of Rebirth trained more than 400 addiction rehabilitation counselors and family counselors in cooperation with Pierce College and the University of Social Welfare and Rehabilitation Sciences of Iran, in parallel with national strategies to reduce demand and help bring about change in drug control.
The chemical dependency counseling course is a one-year course (two semesters) for two groups: people in recovery and their families, and experts in the behavioral sciences and fields related to addiction. The course has been registered with the Technical and Vocational Training Organization of Iran under two employment and training standards, "addiction rehabilitation counselor" and "addict family counselor". In these courses, students learn the scientific basis of treatment and counseling. After completing the course, they can work as adjuvants in residential centers, clinics and other related facilities. The curriculum contains 45 theoretical and practical units for the recovery group and 41 units for the family group. Its content includes individual counseling, time dependency, addictive disorders, intervention for withdrawal, group therapy skills, addiction and the family, understanding medications, adjuvant principles, alternative addiction treatment methods, etc.
The main objectives of adjuvant training include: 1. Entrepreneurship and job creation in the areas of treatment and addiction harm reduction, particularly for the socially disadvantaged and recovering drug users.
2. Training addiction counselors and using the capacity of peer groups.
3. Helping people who lack formal scientific credentials but can serve the addict community as counselors.
4. Introducing the twelve-step philosophy to the scientific community and helping physicians and psychiatrists understand and utilize the philosophical principles of the twelve steps.
Addiction adjuvant: By passing the educational course, this person can perform tasks such as individual counseling in drug addiction treatment and prevention, referring addicts to support and care systems, raising awareness about risk behaviors and the prevention of viral diseases such as hepatitis, advising people addicted to chemicals and drugs, and counseling those who are quitting addiction in short-term residential centers, other drug treatment centers, and drop-in centers (DIC). 6

Counselor of the addict's family: By passing the educational course, this person can provide guidance and counseling to family members of people using or recovering from drugs in addiction counseling centers, including individual counseling of addicts' families on preventing and treating drug use, how to deal with an addicted family member, and knowledge of medications. 7

Educational effectiveness: The word effectiveness has its origin in the Latin root "effectīvus", which means creative, productive, or used effectively. 8 Peter Drucker states that effectiveness relates to getting the right things done. 9 Effectiveness can be defined as the ability to produce the desired result: something is considered effective when the intended or expected outcome has been realized. 10 In education, instructional effectiveness means "compliance with the expectations of scholars, desires, goals, doing something right, and the skills, knowledge and attitudes acquired in training". 11

Given the development of short-term residential centers, the education and training of counselors alongside the centers' activities, the Welfare Organization's requirement that drug adjuvants obtain licensure, and the attention paid to the role and importance of social factors and peer groups, evaluating the effectiveness of the chemical dependency counseling course is necessary in order to correct weaknesses and reinforce strengths.
Therefore, we aimed to determine whether the chemical dependency counseling course was effective from the point of view of its beneficiaries (professors and lecturers, managers and supervisors of rehabilitation centers, people in recovery and their families, senior managers of the forum, and graduates).
Methods
Since each stage of training involves needs assessment, goal setting, planning, implementation and evaluation, and also has its own beneficiaries, in this research all of these steps were evaluated from the viewpoint of the beneficiaries of the chemical dependency counseling course.
There are several models for evaluating training courses, but no single established version exists. According to most evaluation models, specific goals, process, outcome and immediate results are essential elements of an evaluation. 12 Therefore, in order to assess the effectiveness of the chemical dependency counseling course, a combination of the beneficiary-satisfaction model and Patrick's model of evaluating effectiveness was used.
According to the beneficiary-satisfaction model, to know the effectiveness of a curriculum one should first ask: effective for whom, and from which perspective? In other words, participants in the course have their own expectations and demands, while organizations and their managers pursue specific purposes in education. Similarly, the teacher, the designer of the program and other interested parties could be considered, and their goals in participating in the course or helping in its implementation could be studied.
Therefore, the main point in the stakeholder approach is that the best method of assessing effectiveness is to specify the results for the different partners; the importance, value and benefits of the training course should necessarily be surveyed from their viewpoints. By partner we mean a person who has an interest in successful educational efforts, will be affected by the curriculum, and expects the curriculum to have distinct results for them or their organization. Such an approach provides mutual accountability and responsibility, so that no single factor alone is responsible for the success or failure of the course. 13 The main beneficiaries of the chemical dependency counseling course are graduates, teachers, managers and supervisors of drug rehabilitation centers, people in recovery and their families, and the senior managers of the forum.
In Patrick's model, there are four levels for evaluating education. Reaction (first level): this level measures participants' feelings about the training. Surveys at this level seek the participants' feedback about the training, syllabus, assignments, equipment and training content.
Learning (second level): this level determines the skills, techniques and facts that participants have learned in the course, and can be measured before and after the training.
Behavior (third level): this level measures how, and to what extent, the behavior of participants changes as a result of the training course. These changes can be measured by assessment in the real work environment.
Results (fourth level): this level measures how far the goals directly linked to the organization have been achieved. Measurement at this level is very difficult; it can be approached by reviewing evidence such as reductions in costs and re-work, and increases in product quality, profit and sales. 14 Overall, according to Table 1, the effectiveness of this course was evaluated using a survey method and the combination of the two models explained above.
Population and sample
The studied population consisted of the beneficiaries of the chemical dependency counseling course: graduates, professors, teachers of short-term residential centers, senior managers of the forum, and people in recovery and their families. The population and sample sizes of the survey are shown in Table 2.
The surveyed population in this study included all managers of Tehran's rehabilitation centers, all teachers of the chemical dependency counseling course, and all senior managers of the forum, because of their small population sizes. An appropriate number of graduates was selected using Morgan's sampling table. 15 A list of graduates of the first and fourth chemical dependency counseling courses and their phone numbers was then prepared, and subjects were randomly selected in equal numbers from each course. In the case of people in recovery and their families, 68 individuals were randomly selected because the full population was not accessible.
Data gathering tools
Each of the questionnaires used in this research assessed the course from the viewpoint of one group of beneficiaries, and ultimately the effectiveness of the chemical dependency counseling course was examined. The reliability of the questionnaires was evaluated using Cronbach's alpha. The questionnaires were as follows:
1-Evaluation from graduates' viewpoint:
In this questionnaire, questions 1-5 concerned the selection process, questions 6-11 the content of the course, questions 12-16 the course plan, questions 17-23 the implementation, and questions 24-27 the end-of-term evaluation. Its reliability was 86%.
2-Evaluation from teachers' viewpoint:
In this questionnaire, questions 1 and 2 concerned needs assessment, questions 3-5 goal setting, questions 6-9 planning, questions 10-12 implementation, and questions 13 and 14 the end-of-term evaluation. Its reliability was 84%.
3-Evaluation from rehabilitation center managers' viewpoint:
This questionnaire contained 13 questions that assessed the results of the course with respect to its educational content. Its reliability was 79%.
4-Evaluation from people in recovery and their families' viewpoint:
This questionnaire contained 11 questions that were formulated according to the goals of the course and its educational content. Its reliability was 80%.
5-Questionnaire for the senior managers of the Rebirth forum:
This questionnaire had four two-part questions. In the first part, the quality, achievements and evaluation of the course were examined using a Likert scale; in the second part, managers were asked to give their views on the same topics descriptively. Its reliability was 87%.
Results
Using a one-sample t-test, we studied the effectiveness of the chemical dependency counseling course from the viewpoints of graduates, managers of rehabilitation centers, teachers of the course, senior managers of the forum, and people in recovery and their families. Table 3 shows significant differences between the mean scores for the selection, content and performance processes (and the total score) and the test value of 3, but no significant difference for the evaluation process. Therefore, from the viewpoint of graduates, all levels of the chemical dependency counseling course were effective except evaluation. Table 4 shows no significant difference between the test value (3) and the mean scores for the needs assessment, goal setting, planning and evaluation processes (or the total score), but a significant difference for the performance process. Accordingly, from the viewpoint of teachers, the chemical dependency counseling course was effective only at the performance level.
Since the calculated t with 11 degrees of freedom exceeds the critical value of t at the 0.05 level (1.96), it can be said with 95% confidence that, from the viewpoint of the managers of the rehabilitation centers, the chemical dependency counseling course was effective [mean 50.1 ± 7.19 (mean ± SD), test value = 3, t = 3.80, difference from mean = 11.1, P = 0.013]. From the viewpoint of people in recovery and their families, the course was effective (mean 41.5 ± 44.4, test value = 3, t = 15.7, difference from mean = 8.5, P < 0.001). From the viewpoint of the senior managers of the Rebirth Society, the course was also effective (mean 15.4 ± 2.5, test value = 3, t = 3.02, difference from mean = 3.4, P = 0.039).
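For reference, a minimal sketch of the one-sample t-test applied throughout this section (test value 3 on a five-point Likert scale); the scores below are hypothetical, not the study data:

from scipy import stats

# hypothetical per-respondent mean Likert scores
scores = [3.4, 3.8, 4.1, 3.2, 3.9, 4.0, 3.6, 3.7, 3.5, 4.2, 3.3, 3.8]
t, p = stats.ttest_1samp(scores, popmean=3)
print(f"t = {t:.2f}, p = {p:.4f}")   # effective if p < 0.05 and the mean exceeds 3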
Discussion
Our findings showed that, from the viewpoint of graduates, all levels of the chemical dependency counseling course (selection, content, planning and performance) were effective except evaluation. Teachers, however, had a different attitude and considered the needs assessment, goal setting, planning and evaluation levels of the course ineffective. It seems that, given their scientific level and expectations, teachers expected better conditions for the performance of the course; in the next part we present some of their suggestions for improving its quality and quantity. Furthermore, from the viewpoints of the managers of the rehabilitation centers, people in recovery and their families, and the senior managers of the Rebirth Society, the chemical dependency counseling course was effective.
Suggestions: It seems that the chemical dependency counseling course was effective and led to changes in attitudes, a combination of knowledge and experience, and ultimately increased counseling efficiency. Nevertheless, based on the opinions of students, teachers and senior managers of the forum, and on the researchers' experience, some suggestions are made below to increase the effectiveness and quality of the course.
It is necessary to update the resources taught in this course so that they meet new counseling skills and needs. Therefore, there should be a curriculum group made up of experts and professors specializing in this area.
Bias in selection should be completely controlled, and proper scientific tests and interviews should be conducted. All of this requires a comprehensive system of evaluation and education. The responsible unit of the forum should have the authority required to ensure that appropriate persons are able to participate in the course.
Projects and theses are very important in the curriculum because they provide a context for the counselor's practical engagement with one of the important issues of addiction. Accordingly, a framework should be provided for learners to engage with and explore a problem, so that they acquire the skills needed to solve the problems they may encounter in the future.
During the training course, enforceable supervision by the teachers is necessary, and reports should be prepared regularly so that counselors are able to implement what they have learned. Certification was one of the things learners complained about because of delays in its delivery; provision must be made for the timely delivery of certificates.
Some learners regard this course as an academic degree, so before each course begins their expectations should be adjusted: they should understand that the goals of the course are familiarization with non-medical treatment protocols, gaining experience, and increasing efficiency.
As the content of the course is specialized, more care should be taken in the selection of the learners, because the level of education (a diploma being the necessary minimum) facilitates the transfer of concepts from teachers to learners and makes learning smoother and deeper.
Learners should participate in this course with enthusiasm and motivation in order to obtain maximum effectiveness. A mechanism for creating and enhancing motivation among learners should be designed.
In the assessment of learners, not only the theory taught but also skills and abilities, as well as changes in attitudes and behaviors, should be considered.
An appropriate time for this training should be chosen, and teachers must abide by the principles of adult education. If class times can be set such that learners can participate more easily, class performance and learners' eagerness will increase.
Conflict of Interest: The Authors have no conflict of interest. | 2016-08-09T08:50:54.084Z | 2012-07-14T00:00:00.000 | {
"year": 2012,
"sha1": "b83c39cee5e49c7d051f8e7414e47b259899086b",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "b83c39cee5e49c7d051f8e7414e47b259899086b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
118738726 | pes2o/s2orc | v3-fos-license | Ab initio study of $Z_2$ topological phases in perovskite (111) $(\text{SrTiO}_3)_7/(\text{SrIrO}_3)_2$ and $(\text{KTaO}_3)_7/(\text{KPtO}_3)_2$ multilayers
Honeycomb structures formed by the growth of perovskite 5d transition metal oxide heterostructures along the (111) direction in $t_{2g}^5$ configuration can give rise to topological ground states characterized by a topological index $\nu$=1. Using a combination of a tight binding model and ab initio calculations we study the multilayers $(\text{SrTiO}_3)_7/(\text{SrIrO}_3)_2$ and $(\text{KTaO}_3)_7/(\text{KPtO}_3)_2$ as a function of parity asymmetry, on-site interaction and uniaxial strain and determine the nature and evolution of the gap. $(\text{SrTiO}_3)_7/(\text{SrIrO}_3)_2$ is found to be a topological semimetal. $(\text{KTaO}_3)_7/(\text{KPtO}_3)_2$ is a topological Mott insulator that can be driven to a trivial insulating phase by an external electric field.
Honeycomb structures formed by the growth of perovskite 5d transition metal oxide heterostructures along the (111) direction in a $t_{2g}^5$ configuration can give rise to topological ground states characterized by a topological index ν = 1, as found in Nature Commun. 2, 596 (2011). Using a combination of a tight-binding model and ab initio calculations, we study the multilayers $(\text{SrTiO}_3)_7/(\text{SrIrO}_3)_2$ and $(\text{KTaO}_3)_7/(\text{KPtO}_3)_2$ as a function of parity asymmetry, on-site interaction and uniaxial strain, and determine the nature and evolution of the gap. According to our DFT calculations, $(\text{SrTiO}_3)_7/(\text{SrIrO}_3)_2$ is found to be a topological semimetal, whereas $(\text{KTaO}_3)_7/(\text{KPtO}_3)_2$ presents a topological insulating phase that can be understood as the high-U limit of the former and that can be driven to a trivial insulating phase by a perpendicular external electric field.
I. INTRODUCTION
Topological insulators (TI) 1,2 are a class of materials which show a gapped bulk spectrum but gapless surface states. The topological nature of the surface states protects them against perturbations and backscattering. [3][4][5][6][7][8][9] In addition, the surface of a d-dimensional TI is such that the effective Hamiltonian defined on it cannot be represented by the Hamiltonian of a (d-1)-dimensional material with the same symmetries, so the physics of the (d-1)-dimensional surface of a topological insulator may show completely different behavior from that of a conventional (d-1)-dimensional material. Surface states 10 can be understood in terms of solitonic states which interpolate between two topologically different vacua, the topological vacuum of the TI and the trivial vacuum of a conventional insulator or empty space.
The ground state of a system can be classified by a certain topological number, 11,12 depending on its dimensionality and the symmetries present, which define its topological classification. 2,13 Two-dimensional single-particle Hamiltonians with time-reversal (TR) invariance are classified in a $Z_2$ (ν = 0, 1) topological class. 12 Two-dimensional TR systems with nontrivial topological index (ν = 1) show the so-called quantum spin Hall effect (QSHE), which is characterized by a non-vanishing spin Chern number 14 and a helical edge current. 15 This state has been theoretically predicted and experimentally confirmed in HgTe quantum wells, [16][17][18] as well as predicted in several materials such as two-dimensional Si and Ge 19 and transition metal oxide (TMO) heterostructures. 20 All these systems have in common a honeycomb lattice structure with two atoms (A, B) in the atomic basis. In that situation, it is tempting to think that the effective Hamiltonian at certain k-points could take the form of a Dirac Hamiltonian: the components of the spinor would be some combination of orbitals localized on the A or B atoms, whereas the coupling would take place via the off-diagonal elements, owing to the bipartite geometry of the lattice. The simplest and best known example is graphene, where the Hamiltonian is a Dirac equation at two inequivalent points K and K'. At half filling, graphene with TR and inversion symmetry (IS) has ν = 1, 21 so a term which does not break those symmetries and opens a gap in the whole Brillouin zone would give rise to the QSHE state. A gap term that preserves IS can arise in the graphene Hamiltonian through spin-orbit coupling (SOC); however, it is known that the gap opened this way is too small. 22 In contrast, a sublattice asymmetry, which breaks IS, will open a trivial gap, as in BN. 23,24 How a honeycomb structure can be constructed from a perovskite unit cell can be seen in Figs. 1a and 1b: a perovskite bilayer grown along the (111) direction made of an open-shell oxide is sandwiched by an isostructural band-insulating oxide, and the metal atoms of the bilayer form a buckled honeycomb lattice. It has been shown that perovskite (and also pyrochlore) (111) multilayers can develop topological phases, 20,25-30 as well as spin-liquid phases and non-trivial superconducting states. 31 Topological insulating phases have been predicted for various fillings of the d shell; 20 here we will focus on the large-SOC limit (5d electrons) and formal $d^5$ filling. We will study two different multilayers, $(\text{SrTiO}_3)_7/(\text{SrIrO}_3)_2$ and $(\text{KTaO}_3)_7/(\text{KPtO}_3)_2$, and we will focus on the realization of a nontrivial ν = 1 ground state. SrIrO$_3$ ($a_{\mathrm{SrIrO_3}}$ = 3.94 Å) 32 is a correlated metal 33 whose lattice match with SrTiO$_3$ (STO) is close enough ($a_{\mathrm{SrTiO_3}}$ = 3.905 Å) 34 for them to grow epitaxially with standard growth techniques. 35 KPtO$_3$ has not been synthesized (to the best of our knowledge), but our calculations ($a_{\mathrm{KPtO_3}}$ = 4.02 Å) show that a reasonable lattice match with KTaO$_3$ would be possible ($a_{\mathrm{KTaO_3}}$ = 3.98 Å). 36 The first multilayer is an iridate very similar to the well-known Na$_2$IrO$_3$. 37-39 That system presents a layered honeycomb lattice of Ir atoms at $t_{2g}^5$ filling, whereas the present bilayers show a buckled honeycomb lattice; it is predicted to develop the QSHE, although electron correlation would lead to antiferromagnetic order at the edges.
We will study the dependence of the topological ground state on the applied uniaxial strain and the electron-electron interaction, and we will determine a transition between two topological phases in both materials. The work is organized as follows. In Section II we introduce a simple tight-binding (TB) model as in Ref. 20, focusing on the $t_{2g}^5$ case. In Section III we use density functional theory (DFT) calculations to study the evolution of both multilayers with uniaxial strain and on-site Coulomb repulsion, and we determine the ground state of each material. In Section IV we study the stability of the topological phase against TR and IS breaking using both TB and DFT calculations. Finally, in Section V we summarize the results obtained.
II. TIGHT BINDING MODEL
The qualitative behavior of this system can be understood using a simple TB model for the 5d electrons in the TM atoms, as shown in Ref. 20. In Section II A we give the qualitative behavior of the effective Hamiltonian. In Section II B we show numerical calculations of the full model.

Fig. 1 (partial caption): the bilayer is grown along the (111) direction in such a way that the two TM atoms form a honeycomb lattice. The Z atom corresponds to the insulating layer (in our case SrTiO$_3$ or KTaO$_3$) and does not participate in the honeycomb. (c) Scheme of the multilayer considered in the DFT calculations.
A. Full Hamiltonian
The octahedral environment of oxygen atoms surrounding the transition metal atoms splits the d levels into a $t_{2g}$ sextuplet and an $e_g$ quadruplet. Given that the crystal-field gap is larger than the other parameters considered, we retain only the $t_{2g}$ orbitals. The Hamiltonian considered for the $t_{2g}$ levels takes the form $H = H_{SO} + H_t + H_{tri} + H_m$, where $H_{SO}$ is the SOC term, which gives rise to an effective angular momentum $J_{eff} = S - L$ and decouples the $t_{2g}$ levels into a filled j = 3/2 quadruplet and a half-filled j = 1/2 doublet. $H_t$ is the hopping between neighboring atoms via oxygen, which couples the local orbitals. $H_{tri}$ is a local trigonal term 20 which is responsible for opening a gap (as we will see below) without breaking TR and IS. $H_m$ is a term which breaks IS, making the two sublattices nonequivalent and tending to open a trivial gap by decoupling them. In the following discussion we will suppose that this last term is zero, but we will analyze its role in Section IV.
We are interested in two different regimes as a function of the SOC strength: strong and intermediate. We call strong SOC the regime where the j = 3/2 and j = 1/2 subsets are completely decoupled, so that there is a trivial gap between them. We will refer to an intermediate regime if the two subsets are coupled by the hopping. The key point for understanding the topological character of the calculations is that a $t_{2g}^5$ configuration can be adiabatically connected from the strong to the intermediate regime without closing the gap. The argument is the following: beginning in the strong SOC regime, it is expected that a four-band effective model will be a good approximation. In this regime the mathematical structure of the effective Hamiltonian turns out to be equivalent to graphene. The trigonal term is responsible for opening a gap Δ via a third-order process in perturbation theory [Eq. (2)], in which $\lambda_{tri}$ is the trigonal coupling, t is the hopping parameter and α the SOC strength. It can be checked by symmetry considerations that the restriction of the matrix representations leads to this term as the first non-vanishing contribution in perturbation theory. Eq. (2) has been checked by a logarithmic fitting of numerical calculations of the full model. This term will open a gap at the K point while conserving TR and IS, thus realizing a ν = 1 ground state. 21 As SOC decreases, the gap becomes larger while the system evolves from the strong to the intermediate regime, so the intermediate regime is expected to be a topological configuration 20 with a non-vanishing gap. Note that even though perturbation theory only holds in the strong SOC regime, the increase of the gap as the system goes to the intermediate regime suggests that the $t_{2g}^5$ configuration will always remain gapped. This argument is checked by the numerical calculations shown below.
B. Results from tight binding calculations
In Fig. 2 we show the results of a calculation using the TB Hamiltonian proposed above. Figure 2a shows the bulk band structure for strong and intermediate SOC strength α. If a non-vanishing trigonal term is included, it opens a gap at the Dirac points of the band structure, generating topologically non-trivial configurations. This can be seen clearly in Fig. 2b, where the band structure close to the Fermi level in the vicinity of the K point is shown.
The topological character of each configuration is defined by the ν topological invariant, which for a band b in an IS Hamiltonian can be calculated from the parity eigenvalues at the time-reversal invariant momenta as 21 $(-1)^{\nu_b} = \prod_{i=1}^{4}\xi_b(\Gamma_i)$, where $\xi_b(\Gamma_i) = \pm 1$ and the product runs over the four time-reversal invariant momenta (TRIM) $\Gamma_i$. The full invariant of a configuration is the product of the last equation over all the occupied bands, $(-1)^{\nu} = \prod_{b\,\mathrm{occ}}(-1)^{\nu_b}$. For a $t_{2g}^5$ filling the first unfilled band always has $\nu_b$ = 1, but the difference between strong and intermediate SOC is the $\nu_b$ invariant of the last filled band. For strong SOC the j = 1/2 and j = 3/2 subsets are completely decoupled, so a $t_{2g}^4$ configuration would be topologically trivial, the invariant of the fifth band being $\nu_b$ = 1. However, when SOC is not sizable, the bandwidths are large enough to couple the j = 1/2 and j = 3/2 levels, so that the $t_{2g}^4$ filling is a topological configuration. In both cases the $t_{2g}^5$ filling is topologically non-trivial. 20 We will see below, using DFT calculations, that the systems under study (TMOs with 5d electrons in a perovskite bilayer structure) are in this intermediate SOC regime. Figure 2b shows the bulk band structure, focusing now on the j = 1/2 bands, for negative, zero and positive trigonal terms. The left numbers are the $\nu_b$ invariant of each band, while the right numbers are the sum of the invariants of that band and the bands below it. No matter what the sign of $\lambda_{tri}$ is, the configuration becomes non-trivial, the role of the trigonal term being to open a gap at the K point around the Fermi level. In the DFT calculations below, it will be seen that a change of sign of the trigonal term can be understood as a topological transition between a low-U topological insulating phase (LUTI) and a high-U topological insulator (HUTI), across a boundary where the system behaves as a topological semimetal (TSM).
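A minimal sketch of this parity criterion; the parity eigenvalues below are hypothetical and only illustrate the bookkeeping, not the actual values for the multilayers:

import numpy as np

# rows: occupied bands; columns: the four TRIM (Gamma and the three M points)
parities = np.array([
    [+1, -1, -1, -1],    # product -1 -> nu_b = 1
    [+1, +1, -1, -1],    # product +1 -> nu_b = 0
])
nu_b = (np.prod(parities, axis=1) == -1).astype(int)
nu = nu_b.sum() % 2      # total Z2 invariant of the occupied set
print(nu_b, nu)          # [1 0] 1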
III. AB INITIO CALCULATIONS

A. Computational procedures
Ab initio electronic structure calculations have been performed using the all-electron full-potential code WIEN2k. 40 The unit cell chosen is shown in Fig. 1c. It consists of 9 perovskite layers grown along the (111) direction: 2 layers of SrIrO$_3$ (KPtO$_3$), which form the honeycomb, and 7 layers of SrTiO$_3$ (KTaO$_3$), which isolate one honeycomb from the next.
For the different off-plane lattice parameters along the (111) direction of the perovskite considered, the structure was relaxed using the full symmetry of the original cell. The exchange-correlation term is parametrized, depending on the case, using the generalized gradient approximation (GGA) in the Perdew-Burke-Ernzerhof 41 scheme, the local density approximation+U (LDA+U) in the so-called "fully localized limit", 42 and the Tran-Blaha modified Becke-Johnson (TB-mBJ) potential. 43 The calculations were performed with a k-mesh of 7 × 7 × 1 and a value of $R_{mt}K_{max}$ = 7.0. SOC was introduced in a second-variational manner using the scalar relativistic approximation. 44 We have already discussed that the systems chosen to study a $d^5$ filling in a honeycomb lattice with substantial SOC are the multilayers $(\text{SrTiO}_3)_7/(\text{SrIrO}_3)_2$ and $(\text{KTaO}_3)_7/(\text{KPtO}_3)_2$, formed by perovskites grown along the (111) direction.
First, the structure is optimized for different c lattice parameters, respecting IS, using GGA and without SOC. This means that essentially only the interplanar distances in the multilayers are relaxed. For the energy minimum, the band structure is calculated with SOC turned on.
The band structure calculated with the three exchange-correlation schemes (GGA, LDA+U and TB-mBJ) develops the same structure. The $\nu_b$ topological invariant of each band is calculated as in the TB model, 21 the total topological invariant being the sum of the $\nu_b$ invariants over all the occupied bands (mod 2). Figure 3 shows the band structure calculated with TB-mBJ as well as the $\nu_b$ invariants, also obtained ab initio. The difference in the curvature of the bands with respect to the result obtained with the TB model is due to the existence of bands near the bottom of the j = 3/2 $t_{2g}$ quadruplet which are not considered in the TB Hamiltonian. Each band is doubly degenerate due to the combination of TR and IS. At the optimized c, the gap between the last filled and the first unfilled band is located at the corner of the Brillouin zone (K point). At low c (energetically unstable but attainable via uniaxial compression), GGA predicts that the system can become a metal by closing an indirect gap between the K and M points; however, TB-mBJ calculations predict a direct gap localized at the K point. For all the calculations the ground state has ν = 1 and thus develops a topological phase. The last filled band has $\nu_b$ = 0, so by comparison with the TB results the system corresponds to the intermediate SOC limit, in which the $J_{eff}$ = 1/2 and $J_{eff}$ = 3/2 states are not completely decoupled. 39 If a 5d electron system like this is in the intermediate SOC limit, it is hard to imagine how one could build a TMO heterostructure closer to the strong SOC limit (the only simple solution would be to weaken the hopping between the TM atoms somehow to increase the α/t ratio). The first unfilled band has $\nu_b$ = 1, as expected, since a $t_{2g}^6$ configuration will be a trivial insulator with a gap opened by the octahedral crystal field.
A trigonal field is present in the DFT calculations mainly in two ways. On the one hand, strain along the z direction varies the distance to the first neighbors in that direction, so that the electronic repulsion varies as well. We define this deformation as $\epsilon_{zz} = (c - c_0)/c_0$, where $c_0$ is the off-plane lattice constant with the lowest energy. On the other hand, an on-site Coulomb repulsion defined on the TM by using the LDA+U method has precisely the symmetry of the bilayer, i.e. trigonal symmetry, so varying the on-site potential in some way (always preserving parity symmetry) will have the effect of a trigonal term in the Hamiltonian (see A.3 for further details).
According to this, it is expected that, in a certain regime, variations in $\epsilon_{zz}$ can be compensated by tuning U. In this regime, similarly to what we discussed above, the system will develop a transition between two topological phases: a LUTI and a HUTI. At even higher U the system will show magnetic order. We will address this point later; for now we focus on the non-magnetic (NM) phase. In order to study the phase diagram defined by the parameters $\epsilon_{zz}$ and U, we perform calculations keeping one of them constant and determine how the gap closes as the other parameter varies, keeping track of the parities on both sides of the transition.
C. Evolution of the gap with uniaxial strain
Here we discuss the behavior of the gap at the K point with $\epsilon_{zz}$ for various U and J values. First we analyze the behavior of both materials in parameter space, showing their similarities; we finish by characterizing the actual position of the ground state of each system in the general phase diagram.
First we focus on the $(\text{SrTiO}_3)_7/(\text{SrIrO}_3)_2$ case. As shown in Figure 4a, the gap closes as a function of c, so uniaxial strain can drive the system between two insulating phases just as $\lambda_{tri}$ does in the TB model. However, for high U (see below the discussion on plausible U values) the transition point disappears (two such cases are plotted in Fig. 4a). For low U (see Figs. 4c,d), $\epsilon_{zz}$ can drive the system from a positive trigonal term to a negative one. This means that uniaxial strain can change the sign of the effective trigonal term of the Hamiltonian. However, for high U (Fig. 4a), strain is not capable of changing the trigonal field, so the system remains in the same topological phase for every $\epsilon_{zz}$. For the calculations using the TB-mBJ scheme (we will see below to what effective U this situation corresponds), the transition point takes place almost at $\epsilon_{zz}$ = 0, so based on this scheme the Ir-based multilayer would be classified rather as a TSM than as a TI. Now we focus on the $(\text{KTaO}_3)_7/(\text{KPtO}_3)_2$ system (see Figs. 4b,e,f). For the GGA calculations the transition with $\epsilon_{zz}$ disappears, the system always remaining in the HUTI phase. If the system is calculated using LDA+U with negative U (circles in Fig. 4b), the behavior is similar to the previous system and the transition point across a TSM reappears. For more realistic values of U and J (stars in Fig. 4b) the transition point disappears again. Thus, the present system (Pt-based), though isoelectronic and isostructural, can be understood as the strong-U limit of the previous one (Ir-based). The band gap is larger, which is a sought-after feature in these TIs, but not large enough to make the system suitable for room-temperature applications. We observe that changing J does not alter the overall picture; it just displaces the phase diagram slightly in U-space.
D. Evolution of the gap with U at constant $\epsilon_{zz}$
The Hamiltonian felt by the electrons depends also on the on-site Coulomb interaction between them. If the variation of the term of the Hamiltonian that controls this interaction takes place only in the TM atoms, the varying term will have the same local symmetry as the TM atoms, i.e. trigonal symmetry. Thus, a variation in U is expected to have an effect similar to that of $\lambda_{tri}$ in the TB model, so the gap can also be tuned by the on-site interaction. Figures 4c,d,e,f show the behavior of the gap for both systems in an LDA+U scheme with J = 0.27 (realistic) and 0.95 (large) eV as the parameter U is varied.
The slow increase of the gap with U suggests that the gap opened is not that of a usual Mott insulator and is reminiscent rather of the slow increase obtained in the TB model. In fact, the calculation of the ν invariant again shows a topological phase on both sides of the transition.
Both systems develop a transition from the LUTI to the HUTI as U increases. However, whereas the transition point of $(\text{SrTiO}_3)_7/(\text{SrIrO}_3)_2$ is at positive U, for $(\text{KTaO}_3)_7/(\text{KPtO}_3)_2$ the transition appears at negative U, so for all reasonable U values the latter system will be in the HUTI phase.
TB-mBJ calculations have been proven to give accurate band gaps in various systems, 46-49 including s-p semiconductors, correlated insulators and d-electron systems; however, the method might give an inaccurate position for semicore d orbitals 47 and overestimate the magnetic moments of ferromagnetic metals. 48 For $(\text{SrTiO}_3)_7/(\text{SrIrO}_3)_2$ it is possible to use the transition between the LUTI and the HUTI within the TB-mBJ scheme to estimate which value of U should be used in an LDA+U calculation for these 5d systems to reproduce the TB-mBJ result. The actual value of U needed for a correct prediction of the properties under study is often a matter of contention when dealing with insulating oxides containing 5d TMs. 37 Other works use a value of U on the order of 2 eV; 50,51 however, due to the well-known property dependence of the value of U, 52 it is still unclear which is the correct value for studying these topological phases. Therefore, our result can serve as a reference for other ab initio based phase diagrams for iridates where topological phases have been predicted as a function of U. 45 Moreover, we study this system over a broad range of U values and using different exchange-correlation schemes to provide a broad picture of the system, rather than using a fixed U value that would yield a more restricted view of the problem.
For the Pt-based multilayer we can also consider the hypothetical effective value $U_{eff} = U - J$ for which the gap would close at negative U: for the two values of J we obtain $U_{eff}$ = -1.42 eV for J = 0.27 eV and $U_{eff}$ = -2.1 eV for J = 0.95 eV. It is thus clearly seen that the gap of these systems does not depend only on the parameter U - J, but also has a strong separate dependence on J, both in the realistic regime of the Ir-based multilayer and in the negative-U regime of the Pt-based multilayer.
Recently, topological phases dominated by interactions, called topological Mott insulating phases, have been found theoretically, 53,54 this term being employed for physically different phenomena. In the HUTI phase, the topological gap of the systems is enhanced by increasing the U parameter, so the phase appears robust against electron-electron interactions. In the same fashion as a usual band insulator can be connected to a Mott insulating phase, 55,56 this robustness suggests that the HUTI phase might be adiabatically connected to a $Z_2$ non-trivial interacting topological phase. 57,58 Whether this is an artifact of the DFT method or an acceptable mean-field approach to a many-body problem is something that can only be clarified by experiments.
IV. STABILITY OF THE TOPOLOGICAL PHASE
So far we have considered a system with both TR and IS. However, given that the $Z_2$ classification is valid only for TR-invariant systems, it is necessary to determine whether the ground state possesses this symmetry. IS breaking could destroy the topological phase by opening a trivial gap that decouples the two sublattices, as would happen if the honeycomb were sandwiched between two different materials. 20 TR symmetry breaking would be realized by a magnetic ground state, whereas IS breaking would be realized by a structural instability. In this Section we study the two possibilities and conclude that both systems are structurally stable and, according to TB-mBJ, remain NM in their ground states.
A. Stability of time reversal symmetry
Increasing electronic interactions will drive the NM ground state to a magnetic trivial Mott insulating phase at very high U. For a magnetic d5 S=1/2 localized-electron system, Goodenough's rules 59 predict an antiferromagnetic (AF) exchange between the two sublattices, which would create an AF ground state breaking both TR and IS. To check this, we have performed DFT calculations within an LDA+U scheme taking J_I = 0.95 eV and J_II = 0.27 eV and varying U. For both systems the calculations have been carried out at different U's for the NM and AF configurations at the two J's. In both cases the ground state is AF at high U, and the sublattice magnetization increases with U. Figures 5a and 5b show the evolution of the sublattice magnetic moment for both compounds.
A ferromagnetic (FM) phase has also been analyzed and found to be the least favored. In the Ir compound a FM phase can be stabilized but always has higher energy than the NM or AF ones; in the Pt compound a FM phase could not be stabilized for any of the U values considered. The true ground state would thus be a TI phase in the Pt-based multilayer (or a TSM in the Ir-based multilayer according to TB-mBJ), depending on the value of U employed. If the correct value of U is larger than the critical value (of about 3 eV, which is large according to our previous discussions), the topological Z2 phase will break down and the Kramers protection of the gapless edge states will disappear; whether the edge states would then become gapped or not requires further study. Experimental measurement of the magnetic moment of the ground state of these bilayers would therefore shed light on the correct value of U to be used in these and other similar compounds. In the Pt-based multilayer the sublattice magnetization depends almost only on U_eff, as can be checked from the shifting of the curves (Fig. 5b); in the Ir-based multilayer, however, there is a stronger dependence on J (Fig. 5a). Again, reducing the evolution to the single parameter U_eff = U − J is discouraged for these systems according to our results. The non-trivial effect of J 60 has also been observed in several compounds such as multi-band materials. 61,62 To summarize, we have obtained the magnetization of both compounds as a function of U, showing that both compounds retain a NM ground state up to a certain U (larger than the value of U that would be equivalent to the TB-mBJ calculations), beyond which the system becomes AF (see Fig. 5c).
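The qualitative behavior in Figs. 5a-5c, namely a NM solution below a critical interaction strength and a sublattice moment that then grows monotonically with U, can be reproduced with a much simpler toy model. The sketch below is our own illustration (a self-consistent mean-field, single-band Hubbard model at half filling on a honeycomb lattice, in units of the NN hopping t), not the LDA+U machinery used here:

```python
import numpy as np

# Self-consistent AF mean field for the half-filled honeycomb Hubbard model.
# m = <n_A,up> - <n_A,dn> obeys m = <Delta / sqrt(Delta^2 + |t f(k)|^2)>_BZ
# with Delta = U*m/2 and f(k) the NN structure factor (t = 1 here).
deltas = [np.array([1.0, 0.0]),
          np.array([-0.5,  np.sqrt(3) / 2]),
          np.array([-0.5, -np.sqrt(3) / 2])]          # NN bond vectors
b1 = 2 * np.pi * np.array([1 / 3,  1 / np.sqrt(3)])   # reciprocal vectors
b2 = 2 * np.pi * np.array([1 / 3, -1 / np.sqrt(3)])

def sublattice_moment(U, nk=48, tol=1e-6):
    # half-step shift keeps the grid off the Dirac points
    ks = [((i + 0.5) / nk) * b1 + ((j + 0.5) / nk) * b2
          for i in range(nk) for j in range(nk)]
    absf = np.array([abs(sum(np.exp(1j * (k @ d)) for d in deltas))
                     for k in ks])
    m = 0.5                                           # initial guess
    for _ in range(1000):
        gap = U * m / 2.0
        m_new = float(np.mean(gap / np.sqrt(gap ** 2 + absf ** 2)))
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m_new

for U in (1.0, 2.0, 2.5, 3.0, 4.0, 6.0):
    print(f"U = {U:3.1f} t  ->  m = {sublattice_moment(U):.3f}")
# m vanishes below U_c ~ 2.2 t and grows monotonically above it.
```

In this toy the mean-field critical value sits near 2.2 t; the real multilayers involve the full j_eff = 1/2 multiplet structure, the Hund's J, and LDA+U double-counting choices, which shift the critical U, but the monotonic growth of the moment above a threshold is the same qualitative behavior seen in Fig. 5.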
B. Stability of inversion symmetry
The topological properties of this system rely on both TR and IS. Tight-binding calculations predicted that parity-breaking terms with an associated energy of the order of magnitude of the topological gap could eventually drive the topological phase into a trivial one.
First we discuss the (SrTiO3)7/(SrIrO3)2 system. The simplest IS breaking could be driven by a structural instability. To study the structural stability, we displaced one of the TM atoms from its symmetric position and then relaxed the structure; the structure returned to the symmetric configuration. However, because this system sits at the transition point between the two topological phases, any external perturbation (such as a perpendicular electric field) could break inversion symmetry. Owing to the (almost) vanishing gap of the relaxed structure, this system should be classified as a topological semimetal rather than a topological insulator. We now proceed to (KTaO3)7/(KPtO3)2. The first difference with respect to the previous system is that here the structure is well immersed in the HUTI phase. A large IS breaking is expected to drive the system to a trivial phase in which the sublattices A and B are decoupled. However, to change its topological class the system has to cross a critical point where the gap vanishes at some point of the Brillouin zone. Accordingly, the expected behavior is that the gap closes and reopens as the sublattice asymmetry grows, taking the system from the original topological phase to a trivial insulating phase.
To check this, we compare the results obtained from the TB model and from DFT calculations. In the TB case we introduce a new parameter: a diagonal on-site energy equal to +m on A atoms and −m on B atoms. This term breaks IS, and its magnitude quantifies the amount of breaking. When IS is broken the index ν cannot be calculated from the parities at the TRIM's; instead, we probe the topological character by searching for gapless edge states. For that purpose we calculate the band structure of a zig-zag ribbon 40 dimers wide with α = t and λ_tri = −t. The calculations were also carried out for an armchair ribbon, where the same behavior is found (not shown), although the mixing of valleys makes the band structure harder to interpret. The color of the bands indicates the expectation value of the position across the ribbon width of the eigenfunction corresponding to each eigenvalue; we checked that the edge states are localized at the two edges (red and blue), whereas the remaining states are bulk states (green). According to the TB results shown in Figs. 6a and 6b, for small m the system remains a topological insulator, although the introduction of m weakens the gap. As m increases the system reaches a critical point where the gap vanishes, and if m increases further the gap reopens but the edge states become gapped, so the system is then in a trivial insulating phase.
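The close-and-reopen behavior of the bulk gap under a staggered mass is generic for this class of models. As a minimal illustration (a Kane-Mele-type toy with hypothetical parameter values, not the four-band j_eff = 1/2 model used above), the sketch below builds the Bloch Hamiltonian of a honeycomb lattice with intrinsic SOC lam and on-site energies ±m, and tracks the gap at the K point:

```python
import numpy as np

t, lam = 1.0, 0.05                                 # NN hopping and SOC (toy values)
deltas = [np.array([1.0, 0.0]),
          np.array([-0.5,  np.sqrt(3) / 2]),
          np.array([-0.5, -np.sqrt(3) / 2])]       # NN vectors (bond length 1)
a1 = np.array([1.5,  np.sqrt(3) / 2])
a2 = np.array([1.5, -np.sqrt(3) / 2])
nnn = [a1, a2, a2 - a1]                            # one triangle of NNN vectors

def gap_at(k, m):
    f = sum(np.exp(1j * (k @ d)) for d in deltas)  # NN hopping structure factor
    g = sum(np.sin(k @ d) for d in nnn)            # Haldane-type SOC factor
    gaps = []
    for s in (+1, -1):                             # spin-up / spin-down blocks
        diag = m + 2 * s * lam * g                 # staggered mass + SOC mass
        h = np.array([[diag, -t * f],
                      [-t * np.conj(f), -diag]])
        e = np.linalg.eigvalsh(h)
        gaps.append(e[1] - e[0])
    return min(gaps)

K = np.array([2 * np.pi / 3, 2 * np.pi / (3 * np.sqrt(3))])
for m in np.linspace(0.0, 0.2, 11):
    print(f"m = {m:5.3f}   gap at K = {gap_at(K, m):.4f}")
# In this convention the gap closes at m = sqrt(3)*lam ~ 0.087 and reopens
# into a trivial phase, mirroring the TB behavior of Figs. 6a and 6b.
```

In the full multilayer model the critical asymmetry is instead set by the (small) topological gap, which is why an external perturbation can destroy the phase so easily.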
To model the symmetry breaking in the DFT calculations we move one of the Pt atoms along the z direction; as the displacement from the original position increases, IS is increasingly broken. We show the four eigenvalues closest to the Fermi level at the K point for the DFT calculations (Fig. 6c) and for the TB model (Fig. 6d). For the symmetric structure (∆z = 0), the combination of TR and IS guarantees that each eigenvalue is twofold degenerate. Once the atom is moved, the degeneracy is lifted and the eigenvalues evolve with the IS-breaking parameter. The analogy between the two calculations suggests that in the DFT calculation, once the gap reopens, the new state is also a trivial insulator. The dashed lines in Fig. 6c correspond to the eigenvalues at the K point for the structure fully relaxed while allowing IS breaking. Comparing with the curves obtained for the evolution of the eigenvalues, we observe that the relaxed structure sits in a slightly asymmetric configuration but remains in the HUTI phase.
Because the topological state depends on IS, tuning this behavior would allow one to build a device, formed by a perovskite heterostructure, that can be driven from a topological to a trivial phase by applying a perpendicular electric field. The device would consist of the TM honeycomb lattice sandwiched between layers of the same insulating (111) perovskite from above and below; in this configuration the system is in the HUTI phase. A perpendicular electric field would then break the sublattice symmetry further, inducing a mass term in the Hamiltonian proportional to the applied field. By modifying the value of the electric field it would be possible to drive the system from the topological phase (E = 0) to the trivial phase (high E). This could be exploited in nanoelectronic and spintronic devices. [63][64][65][66] The sublattice asymmetry needed to drive the transition is of the same order of magnitude as the topological gap, as can be checked in Fig. 6a.
V. SUMMARY
We have studied the gap evolution in the t2g^5 perovskite multilayers (SrTiO3)7/(SrIrO3)2 and (KTaO3)7/(KPtO3)2 as a function of the on-site Coulomb interactions and of uniaxial perpendicular strain, conserving time reversal and inversion symmetry. The behavior of the system has been understood with a simple TB model in which SOC gives rise to an effective j=1/2 four-band Hamiltonian. Uniaxial strain and on-site interactions have been identified with a trigonal term in the TB model whose strength controls the magnitude of the topological gap. The topological invariant ν has been calculated from the parities at the TRIM's in both the TB and DFT calculations. Comparison of the band invariants shows that both of these 5d electron systems stay in the intermediate-SOC regime. The small value of the gap at the K point arises because it is a third-order contribution in perturbation theory. In contrast, sublattice asymmetry contributes at first order, so the topological phase can easily be destroyed by an external perturbation that introduces an IS-breaking term in the Hamiltonian.
(SrTiO3)7/(SrIrO3)2 has been found to be a topological semimetal at the equilibrium value of ε_zz. Comparing TB-mBJ and LDA+U calculations, reasonable agreement is obtained for U in the range 1-2 eV. In (KTaO3)7/(KPtO3)2 a HUTI phase has been found at equilibrium. This last system can be driven from the topological insulating state to a trivial one by switching on a perpendicular electric field, which breaks inversion symmetry. We have also verified that the properties of these systems depend on both U and J, rather than only on the combination U_eff = U − J. Although the smallness of the gap (less than 10 meV according to TB-mBJ) makes the t2g^5 configuration not particularly attractive for technological applications, the simple understanding of the system makes it physically very interesting. The present system can be thought of as an adiabatic deformation of a mathematical realization of four-band graphene with SOC, with an experimentally accessible gap, the roles of S and H_SO now being played by J_eff and H_tri, but with a different physical nature of the topological state.
Spin-orbit term
We want to obtain the form of the SOC restricted to the t2g subspace. Taking the basis (d_yz, d_xz, d_xy), the orbital angular momentum restricted to this subspace is represented by L_i = ħ l_i with (l_i)_jk = i ε_ijk, while the spin is S_i = (ħ/2) σ_i, where σ_i are the usual Pauli matrices acting on spin space. This representation follows the commutation relation [l_i, l_j] = −i ε_ijk l_k, so defining J_i = S_i − L_i the usual commutation relations hold. This change of sign introduces an overall minus sign on the eigenvalues, so the spectrum consists of a j = 3/2 quadruplet with E_{j=3/2} = −α and a j = 1/2 doublet with E_{j=1/2} = 2α.
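As a consistency check on the quoted spectrum (our own algebra, assuming the SOC on this subspace takes the form $H_{SO} = (2\alpha/\hbar^2)\,\mathbf{S}\cdot\mathbf{L}$), note that with $\mathbf{J} = \mathbf{S} - \mathbf{L}$ and $[S_i, L_j] = 0$ one has $J^2 = S^2 + L^2 - 2\,\mathbf{S}\cdot\mathbf{L}$, so

$$E_j = \frac{2\alpha}{\hbar^2}\cdot\frac{S^2 + L^2 - J^2}{2} = \alpha\left[\frac{3}{4} + 2 - j(j+1)\right],$$

which gives $E_{3/2} = \alpha\,(11/4 - 15/4) = -\alpha$ and $E_{1/2} = \alpha\,(11/4 - 3/4) = 2\alpha$, reproducing the values above.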
Hopping term
Each site has three bonds directed along the edges of the perovskite structure. The bonds link the A and B sublattices, so there are three different overlap matrices t_i = ⟨A|H_t|B⟩ depending on the bond direction,

(t_x, t_y, t_z) = t (τ_x, τ_y, τ_z). (A5)

The hopping takes place through the overlap of the t2g orbitals of the TM with the oxygen p orbitals. It can be easily checked that the main contribution gives a matrix form that can be cast as

(τ_x, τ_y, τ_z) = (l_x^2, l_y^2, l_z^2). (A6)
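For reference, evaluating these squares in the representation $(l_i)_{jk} = i\,\epsilon_{ijk}$ introduced above (our own evaluation, with $\hbar = 1$ and the basis ordered as $(d_{yz}, d_{xz}, d_{xy})$) gives simple projectors:

$$\tau_x = \mathrm{diag}(0,1,1),\qquad \tau_y = \mathrm{diag}(1,0,1),\qquad \tau_z = \mathrm{diag}(1,1,0),$$

so a bond along a given cubic axis couples only the two t2g orbitals whose lobes contain that axis (e.g., a bond along z hops $d_{xz}$ and $d_{yz}$ but not $d_{xy}$), as expected for hopping mediated by the oxygen p orbitals.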
Trigonal term
Due to the geometry of the system, a possible term in the Hamiltonian that does not break its spatial symmetries is a trigonal term, which differentiates the perpendicular direction from the in-plane directions. It behaves as an on-site term that mixes the t2g states without breaking the symmetry among them, so its general form in the t2g basis has λ_tri in every off-diagonal entry and zeros on the diagonal (see below). This matrix is diagonal along the (111) direction, with the two perpendicular eigenvalues degenerate, thus preserving the trigonal symmetry. The strain ε_zz enters this term, on the one hand, through the anisotropy of the charge density due to the lack of local spatial inversion and, on the other hand, through the distortion of the cubic perovskite edges (and thus of the octahedral environment) by expansion/contraction along the (111) direction. The absence of local octahedral rotational symmetry is also responsible for the dependence of λ_tri on U: varying the on-site interaction modifies the local electron density, and provided the local trigonal symmetry is conserved but the local inversion is not (it is broken explicitly by the multilayer), this modification of the electronic density acts on the electrons through the Hartree and exchange-correlation terms as a term with those symmetries. In summary, local symmetry forces any spin-independent perturbation, including ε_zz and the spin-independent U-terms, to be recast in the previous form.
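Explicitly, the reconstructed trigonal matrix and its eigensystem (our own check of the symmetry statements above) read

$$H_{\mathrm{tri}} = \lambda_{\mathrm{tri}}\begin{pmatrix} 0 & 1 & 1\\ 1 & 0 & 1\\ 1 & 1 & 0 \end{pmatrix},\qquad H_{\mathrm{tri}}\,\frac{1}{\sqrt{3}}\begin{pmatrix}1\\1\\1\end{pmatrix} = 2\lambda_{\mathrm{tri}}\,\frac{1}{\sqrt{3}}\begin{pmatrix}1\\1\\1\end{pmatrix},$$

while the two combinations orthogonal to $(1,1,1)$ (the states perpendicular to the (111) axis) share the degenerate eigenvalue $-\lambda_{\mathrm{tri}}$: the matrix is diagonal along the (111) direction with degenerate perpendicular eigenvalues, exactly as stated.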
Whether spin-mixing trigonal terms are relevant for the effective model of this system should be clarified in the future. Either way, the agreement between the predictions of the TB model and the DFT calculations, together with the explicitly checked topological phase at high U, suggests that this is a well-behaved effective model for studying the NM phase of this type of system.
Mass term
A term which makes the two sublattices nonequivalent will break inversion symmetry. The minimal term which fulfills this is H_m = m (I_A − I_B), where I_A and I_B are the identity matrices over the A and B sublattices, respectively.
"year": 2013,
"sha1": "8e1081aadeeb3b3ff4fb201ed02bc50f137aa860",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1306.2238",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "8e1081aadeeb3b3ff4fb201ed02bc50f137aa860",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
Water reuse and growth inhibition mechanisms for cultivation of microalga Euglena gracilis
Background: Microalgae can contribute to more than 40% of global primary biomass production and are suitable candidates for various biotechnology applications such as food, feed products, drugs, fuels, and wastewater treatment. However, the primary limitation for large-scale algae production is that algae cultivation requires large amounts of fresh water. To address this issue, scientists around the world are working on ways to reuse the cultivation water so that microalgae can be grown in successive cycles without the need for fresh water.

Results: In this study, we present the results of cultivating microalgae in cultivation water that is purified and reused. Specifically, we purify the cultivation water using an ultrafiltration membrane (UFM) treatment and investigate how this treatment affects: the biomass and biochemical components of the microalgae; the characteristics of microalgae growth inhibitors; the mechanism whereby potential growth inhibitors are secreted (followed using metabolomics analysis); and the effect of activated carbon (AC) treatment and advanced oxidation processes (AOPs) on the removal of growth inhibitors of Euglena gracilis. First, the results show that E. gracilis can be cultivated through only two growth cycles with water that has been filtered and reused; its growth is significantly inhibited when the water is used a third time. Second, as the number of water reuse cycles increases, the Cl− concentration gradually increases in the cultivation water. When the Cl− concentration accumulates to a level fivefold higher than that of the control, the growth of E. gracilis is inhibited because its osmolality tolerance range is exceeded. Interestingly, the osmolality of the reused water can be reduced by replacing NH4Cl with urea as the nitrogen source in the cultivation water. Third, E. gracilis secretes humic acid (HA), produced by the metabolic pathways for valine, leucine, and isoleucine biosynthesis and by linoleic acid metabolism, into the cultivation water. Because HA contains large fluorescent functional groups, specifically extended π(pi)-systems containing C=C and C=O groups and aromatic rings, we were able to observe a positive correlation between HA concentration and the rate of inhibition of E. gracilis growth using fluorescence spectroscopy. Moreover, HA interferes with photosynthetic efficiency, thereby reducing the synthesis of paramylon and lipid in E. gracilis. In this way, we were able to confirm that HA is the main growth inhibitor of E. gracilis. Finally, we verify that all of the HA is efficiently removed or converted into nutrients by AC or UV/H2O2/O3 treatment, respectively. As a result of these treatments, the growth of E. gracilis is restored (AC treatment) or the biomass yield is enhanced (UV/H2O2/O3 treatment).

Conclusions: These studies have important practical and theoretical significance for the cyclic cultivation of E. gracilis and for saving water resources. Our work may also provide a useful reference for the cultivation of other microalgae.

Supplementary Information: The online version contains supplementary material available at 10.1186/s13068-021-01980-4.
Background
Unicellular eukaryotic microalgae are a diverse and ubiquitous group of organisms that are a promising biomass source and biological feedstock [1,2]. However, the use of microalgae as a feedstock is limited because a large amount of water is needed for their cultivation, and this negatively affects economic viability and environmental sustainability [3]. Wastewater treatment is currently being used to recycle the water that is necessary for microalgae cultivation [4]. It has been reported that to produce 1 kg of algae biomass, 1564 L of water are required under pond conditions [5]. Reusing cultivation water can reduce water usage and nutrient requirements as well as the need for algal wastewater treatment [3]. Consequently, water reuse after algae harvesting is essential for the economic viability of the microalgae industry and for environmental sustainability.
The effects of water reuse on algae growth differ across algal taxa [6]. The most researched taxa are green algae, diatoms, cyanobacteria, haptophytes, eustigmatophytes, chrysophytes, and xanthophytes [3]. For the genus Euglena, however, and E. gracilis in particular, there have been no reports of cultivation with reused water. E. gracilis is a unicellular flagellated alga characterized by the absence of a cell wall. It produces a wide variety of bioactive compounds such as paramylon, carotenoids, tocopherols, euglenophycin, and lipids, and it has tremendous potential for metabolic engineering and commercialization [7]. Therefore, a deeper understanding of water reuse in the cultivation of the microalga E. gracilis, and of the mechanisms underlying its growth-inhibiting secretions, is urgently needed.
After microalgae assimilate nutrient ions, unabsorbed counter-ions such as Cl−, Na+, and K+ from NH4Cl, NaHCO3, and KH2PO4, respectively, can accumulate in the cultivation water. When these ions accumulate, the osmotic pressure of the cultivation water increases, thereby inhibiting microalgal growth [8,9]. Therefore, finding a suitable medium that can balance the osmotic pressure between the microalgae and the cultivation water is important for improving the effectiveness of water reuse.
In addition to accumulated ions, excreted metabolites, such as dissolved organic matter (DOM) from microalgae, are considered to be the main cause of reduced biomass growth [3,6,10]. For Scenedesmus sp. LX1, DOM concentrations between 6.4 and 25.8 mg/L in reused water decreased the maximum algal cell density by 50-80% and the maximum growth rate by 35-70% [11]. The DOM in that study was classified into two fractions, hydrophobic and hydrophilic, each of which was further classified as acids, neutrals, or bases, for a total of six fractions; all six fractions inhibited algal growth. Moreover, Lu et al. [10], who also used this fractionation approach, reported that the DOM of Scenedesmus acuminatus in reused water included palmitic acid and octadecanoic acid, both of which inhibited the growth of this algal species. Although many studies have attempted to characterize the growth inhibitors present in DOM, it is unclear which major metabolic pathways within microalgal cells regulate and secrete these inhibitory substances into the cultivation water.
Recently, many researchers have tried to apply traditional wastewater-treatment methods to microalgae cultivation water. Zhang et al. [12] reported that removing DOM with activated carbon (AC) from the reused water of cultivated Nannochloropsis oceanica moderately reduced growth inhibition and lipid accumulation. Moreover, AC treatment significantly increased the final dry weight of S. acuminatus to 2.33 ± 0.04 g/L, almost the same as the dry weight obtained after growth in fresh media [13]. Advanced oxidation processes (AOPs) are another type of treatment technology, in which organic pollutants are destroyed by powerful oxidizing agents [14,15]. O3 and UV/H2O2 have been successfully applied in the treatment of reused water for the cultivation of Scenedesmus sp. LX1 [16] and S. acuminatus GT-2 [17], respectively. To date, few studies have investigated the removal of DOM by AC or AOPs from the reused water of cultivated E. gracilis.
In this study, the main objectives were: (1) to identify the effect of ultrafiltration membrane (UFM) treatment on the biomass and biochemical components of cultivated E. gracilis; (2) to identify the characteristics of the growth inhibitors in reused water and uncover the mechanism whereby potential inhibitors are secreted by E. gracilis via metabolomics analysis; and (3) to evaluate the effect of AC and AOPs on the removal of growth inhibitors.
Effects of reused water on the growth of E. gracilis
Since the UFM cuts off substances with a molecular weight ≥ 50 kDa, viruses, bacteria, macromolecular proteins, polysaccharides, and other such substances are filtered out [10,12], and only unconsumed ions and DOM remain in the reused water. This study found that the DW of algal cells gradually decreased in the reused water over successive cycles. The DW of algae decreased with each cycle of cultivation, by 13.1% (UFM-R1, p < 0.05), 28.6% (UFM-R2, p < 0.05), and 79.2% (UFM-R3, p < 0.01) compared to the control group on the last day of cultivation (Fig. 1). By the third cycle of cultivation, the growth of the algal cells had been severely inhibited. This suggests that the accumulation of growth inhibitors over successive cycles of water reuse reduces algal growth beyond a tolerable range. This phenomenon is similar to the growth inhibition observed for other microalgae such as S. acuminatus [10], Chlorella sp. SDEC-18 [18], and N. oceanica [12]. We postulate that accumulated ions and the DOM secreted by the algal cells into the reused water are the main factors inhibiting the growth of E. gracilis.
The effect of accumulated ions on the growth of E. gracilis
Microalgae can selectively absorb some types of ions from inorganic nutrients and assimilate them into their own organic matter, but some ions, such as Cl− and Na+, cannot be absorbed. These residual ions, which accumulate in reused water, can disrupt the osmotic balance of the microalgae, thereby inhibiting their growth [8,9]. This study found that when algal cells were cultivated in more than eightfold the concentration of PEM medium (NH4Cl as the nitrogen source) compared to the control, the relative cell density was lower, algal cytochromes were almost absent, and the algal cells were elongated (Fig. 2a, b). When the concentration of the culture medium was within fivefold of the control medium, the DW of the algal cells was about 2.7 g/L, not significantly different from the control group (p > 0.05). However, when the concentration of the medium was increased to above fivefold that of the control medium, the DW gradually decreased; at eight-, nine-, and tenfold, the DW of the cells decreased by 81.3% (p < 0.01), 85.0% (p < 0.01), and 92.5% (p < 0.01), respectively (Fig. 2c). In addition, the Cl− concentration increased as the medium concentration increased, reaching a maximum of 11,996.4 mg/L in the tenfold medium (Fig. 2d), suggesting that accumulated Cl− in the medium may be a key growth inhibitor of E. gracilis.

Fig. 1 Effects of reused water on the growth of E. gracilis. UFM-R0, -R1, -R2, and -R3 represent the number of times water is reused and treated with UFM; the letters F, S, and T combined with 0, 2, 4, 6, 8, and 10 represent the number of days of algae cultivation under the first (F), second (S), and third (T) reused-water conditions, respectively. Asterisk represents p < 0.05; double asterisk represents p < 0.01. The values are represented by mean ± SD, where n = 3
To test the above hypothesis, we used urea instead of NH4Cl as the nitrogen source (at equal nitrogen content) and found that as the concentration of urea increased, the DW of the algal cells increased significantly. When the concentration of the culture medium was four- to tenfold, the biomass of the algal cells was stably maintained at about 2.7 g/L, with no significant difference among these conditions (p > 0.05) (Fig. 2c). In addition, when urea was used as the nitrogen source the cells appeared fuller, and with increasing medium concentration the relative cell density and the relative chlorophyll content both gradually increased (Fig. 2e, f). At the same time, the salinity and osmolality of the medium were much lower than those of the medium using NH4Cl as the nitrogen source. For example, at tenfold medium concentration the salinities were 29.4 psu versus 9.1 psu for NH4Cl versus urea, respectively, and the osmolalities were 727.0 mosm versus 167.0 mosm. The salinity for NH4Cl was thus 3.2-fold greater than that for urea (Fig. 2g, p < 0.01), and the osmolality 4.3-fold greater (Fig. 2h, p < 0.01). These results show that the growth of the algal cells was not inhibited within a fivefold increase in the salinity (< 15.6 psu) and osmolality (< 361.1 mosm) of the medium, and we can confirm that during the UFM-R3 culture cycle the growth of E. gracilis was not hindered by the ions accumulated in the reused water. This phenomenon has also been confirmed for the cultivation of S. acuminatus in reused water [10,19]. Our work also showed that E. gracilis tolerates ions within a certain range; if this tolerance range is exceeded, its growth is inhibited. In addition, we determined that the traditional PEM medium with NH4Cl as the nitrogen source is not suitable for the continuous recycling of cultivation water or for batch-fed cultivation of E. gracilis (such as heterotrophic batch-fed fermentation). Urea serves as an ideal nitrogen source because it reduces the salinity and osmolality of the culture medium.
Identification of growth inhibitors in E. gracilis secretions
Since the growth of the microalgae is not affected by the osmotic pressures reached in the reused water, growth inhibitors may instead reside in the DOM secreted by the microalgae. However, some DOM can promote the growth of microalgae while some has an inhibitory effect [3], so further study of the characteristics of this DOM is required. This study also found that E. gracilis continuously secreted DOM during the culture process. By the time UFM-R3 was reached, the DOM concentration was 189.21 mg/L, while the control group contained only 54.92 mg/L DOM, a 3.4-fold difference (p < 0.01) (Fig. 3a). This indicates that, at elevated concentrations, DOM may have an inhibitory effect on the growth of E. gracilis.
3D-FEEM fluorescence spectroscopy is fast and has excellent selectivity and sensitivity for fluorescent substances [20]; therefore, this technique was used to identify the types of DOM secreted by the algal cells. Chen et al. [21] used 3D-FEEM fluorescence spectroscopy to identify the following substances in the DOM present in cultivation water: aromatic proteins (AP), fulvic acid-like substances (FA), soluble microbial by-product-like material (SMBM), and HA. We used those assignments to determine which types of DOM were present in our cultivation water samples (they are labeled with Roman numerals in the spectrum in Fig. 3b; see its caption). As seen in Fig. 3b, the abundance of organic compounds with fluorescent signals in the reused water, ranked from high to low, was: HA, SMBM, FA, and AP. Our spectra thus showed that HA was potentially the main type of DOM present in E. gracilis secretions.
In order to further identify the growth inhibitors, we divided the DOM into six major fractions using fractional separation (Fig. 3c). The percentages, from high to low, were: HiN (32%), HoA (27%), HoN (25%), HiB (7%), HoB (6%), and HiA (3%). From this result, we know that the DOM is mainly composed of HiN, HoA, and HoN, suggesting that HA, a potential inhibitor of E. gracilis, is composed of these organic fractions. The UV absorbance of organic matter at 254 nm (UV254, given in AU/cm) reflects the content of the organic functional groups that contribute to fluorescence, such as C=C bonds, C=O bonds, and aromatic rings; the importance of ultraviolet spectra for detecting pollutants in the water-treatment process was described by Altmann et al. [22]. In this study, we measured the UV254 of 80-fold-concentrated DOM and found that the absorbance from high to low was: HoN (0.58 AU/cm), HoA (0.53 AU/cm), HiA (0.50 AU/cm), HiB (0.44 AU/cm), HoB (0.30 AU/cm), and HiN (0.24 AU/cm) (Fig. 3d). The corresponding inhibition rates of growth (IG%) of E. gracilis for these fractions were 28.8%, 25.0%, 24.6%, 19.2%, 12.1%, and 5.0% (Fig. 3e), respectively. This suggests that all of these DOM fractions can inhibit the growth of E. gracilis, especially HoN, HoA, and HiA. Moreover, the UV254 absorbance is linearly related to the IG% for E. gracilis, as shown in Fig. 3f, for which R2 = 0.9: the degree of inhibition was positively correlated with the content of fluorescent functional groups in the DOM. Based on these results, we confirmed that all DOM fractions with C=O bonds, C=C bonds, and aromatic rings have an inhibitory effect on the growth of E. gracilis; in other words, the degree of inhibition depended mainly on the concentrations of the different fractions.
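The linear relation in Fig. 3f can be checked directly from the six pairs of values quoted above. The sketch below is our own least-squares fit of those numbers, not the authors' Origin analysis:

```python
import numpy as np

# UV254 absorbance (AU/cm) and growth inhibition (IG%) for the six DOM
# fractions, in the order HoN, HoA, HiA, HiB, HoB, HiN (values from the text).
uv254 = np.array([0.58, 0.53, 0.50, 0.44, 0.30, 0.24])
ig = np.array([28.8, 25.0, 24.6, 19.2, 12.1, 5.0])

slope, intercept = np.polyfit(uv254, ig, 1)   # least-squares line
r2 = np.corrcoef(uv254, ig)[0, 1] ** 2        # coefficient of determination
print(f"IG% ~ {slope:.1f} * UV254 + {intercept:.1f},  R^2 = {r2:.2f}")
# A strong positive linear relation, consistent with the R^2 = 0.9 of Fig. 3f.
```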
According to the above results, the DOM mainly comprises HA, which is composed mainly of three fractions: HiN, HoA, and HoN (Fig. 3b). However, when the UV254 signal was used to characterize these organics, not only HoA and HoN, with their relatively high signal intensities, but also HiA, HiB, and HoB showed an inhibitory effect on the growth of E. gracilis, suggesting that inhibitors other than HA may also be present in the recycled culture media. These growth-inhibitory factors may derive from SMBM, FA, and AP (Fig. 3b). Such hydrophilic/hydrophobic fractions also inhibit the growth of other microalgae; for example, FA has been proven to inhibit Scenedesmus species [13], indicating that this fraction, as well as HA with its highly fluorescent signals, may be a potential inhibitor. However, both the concentrations and the UV254 signal intensities of HiA, HiB, and HoB were lower than those of the HA-derived HoA and HoN. In addition, although the concentration of the HA-derived HiN was relatively high, it had a clearly weaker inhibitory effect on the growth of E. gracilis. Therefore, this study finally confirmed that the main growth inhibitor of E. gracilis is HA, with the hydrophobic HoA and HoN fractions, which have higher content and higher UV254 signal intensity, playing the key inhibitory role. Lu et al. [10] fractionated only the fatty-acid-containing HoN and showed that it inhibits the growth of S. acuminatus. In addition, Zhang et al. [11] showed that all of the fractions could inhibit the growth of Scenedesmus sp. LX1, especially HiB, HoB, and HiA, whereas in our study HoN and HoA showed the strongest inhibition of E. gracilis. This suggests that different microalgae may have different tolerances to the different DOM fractions, a scientific question that requires further research.
The influence of E. gracilis secretions on its physiology and biochemistry
The Fv/Fm ratio reflects the ability of microalgae to dissipate, absorb, and transmit light energy during photosynthesis. It is a useful indicator of physiological state and growth rate, and also serves as an internal probe of the relationship between microalgae and their environment [13,23,24]. The Fv/Fm ratio of the algal cells was only 0.12 in water containing HA, while that of the control group was 0.64, a 5.3-fold difference (Fig. 4a, p < 0.01). It is therefore obvious that HA significantly reduces the algal cells' photosynthetic efficiency. Similarly, studies on S. acuminatus [13] and Arthrospira platensis [23] showed comparable Fv/Fm reductions when these species were cultivated in reused water, which means that HA has a negative effect on the photosynthetic systems of these microalgae as well; this impact therefore has a certain universality.
The paramylon and TFA contents of E. gracilis in the experimental group containing DOM were 7.1% and 12.2%, respectively, while those of the control group were 21.2% and 35.2%. Both values were thus significantly lower than those of the control group, representing decreases of 66.5% (Fig. 4b, p < 0.01) and 65.3% (Fig. 4c, p < 0.01), respectively. Similar results have been reported for the TFA of Scenedesmus sp. LX1 [11]. These results indicate that the HA secreted by E. gracilis may interfere with its own photosynthesis, leading to inhibited synthesis of organic matter in the algal cells. The mechanism behind this process is worthy of in-depth study in the future.
Study on the mechanism of E. gracilis growth inhibition by its own secretions
When UHPLC-QTOF-MS was used to detect metabolites in E. gracilis cells and in the cultivation media, the range of metabolites detected in negative ion mode was greater than that in positive ion mode, so this study analyzes only the metabolites observed in negative ion mode to describe the mechanism whereby the algal cells secrete DOM. With this analysis, we observed 4130 metabolites (Additional file 2). These metabolites were analyzed by PCA and OPLS-DA, and a clear separation between intracellular (IEG) and extracellular (EEG) metabolites can be seen (Additional file 1: Fig. S2, S3), indicating significant differences in the metabolites of these two groups. In the OPLS-DA permutation test, the categorical variable Y was randomly permuted 1000 times (Additional file 1: Fig. S4), and the original model's R2Y was equal to 1, indicating that the established model fits the sample data well. The original model had a Q2 value of 0.997, which is very close to 1, meaning that a new sample added to the model would fall within the existing distribution of data points. Overall, the original model is robust, explains the difference between the two sets of samples well, and shows no signs of overfitting.
This study used VIP > 1 and a P-value < 0.05 to screen metabolites, and 108 differential metabolites were obtained (see Additional file 2). According to the heat-map cluster analysis, the relative concentrations of 69 and 39 metabolites were up-regulated in the EEG and IEG, respectively (Additional file 1: Fig. S5). After these metabolites were annotated with the KEGG database, important metabolic pathways were screened according to their position and role in the relevant metabolic pathways (Additional file 2). According to the bubble chart, nine main metabolic pathways are relevant: valine, leucine, and isoleucine biosynthesis; linoleic acid metabolism; arginine biosynthesis; the TCA cycle; pyruvate metabolism; purine metabolism; tyrosine metabolism; pyrimidine metabolism; and phenylalanine metabolism (Fig. 5). Among these, the first two are the key metabolic pathways. Some of the metabolites in these pathways were highly expressed inside the cell and some outside the cell, and the latter group may be secreted from the cell into the cultivation water. Three pathways (linoleic acid metabolism; the TCA cycle; and valine, leucine, and isoleucine biosynthesis) involve C=O and C=C bonds, while purine and pyrimidine metabolism contribute aromatic rings and C=O bonds. These metabolites accumulate in the medium and gradually become HA, which contains various functional groups (Fig. 6).
Palmitic acid was one of the metabolites secreted by S. acuminatus that inhibited its growth [10]. Similarly, this study found that palmitic acid from linoleic acid metabolism was present at a higher concentration in the culture medium, further confirming that palmitic acid is also one of the key factors inhibiting the growth of E. gracilis. 2-Isopropylmalate is an intermediate product of valine, leucine, and isoleucine biosynthesis. Excessive secretion of this intermediate from the algal cells into the cultivation medium may also inhibit the growth of E. gracilis; however, exactly how this intermediate metabolite inhibits the growth of E. gracilis is a question that requires in-depth research in the future.

Fig. 5 The top nine KEGG pathways of groups EEG and IEG are presented in the bubble chart. Each bubble in the bubble chart represents a metabolic pathway. The X-axis position and the bubble size indicate the influence factor of the pathway in the topology analysis: the larger the size, the greater the influence factor. The Y-axis position and the color of the bubble indicate the enrichment-analysis P value (taken as the negative logarithm, −log10(p)): the redder the color, the smaller the P value and the more significant the enrichment. IEG and EEG represent intracellular and extracellular metabolites, respectively
HA, containing multiple functional groups, can complex iron ions that are essential for photosynthesis in microalgae. However, Sun et al. [25] found that the underlying mechanism of the inhibitory effect on cyanobacteria was not a reduction in the bioavailability of iron, but rather peroxidase-mediated oxidative damage to the cells. Increasing evidence shows that HA can directly interact with certain large plants and algae through its different functional groups, thereby interfering with photosynthesis and growth. Due to their low molecular weight (< 50 kDa), these substances can easily pass through cell membranes. When quinone-containing metabolites enter the chloroplast, they interfere with the electron transport processes of photosynthesis [26,27]; indeed, the toxic effects of quinones on the growth and photosynthesis of Scenedesmus strains have been confirmed [28]. In addition, we previously found no significant difference between the experimental group and the control group under heterotrophic conditions containing HA (data not disclosed), while the Fv/Fm ratio was significantly reduced (Fig. 4a), which means that these inhibitors may primarily attack the photosynthetic system of the E. gracilis chloroplast. However, no quinone-related metabolites were found among the differential metabolites screened in this study (Additional file 1: Fig. S5), indicating that the photosynthetic machinery of E. gracilis was not affected by quinones. It is instead possible that the different functional groups (e.g., C=C and C=O bonds, aromatic rings) interfere with the electron transport processes of photosynthesis. How these functional groups in the compounds secreted by the different metabolic pathways interfere with the photosynthetic system of microalgae requires further in-depth study.

Fig. 6 The schematic diagram of the mechanism of E. gracilis secretion of growth inhibitors. Red font indicates that the relative concentration of a metabolite outside the cell is greater than that inside the cell, and blue font indicates that the relative concentration outside the cell is less than that inside the cell. Solid lines represent direct chemical reactions, and dashed lines represent indirect chemical reactions. HA represents humic acid
Removal of growth inhibitors
Markiewicz et al. [29] confirmed that DOM in sewage is adsorbed effectively by AC. The fluorescence spectrum after AC treatment showed a very weak fluorescence signal (Fig. 7a), indicating that almost all of the fluorescent HA had been removed. In addition, the growth curve for the experimental group was almost the same as that of the control group (Fig. 7d); on the last day of culture, the DWs of the algal cells were 2.4 g/L (experimental group) and 2.4 g/L (control group), with no significant difference (p > 0.05). This indicates that AC effectively and completely adsorbs and removes the substances that inhibit the growth of E. gracilis. Although AC treatment of the reused water of cultivated N. oceanica [12] and S. acuminatus [13] showed a relatively significant effect, the biomass obtained was slightly lower than in the control groups, indicating that some growth inhibitors could not be removed. In contrast, AC effectively adsorbed the growth inhibitors secreted by E. gracilis in this study. We would like to develop recyclable AC technology, such as biological AC, to increase the utilization rate and make large-scale reuse of water resources for cultivating E. gracilis more convenient.

Fig. 7 (caption fragment; cf. Fig. 3b and its caption) HA represents humic acid. Asterisk represents p < 0.05. The values represent mean ± SD, where n = 3
AOPs have been widely used in the field of wastewater treatment. Under ultraviolet catalysis, oxidizers create large numbers of free radicals, such as hydroxyl radicals; these radicals are strongly oxidizing and can oxidize organic acids with unsaturated bonds [15]. According to the 3D-FEEM spectra of the reused water after AOP treatment, the fluorescence signal of the UV/H2O2/O3 group was the weakest (Fig. 7b), followed by that of the UV/H2O2 group (Fig. 7c), indicating that the oxidation efficiency was higher with the participation of O3. In addition, the biomasses of the UV/H2O2/O3 group and the UV/H2O2 group on the last day were 2.89 g/L and 2.59 g/L, respectively, with the UV/H2O2/O3 group significantly higher than the control group (p < 0.05). These results indicate that the advanced oxidation method not only eliminates the growth inhibitors, but may also oxidize them into small organic molecules that can be absorbed by the algal cells, thereby increasing their biomass. Our results show that the growth inhibitors were mainly HAs with fluorescent functional groups (C=O and C=C bonds, aromatic rings). O3 and UV/H2O2 have been shown to work well for the treatment of Scenedesmus sp. LX1 [16] and S. acuminatus GT-2 [17] reused water, respectively. This study combined these two approaches and shows that, together, they are more effective at removing growth inhibitors than either O3 or UV/H2O2 alone. Therefore, we believe that UV/H2O2/O3 is an ideal and efficient method for the removal of inhibitors of E. gracilis.
According to our previous research, the free radicals remaining in the reused water after AOP treatment can themselves inhibit the growth of microalgae. Therefore, in the future we need to optimize the AOP treatment process by tuning the treatment time and the concentration of the oxidizing agent, and by developing indicators for online detection of the free-radical concentration in reused water (for example, the vitamin C reducing-agent neutralization method). Such optimization will make AOPs more suitable for the wide application of water reuse in algae cultivation. In addition, these treatments again make it clear that the HA secreted by E. gracilis is a main growth inhibitor.
Based on the above results, we propose a cyclic culture model for E. gracilis (Fig. 8). The conceptual model is optimal when urea replaces NH4Cl as the nitrogen source and the reused water is filtered through a UFM and then treated with UV254/H2O2/O3. This model improves the availability of reused water, reduces the cost of cultivation, and increases the biomass of the microalga E. gracilis.
Conclusion
Our study demonstrated that cultivation water used three times had a significant inhibitory effect on the growth of E. gracilis. We replaced NH4Cl with urea and observed a reduction in the osmotic pressure caused by Cl− accumulation in the reused water, indicating that urea is an ideal nitrogen source. In addition, HA was identified as a main growth inhibitor of E. gracilis, and its content was positively related to the rate of growth inhibition. Moreover, we found that HA interfered with the photosynthetic efficiency of the algae and reduced the efficiency of paramylon and lipid synthesis. Via metabolomics analysis, we determined that the key metabolic pathways for the secretion of this HA were valine, leucine, and isoleucine biosynthesis, and linoleic acid metabolism. All of the HA was efficiently removed or converted into nutrients by AC or UV/H2O2/O3 treatment, respectively; as a result, the biomass recovered to the same level as the control group (AC treatment) or E. gracilis growth was even enhanced (UV/H2O2/O3 treatment). This provides further confirmation that HA was a main growth inhibitor. An effective model for the cyclic culture of E. gracilis was thus proposed. These studies have important practical and theoretical significance for the cyclic cultivation of E. gracilis, and even of other species of microalgae, and for saving precious water resources.

Fig. 8 The cyclic culture model of E. gracilis under reused-water conditions. GI represents the growth inhibitors secreted by the microalga E. gracilis
Microalgae strain and growth conditions
Euglena gracilis CCAP 1224/5Z was obtained from the Culture Collection of Algae and Protozoa (CCAP) and maintained in our lab at Shenzhen University [30]. This strain was cultured in a modified photoautotrophic Euglena medium (PEM) according to Cramer and Myers [31]. PEM includes 1.8 g/L NH4Cl, 0.6 g/L KH2PO4, 1. The pH of the PEM medium was adjusted to 3.6 with 3 mol/L NaOH and 1 mol/L HCl. The microalgal cells were grown in 2-L glass column photobioreactors with a 10-cm internal column diameter. The photobioreactors contained 1.5 L of PEM medium that was sparged with 0.2-μm-filtered mixed gas (2% CO2, v/v; gas flow rate = 6 L/min) and illuminated with an LED lamp at a light intensity of 150 μmol photons m−2 s−1 (24:0 h light-dark cycle). The cultivation temperature was maintained at 25.0 °C.
Water reuse
When the algal cells had been cultured to day 10, the cultivation water sample (UFM-R0) was treated with ultrafiltration membrane (UFM) harvesting equipment [32] built in our laboratory (Additional file 1: Fig. S1). This equipment cuts off substances with a molecular weight ≥ 50 kDa. One equivalent volume of the PEM nutrient medium was added to the UFM-R0 water sample. After sterilization, the solution was inoculated with microalgae to a final OD750 of 0.2 for the microalgae suspension. After the first cultivation, water samples were cycled through this cultivation-and-UFM cycle three times, and these subsequent cycles were named UFM-R1, R2, and R3. Samples of the microalgae were taken every other day to monitor the cells' dry weight (DW). DW was measured according to the method described by Wu et al. [33]. Briefly, 5 mL of microalgae suspension was filtered through a preheated (105 °C, 24 h), preweighed glass microfiber filter (Whatman GF/C, 47 mm, UK). The filters were washed twice, each time with 50 mL of 0.5 mol/L NH4HCO3, and were weighed after drying at 105 °C for 24 h to constant mass. DW was calculated using Eq. (1):

DW (g/L) = (w_a − w_b)/v, (1)

where w_a and w_b are the masses of the filter at the end and start of cultivation, respectively, and v is the volume of the microalgae suspension. Finally, the reused water samples and the algal cells were kept at −80 °C for the subsequent studies.
The effect of accumulated ions on the growth of E. gracilis
Algal cells were cultured in PEM with nutrient concentrations of 10-, 9-, 8-, 7-, 6-, 5-, 4-, 3-, 2-, 1-, 0.5-, and 0.2-fold, with 1.8 g/L NH4Cl or 1.0 g/L urea (containing equal nitrogen content) as the nitrogen source. When the algal cells had been cultured to day 10, the morphology of the cells was observed with an inverted microscope (Leica DMI8, Leica Microsystems, Germany). In addition, the DW, the Cl− concentration, the salinity, the osmolality, and the pigment changes of the algal cells in the culture medium were measured. Cl− concentration was measured using a capillary ion chromatograph (ICS 5000+, Dionex, Sunnyvale, CA, USA) [10]. The salinity of the culture medium was measured using an Orion STAR A329 multiparameter meter (G10919, Thermo Fisher Scientific, USA). Osmolality was determined by measuring the freezing point of the solution with an automatic osmometer (Osmomat 030, Gonotec, Germany) according to Hadj-Romdhane et al. [9]. The osmolality was calculated from the freezing point depression, ΔT (°C), according to the following Eq. (2):

osmolality (osmol/kg) = ΔT/1.858, (2)

where 1.858 is the cryoscopic constant, which is equal to the freezing point depression of a solution with an osmolality of 1 osmol.
The identification of E. gracilis growth inhibitors and quantitative analysis of their effect on growth
The differences in dissolved organic matter (DOM) content between the control (UFM-R0) and the reused water samples (UFM-R1, R2, R3) were measured using a total organic carbon analyzer (Multi N/C 2100, Analytik Jena, Germany). DOM was determined with three-dimensional fluorescence excitation-emission matrix (3D-FEEM) spectrophotometry. Briefly, 3D-FEEM spectra were obtained using a fluorescence spectrophotometer (F-4500, Hitachi, Japan). The excitation (Ex) and emission (Em) slits were set to a bandpass of 5 nm. Ex wavelengths were scanned from 200 to 450 nm, and Em wavelengths were scanned from 220 to 550 nm. All of the 3D-FEEM spectral data were analyzed with Origin Pro 2018 software (https://www.originlab.com/origin).
The method for pretreating the water sample was based on Leenheer [34], and the fractionation of DOM from the water sample followed the optimized methods of Imai et al. [35] and Zhang et al. [36]. Briefly, the water sample was passed repeatedly (3 times) through Amberlite XAD-8 resin at a flow rate of 5 mL/min, and then 2 bed volumes (BV) of 0.1 mol/L HCl were used to back-elute the resin 3 times to obtain the hydrophobic bases (HoB). The pH of the water sample was adjusted to 2.0 using 0.1 mol/L HCl and 0.1 mol/L NaOH, and the water sample was then passed sequentially through 3 columns containing XAD-8 resin, D001 (a microporous strongly acidic ion-exchange resin), and D201 (a macroporous strongly basic ion-exchange resin). This sequence was repeated 3 times. After the water sample had passed through the resins, the remaining water sample contained only the hydrophilic neutrals (HiN). The XAD-8, D001, and D201 resins were back-eluted with 0.1 mol/L NaOH to obtain the hydrophobic acids (HoA), hydrophilic bases (HiB), and hydrophilic acids (HiA), respectively. After the XAD-8 resin was air-dried, ethanol was added and Soxhlet extraction was run for 12 h to obtain the hydrophobic neutrals (HoN). All solvents were removed by rotary evaporation (RV3, IKA, Germany). The volume of each DOM fraction was adjusted to 50 mL and transferred to centrifuge tubes. After each DOM fraction was diluted to a set concentration, the percentage of DOM in each fraction was measured using a total N/C analyzer (Multi N/C 2100, Analytik Jena, Germany). The absorbance at 254 nm (UV254) of the DOM in each fraction was measured using a UV-Vis spectrophotometer (UV2350, UNICO, China).
The fractionated organic acids were added to fresh medium according to the percentage of DOM. After the algal cells were cultured in this medium for 10 days, the DWs of the control and experimental groups were measured. The inhibition rate of growth (IG%) was calculated using the following Eq. (3):

IG% = (c − i)/c × 100%, (3)

where c is the DW of the control group and i is the DW of the algal cells under the different fractionated-DOM stress conditions. Finally, the correlation analysis of UV254 and IG% was performed using Origin Pro 2018 software.
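For convenience, Eqs. (1)-(3) as reconstructed above can be wrapped in a few lines of code. This sketch is our own illustration; the numerical inputs are hypothetical examples, not measurements from this study:

```python
CRYOSCOPIC_CONSTANT = 1.858   # freezing-point depression (deg C) per osmol

def dry_weight(w_a, w_b, v):
    """Eq. (1): DW (g/L) from final/initial filter mass (g) and volume (L)."""
    return (w_a - w_b) / v

def osmolality(delta_T):
    """Eq. (2): osmolality (osmol/kg) from freezing-point depression (deg C)."""
    return delta_T / CRYOSCOPIC_CONSTANT

def inhibition_rate(dw_control, dw_treated):
    """Eq. (3): IG% of a treated group relative to the control."""
    return (dw_control - dw_treated) / dw_control * 100

print(dry_weight(0.0185, 0.0050, 0.005))   # 2.7 g/L for a 5-mL sample
print(osmolality(0.671))                   # ~0.361 osmol/kg, i.e. ~361 mosm
print(inhibition_rate(2.4, 1.7))           # ~29.2 % inhibition
```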
The effect of growth inhibitors on the Fv/Fm ratio, paramylon and total fatty acid content
After the fractionated DOM was diluted according to the DOM content of the UFM-R3 reused water, as described above, fresh medium was added, and the algal cells were then cultivated under the aforementioned culture conditions. The Fv/Fm ratio and the paramylon and total fatty acid (TFA) contents of the algal cells were measured after the 10th day of cultivation. The Fv/Fm ratio was determined by dividing the variable fluorescence (Fv) by the maximum fluorescence (Fm), according to the method of Sha et al. [13]. The algal cells were placed in a quartz cuvette and kept in the dark for 3 min prior to the measurement of Fv/Fm. The Fv/Fm ratio of the algal cells was then measured at room temperature using a PHYTO-ED fluorimeter (Walz, Effeltrich, Germany).
Paramylon content was quantified using the methods of Takenaka et al. [37] and Wu et al. [30] with the following modification: 2 mL of 30 mmol/L EDTA chelating agent was added to 15 mL of the algal cell suspension. After each cell suspension was centrifuged and freeze-dried, 10 mg of freeze-dried algae powder and 5 mL of acetone were transferred to a 15-mL centrifuge tube, shaken for 30 s, and then placed in a shaker for 2 h. After the tube was centrifuged at 2000×g for 5 min, the supernatant was removed. Then 1.5 mL of 1% sodium dodecyl sulfate (SDS) solution was added to the tube, and the contents were transferred into a 1.5-mL centrifuge tube and heated in a water bath at 85 °C for 2 h. Again, the supernatant was removed after the tube was centrifuged at 2000×g for 5 min. The precipitate was washed and centrifuged in 1 mL of deionized water, then oven-dried at 70 °C to constant mass; the resulting precipitate was paramylon. The paramylon content was calculated as shown in Eq. (4):

Paramylon content (%) = P/DW × 100%, (4)

where P and DW are the dry weights of the paramylon and of the algae powder, respectively.
TFA content was determined using the method of Wu et al. [38]. Briefly, about 10 mg of lyophilized cell pellets was disrupted by grinding three times under liquid nitrogen in the presence of methanol, chloroform, and formic acid (20:10:1, v:v:v) to extract the lipids from the algal biomass. The TFA content of the extracts was measured using an Agilent 7890B gas chromatograph coupled with a 5977A mass spectrometer (GC-MS).
The metabolic pathways of growth inhibitors secreted by E. gracilis were determined using metabolomics analysis

E. gracilis cells and cultivation water from UFM-R0 were collected and a metabolomics analysis was performed. The metabolites in the samples were extracted and analyzed according to the method of Wu et al. [38]. All of the metabolites were detected using ultra-high performance liquid chromatography coupled with quadrupole time-of-flight mass spectrometry (UHPLC-QTOF-MS). In this study, metabolite peaks were retained after relative-standard-deviation noise reduction, and missing values were then filled in with half of the minimum value. An internal standard normalization method was also employed in the data analysis. The final dataset, containing the peak number, sample name, and normalized peak area, was imported into the SIMCA 16.0.2 software package (Sartorius Stedim Data Analytics AB, Umea, Sweden) for multivariate analysis. The data were scaled and logarithmically transformed to minimize the impact of both noise and the high variance of the variables. After these transformations, principal component analysis (PCA), which reduces the dimensionality of the data, was carried out to visualize the distribution and grouping of the samples. A 95% confidence interval in the PCA score plot was used as the threshold to identify potential outliers in the dataset. In order to visualize group separation and find significantly changed metabolites, orthogonal projections to latent structures discriminant analysis (OPLS-DA) was applied. A sevenfold cross-validation was then performed to calculate the values of R2 and Q2, where R2 indicates how well the data variance fits the model and Q2 indicates how well a variable can be predicted. To check the robustness and predictive ability of the OPLS-DA model, 1000 permutations were further conducted, from which the R2 and Q2 intercept values were obtained; the intercept value of Q2 represents the robustness and reliability of the model and the risk of overfitting (for the latter, smaller values are better). Furthermore, the variable importance in the projection (VIP) of the first principal component of the OPLS-DA analysis was obtained; it summarizes the contribution of each variable to the model. Metabolites with VIP > 1 and p < 0.05 (Student's t-test) were considered significantly changed. In addition, commercial databases including the KEGG database (http://www.genome.jp/kegg/) and MetaboAnalyst (http://www.metaboanalyst.ca/) were used for metabolic pathway enrichment analysis. From these analyses, bubble diagrams and metabolic pathways were produced.
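The final screening rule (VIP > 1 and Student's t-test p < 0.05 between the EEG and IEG groups) is straightforward to express in code. The sketch below is our own illustration with hypothetical column names and toy data, not the SIMCA workflow used here:

```python
import numpy as np
import pandas as pd
from scipy import stats

def screen(df, vip, eeg_cols, ieg_cols):
    """Keep rows (metabolites) with VIP > 1 and t-test p < 0.05."""
    pvals = df.apply(lambda row: stats.ttest_ind(row[eeg_cols],
                                                 row[ieg_cols]).pvalue, axis=1)
    return df[(vip > 1) & (pvals < 0.05)]

# toy table: 5 metabolites x 3 replicates per group (lognormal peak areas)
rng = np.random.default_rng(0)
cols = ["EEG1", "EEG2", "EEG3", "IEG1", "IEG2", "IEG3"]
df = pd.DataFrame(rng.lognormal(size=(5, 6)), columns=cols)
vip = pd.Series([1.8, 0.4, 1.2, 2.5, 0.9])   # VIP scores from an OPLS-DA fit
print(screen(df, vip, cols[:3], cols[3:]))
```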
Removal of growth inhibitors
The advanced oxidation methods that we used (UV/H2O2/O3 and UV/H2O2) were similar to those of Hu et al. [16] and Wang et al. [17]. Briefly, O3 produced by an ozone generator at a flow rate of 3000 mg/h was fed into a solution containing 189.2 mg/L DOM (found in reused water UFM-R3) and 1% H2O2 for 2 h under UV254 ultraviolet lamp irradiation; this was the UV/H2O2/O3 experimental group. In the other set of experiments, O3 was not used, but all other conditions were the same; this was the UV/H2O2 experimental group. After all the treated water samples were freeze-dried, fresh medium was added to each. The AC filtration method was similar to that of Sha et al. [13]. Briefly, a water sample containing the same UFM-R3 DOM concentration was filtered repeatedly through a chromatography column containing saturated AC (K04, Hainan Xingguang Active Carbon Co., Ltd., China) for 2 h. Then 0.45-µm membrane filters were used to recover the water sample. DOM was characterized using 3D-FEEM spectroscopy. Finally, the microalgae cells were cultured under the experimental conditions, and the DW of the algae cells was measured every other day.
Statistical analysis
All DW, Cl− concentration, salinity, osmolality, DOM, UV254, IG%, Fv/Fm ratio, paramylon content, and TFA content tests were performed in triplicate, and the averages and standard deviations were reported. All data were statistically analyzed by Student's t-test to investigate differences between the control and experimental groups. p-values of less than 0.01 (p < 0.01) were considered significantly different, p < 0.05 values were considered statistically different, and p > 0.05 values were considered not statistically different (NS) compared to the control groups.
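The significance classification just described can be reproduced with a short SciPy sketch. The function and the triplicate measurements below are hypothetical illustrations, not data from this study.

```python
import numpy as np
from scipy import stats

def compare_to_control(control, treated):
    """Two-sample Student's t-test with the thresholds used in this
    work: p < 0.01 significantly different, p < 0.05 statistically
    different, otherwise not statistically different (NS)."""
    t_stat, p = stats.ttest_ind(control, treated)
    if p < 0.01:
        label = "significantly different (p < 0.01)"
    elif p < 0.05:
        label = "statistically different (p < 0.05)"
    else:
        label = "not statistically different (NS)"
    return p, label

# Hypothetical triplicate dry-weight measurements (g/L).
control = np.array([1.02, 0.98, 1.01])
treated = np.array([0.81, 0.84, 0.79])
print(compare_to_control(control, treated))
```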
"year": 2021,
"sha1": "75aef0e67a20c77335115f258b84c379b1b10074",
"oa_license": "CCBY",
"oa_url": "https://biotechnologyforbiofuels.biomedcentral.com/track/pdf/10.1186/s13068-021-01980-4",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "745aa617b8f40cb6f79bf41a27c28cdee9ae2d40",
"s2fieldsofstudy": [
"Biology",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Hepatic segmental arterial mediolysis: A case report and brief literature review
Key Clinical Message: When evaluating patients with abdominal pain, it is important to consider SAM in the differential diagnosis, along with vasculitis, fibromuscular dysplasia (FMD), atherosclerosis, mycotic aneurysms, and cystic medial degeneration. Abstract: Segmental arterial mediolysis (SAM) is a rare arteriopathy that is an under-recognized and commonly missed diagnosis in patients with abdominal pain. We report the case of a 58-year-old female who presented with abdominal pain and was misdiagnosed with a urinary tract infection. The diagnosis was made with CTA and managed with embolization. Despite appropriate intervention and close hospital monitoring, further complications were inevitable. We conclude that although the literature has shown a better prognosis and even complete resolution after medical and/or surgical intervention, close follow-up and monitoring are needed to avoid unexpected complications.
INTRODUCTION
Segmental arterial mediolysis (SAM) is a nonatherosclerotic, non-inflammatory vasculopathy of unknown etiology characterized by vacuolization and lysis of the outer medial layer of the artery, leading to stricture, aneurysm, occlusion, and dissection. [1][2][3] It usually involves medium- and large-sized abdominal splanchnic vessels, such as the celiac, mesenteric, and/or renal arteries, with occasional carotid, cerebral, and coronary artery involvement. 2 The usual presentation is a middle-aged or elderly patient with abdominal and/or flank pain, but SAM can also present with catastrophic hypovolemia or hemorrhagic shock in severe forms. 4 Although histology is still considered the standard method for confirming a diagnosis of SAM, the growing quality of non-invasive imaging techniques such as CT and MR angiograms has led to an increase in the use of these imaging modalities over tissue biopsy. 2,3 This shift is due to current discrepancies regarding anatomic involvement and the use of inflammatory markers, autoimmune serologies, and genetic testing necessary to diagnose SAM. 2,3,5 Moreover, there is significant overlap between SAM and similar arteriopathies such as aneurysms, dissections, and fibromuscular dysplasia. 2,5 Therefore, it is necessary to standardize the diagnostic criteria for SAM and its mimics. Though there have been reports of complete resolution of the vascular lesions of SAM or long-term disease-free survival following embolization, bypass, or resection of affected areas, cases complicated by abdominal hemorrhage had a mortality of 50%, 6 which has been reduced to 25% with the introduction of endovascular interventions. 3 Given its high mortality and emergent presentation, a high degree of suspicion is required to avoid delays in treatment. Here we present a case of a 58-year-old woman with a delayed diagnosis of SAM that led to rapid deterioration of her clinical status despite aggressive measures. This case highlights how SAM can be repeatedly misdiagnosed and how a patient can deteriorate quickly, emphasizing the importance of clinical judgment.
CASE PRESENTATION
A 58-year-old female with a past medical history of protein C and S deficiency, deep vein thrombosis (on warfarin), systemic lupus erythematosus, and hypothyroidism presented to an urgent care with right flank and upper quadrant abdominal pain of 3 days' duration. Urinalysis revealed blood and leukocyte esterase, and the patient was discharged on cephalexin for a suspected urinary tract infection. Her abdominal pain did not improve, and she started having nausea, prompting her to visit the emergency department. In the ED, computed tomography (CT) of her abdomen showed findings consistent with a recently passed calculus, and the patient was again discharged home with a plan for outpatient follow-up.
She returned after 12 h as her abdominal pain worsened. Her vital signs at presentation were blood pressure 154/80 mm Hg, heart rate 85 beats per min, respiratory rate 17 per min, and oxygen saturation 100% on room air. Physical examination revealed diffuse abdominal tenderness along with right costovertebral angle tenderness. Her initial laboratory test results showed white blood cell count 9.1 × 10⁹/L, hemoglobin 13 g/dL, platelets 257 × 10⁹/L, and INR 4.4. The basic metabolic panel was unremarkable. The liver function panel showed aspartate aminotransferase 658 U/L (compared to 104 U/L 12 h prior), alanine aminotransferase 772 U/L (compared to 82 U/L 12 h prior), bilirubin 0.5 mg/dL, alkaline phosphatase 70 U/L, and albumin 4.6 g/dL. An extensive workup, including infectious and rheumatologic testing to identify the etiology of the acute elevation in transaminases, was grossly unremarkable (Table 1).
Axial CT angiography of the abdomen/pelvis with contrast showed a new right hepatic lobe infarct measuring 6.5 × 6.5 cm with a hypervascular lesion suspicious for a pseudoaneurysm of the right hepatic artery, and occlusion of the anterior right portal vein at the site of the aneurysm resulting in anterior right hepatic lobe infarction (Figure 1). MRI of her abdomen showed wedge-shaped hepatic ischemia of portions of segment 5/8, likely secondary to a 3 cm intrahepatic pseudoaneurysm with compression of the anterior branch of the right portal vein. The clinical history and angiographic findings were strongly suggestive of a diagnosis of SAM. The patient subsequently underwent embolization of the hepatic pseudoaneurysm with Gelfoam and coils (Figure 2) and started to show symptomatic improvement. On day 5 of hospitalization, the patient suddenly developed one episode of syncope and diffuse abdominal pain. Laboratory results were remarkable for a marked hemoglobin drop to 5.2 g/dL (compared to 9.6 g/dL a day prior), and the patient accordingly underwent transfusion. CT angiography of the abdomen/pelvis was repeated, which showed a large perihepatic hematoma (Figure 3) with an enhancing structure in the right hepatic lobe near the site of prior embolization, concerning for re-bleeding of the pseudoaneurysm and extravasation, along with a new moderate-volume hemoperitoneum. Subsequently, the patient underwent repeat embolization of the pseudoaneurysm, common hepatic artery, right hepatic artery, medial branch of the left hepatic artery, and gastroduodenal artery (Figure 4). She was monitored closely in the surgical intensive care unit, with serial monitoring of hemoglobin and liver enzymes. The patient reported persistent abdominal pain and subsequently developed frequent loose stools and fever with a maximum temperature of 101.4 °F. Clostridium difficile testing was negative, and a repeat CT of the abdomen showed gallbladder wall rupture with a large perihepatic biloma (Figure 5) extending to the right paracolic gutter, along with a perihepatic hematoma extending to the right paracolic gutter, infarction of the right hepatic lobe, and a small splenic infarction. The patient accordingly underwent exploratory laparotomy and cholecystectomy, along with drainage of the intra-abdominal hematoma. She continued to have fever spikes despite being on intravenous broad-spectrum antibiotics (vancomycin and piperacillin-tazobactam). Four days later, the patient suddenly became hypoxic and went into cardiac arrest. Bedside emergent decompressive laparotomy was performed for suspected abdominal compartment syndrome. Bowel loops were found to be distended, though perfusion was maintained, and no active bleeding or intra-abdominal hematoma was noted. Unfortunately, 3 days later the patient went into cardiac arrest again and subsequently passed away.
DISCUSSION

SAM was initially described by Slavin et al. in 1976 as "segmental mediolytic arteritis". 1 Since then, we have learned that it is a non-inflammatory, non-atherosclerotic vasculopathy that affects medium-sized vessels, most commonly branches of the aorta. 7 Multiple retrospective reviews, systematic reviews, and meta-analyses have been conducted to establish the epidemiology, prognosis, diagnostic criteria, and best treatment options for patients. 2,3,[8][9][10][11] However, because the disease is mostly asymptomatic, as well as rare, relatively few reports exist, with no large-scale studies evaluating treatment outcomes. The pathophysiology of SAM has been the topic of extensive study and is mainly guided by histologic findings. 5,7,12 It is theorized that development of vacuoles in the smooth muscle of the outer media of arterial walls (mediolysis) is the first step, followed by granulation tissue formation and fibrosis that lead to the eventual disappearance of the vacuole if rupture does not occur between these two steps. 5,7 Slavin et al., more specifically, described SAM as a vasospastic arteriopathy that is triggered when norepinephrine binds to alpha-1 adrenoceptors on the vessel walls. As the arteriopathy is repaired, it is converted into various standardized arterial diseases that change the clinical presentation of SAM from a hemorrhagic disorder to an ischemic one. 12 Moreover, one of the histologic hallmarks of SAM is the absence of inflammatory cells. 5,7 Most studies concur that SAM most commonly affects men aged 50-60. 3,[8][9][10][11] However, the disease has been reported in all age groups, including neonates. 3,5 No comorbidities have been definitively associated with SAM, 3 but most studies reported hypertension in a significant number of patients with the disease. 2,3 However, it is possible that this finding reflects the incidence of hypertension in this population. 3 Other reported associations include hyperlipidemia and tobacco use. 2 The natural course of unruptured aneurysms in over two thirds of patients with SAM is stabilization of the aneurysm during follow-up. 10 The rest of the aneurysms will either reduce in size or disappear during follow-up. 10 A small percentage of patients (4% of patients with unruptured aneurysms) will develop new aneurysms, according to Shimohira et al. 10 Approximately 25% of patients present with aneurysm rupture 10 and will need immediate surgical intervention. Death in SAM usually occurs within 30 days of presentation. 3 Most published studies agree that the most common presenting symptom is abdominal pain, in over two thirds of cases. [7][8][9]11 Other, less common presenting complaints include flank pain and back pain, as well as no symptoms. 2,9,11 In cases of aneurysmal rupture, hypotension has also been described at presentation. 9 The most affected arteries are branches of the aorta, with different studies reporting different percentages. The splenic artery, celiac trunk, SMA, hepatic artery, and renal arteries are affected in most studies at different rates. 2,3,[8][9][10][11] However, it is worth mentioning that, more frequently than not, multiple different arteries are involved, or multiple sites in the same artery. 2,8,11 In our case, the hepatic artery and its branches were involved, with no evidence of other arterial involvement.
Diagnosing SAM always poses a clinical challenge. 5,7 The gold standard for diagnosis is arterial biopsy, which shows the presence of vacuoles in the arterial media 5,7 with a lack of inflammation. However, in most cases biopsy is not practical or even possible, due to the acuity of presentation or the location of the lesion. Diagnostic criteria (clinical, imaging, serologic) have therefore been developed that are used widely in the literature for the diagnosis of the disease, 2,7,9 even though they are not universally agreed upon and have been developed based on expert opinion. The clinical criterion is to rule out other causes of a similar presentation, including connective tissue diseases (Marfan syndrome), atherosclerosis, FMD, and other vasculitides. 7,9 In terms of imaging criteria, SAM usually presents with characteristic imaging findings, according to Slavin et al, which include dissection, fusiform aneurysm, saccular aneurysm, occlusion, stenosis, beaded appearance, wall thickening, pseudoaneurysm, fistula, or organ infarction. 13 CTA is being utilized increasingly for the diagnosis of SAM instead of angiography, due to its less invasive nature and its ability to assess the vessel wall as well. 6,7,11 For the serologic criterion, inflammation also needs to be ruled out with negative inflammatory markers. This is sometimes challenging, as elevated CRP can be seen in cases of acute illness, as reported in previous studies. 9 The differential diagnosis of this disease is very important, as there are multiple mimics that can make diagnosing SAM particularly challenging. 2,5,7,9,13 Atherosclerosis needs to be considered, but affected patients usually have diffuse atherosclerosis of multiple vessels, and the disease affects artery bifurcations. 5 SAM is usually isolated to medium-sized vessels and has no preference for bifurcations. 5 Vasculitides need to be ruled out, which is usually done with the assistance of inflammatory markers, as well as the absence of clinical findings that support a systemic vasculitis. 5 Marfan syndrome and cystic medial necrosis are also in the differential and can be ruled out by the absence of characteristic clinical findings as well as different histology (cystic medial necrosis). 5 Mycotic aneurysms also need to be considered, but the lack of systemic infection and inflammation makes these types of aneurysms unlikely in cases of SAM. 5
Fibromuscular dysplasia requires special mention, as it shares many features with SAM. 2,5,7,13 Many studies have examined the differences and similarities between these two conditions, concluding that the best way to differentiate between them is patient demographics. 5,13 FMD commonly affects young females and usually presents with hypertension; SAM affects middle-aged males and presents with abdominal pain or hemorrhage. 5,7,9,11,13 FMD affects the renal arteries, the intracerebral arteries, and the carotids, whereas SAM affects arteries of the celiac trunk and intracerebral vessels with different incidence. 2,5,13 It is particularly difficult to distinguish SAM from FMD in the renal arteries, as mentioned in previous studies. 2 In our case, the hepatic artery was involved, which is rarely the case in FMD. 5 Histology is the most conclusive way to differentiate between the two conditions. 5 Given our patient's medical history of protein C and S deficiency, deep vein thrombosis, systemic lupus erythematosus, and hypothyroidism, it is possible that these conditions contributed to the vasculopathy. The prevalence of vasculitis in SLE is reported to be between 11% and 36%. 14 Although the patient's inflammatory disease was in remission, as evident from the laboratory reports, the possibility that the pseudoaneurysm formed while the disease was active cannot be eliminated. This could have been ruled out if histopathology had been performed, which is a limitation of our case. The clinical picture of sudden abdominal pain, involvement of arteries from the celiac axis, and imaging findings of multiple pseudoaneurysms are typical findings for SAM and would correlate with it. Multiple cases with similar findings have been reported in the literature as SAM. SLE can lead to aneurysms, but multiple pseudoaneurysms still favor the diagnosis of SAM. The presented case could also be a secondary SAM, a rare complication of the patient's underlying diseases.
Treatment of SAM is conservative, with blood pressure control, anticoagulation, or antiplatelet therapy, 2 and active surveillance with serial imaging in most cases that present with unruptured SAM. 10 Most patients will have stable disease, with some having complete resolution of SAM, whereas others will have aneurysm progression or development of new lesions. 10 In cases presenting with rupture, or in unruptured cases that require intervention for the reasons mentioned above, surgical intervention is required, either with endovascular procedures or open surgery. 2,3,[8][9][10] Most recently published studies mention that endovascular techniques, the most popular being coil embolization, are preferred over surgery (79% of cases), with a high success rate, minimal complications, and nearly zero mortality. 2,[8][9][10] Open surgical intervention is reported in 20% of patients 2 with a higher mortality rate (9%). 3 In most studies, as in our case, surgery was performed due to failure of endovascular procedures to control the bleeding. 3,9

CONCLUSION

SAM is a rare non-inflammatory, non-atherosclerotic disease of medium-sized intra-abdominal vessels, most commonly affecting men between the ages of 50 and 60. The most common presenting symptom is abdominal pain, and it is usually diagnosed with CTA. The differential diagnosis should include vasculitis, FMD, atherosclerosis, mycotic aneurysms, and cystic medial degeneration. Most patients presenting with unruptured lesions will need continued surveillance but will not require surgical intervention. Patients with disease progression, as well as patients with rupture, will need surgical intervention, either endovascular or open surgery, with mortality being around 25%. Clinicians need to have a high suspicion for the diagnosis of this disease, especially if it presents acutely.
"year": 2023,
"sha1": "c130626c2277082090dfc6e896b668e55693f838",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "c130626c2277082090dfc6e896b668e55693f838",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Demonstration of Frequency Stability limited by Thermal Fluctuation Noise in Silicon Nitride Nanomechanical Resonators
The frequency stability of nanomechanical resonators (NMR) dictates the performance level of many state-of-the-art sensors (e.g., mass, force, temperature, radiation) that relate an external physical perturbation to a resonance frequency shift. While this is obviously of fundamental importance, accurate models and understandings of the sources of frequency instability are not always available. The contribution of thermomechanical noise to frequency stability has been well studied in recent years and is often the fundamental performance limitation. Frequency stability limited by thermal fluctuation noise has attracted less interest but is nevertheless of fundamental importance, notably in temperature sensing applications. In particular, temperature-sensitive NMR have become promising candidates for replacing traditional bolometers in infrared radiation sensing. However, reaching the ultimate detectivity limit of thermal radiation sensors requires their noise to be dominated by fundamental thermal fluctuation, which has not been demonstrated to date. In this work, we first develop a theoretical model for computing the frequency stability of NMR by considering the effect of both additive phase noise (i.e., thermomechanical and experimental detection noise) and thermal fluctuation noise in a closed-loop frequency tracking scheme. We thereafter validate this model experimentally and observe thermal fluctuation noise in SiN drum resonators of various sizes. Our work shows that by using resonators of specific characteristics, such as high temperature sensitivity, high mechanical quality factors, and high mass-to-thermal-conductance ratio, one can minimize additive phase noise below thermal fluctuation noise. This paves the way for NMR-based radiation sensors that can reach the fundamental detectivity limit of thermal radiation sensing and outperform existing technologies.
Nanomechanical resonator (NMR) frequency stability is the fundamental quantity dictating the performance of sensors measuring physical signals through resonance frequency shifts (e.g., mass [1][2][3][4], force [5,6], and thermal [7][8][9][10][11][12] sensors). Hence, identifying the source of noise that dominates frequency fluctuations in the absence of a signal is fundamental for understanding the performance limit of NMR. Early theoretical investigations of various sources of noise in NMR were proposed by Vig et al. [13] and Cleland et al. [14]. These theoretical works provided a comprehensive picture of multiple sources of noise in NMR, including surface diffusion, absorption-desorption, thermomechanical, detection, and thermal fluctuation noises. Thermomechanical noise was later identified as a dominant source of noise in many cases. Substantial efforts were therefore devoted to resolving the frequency stability limit imposed by thermomechanical noise [15][16][17][18], and to predicting its effect after processing by various frequency tracking schemes. Demir [19] recently provided a theoretical model for predicting the frequency stability of resonators, which considers the combined effect of thermomechanical and detection noise in commonly employed phase-locked loop (PLL) frequency tracking. This theoretical model was later validated experimentally by Sadeghi et al. [20], confirming that thermomechanical and detection noises can be the dominant noises in high-stress silicon nitride string resonators.

FIG. 1. Additive phase noise vs. thermal fluctuation. (a) Illustration of the characteristic difference between additive phase noise and thermal fluctuation. (b) Increasing the temperature coefficient α, resonance frequency f_r, mechanical Q-factor, vibration amplitude A_rss, or mass-to-thermal-conductance ratio m_eff/G of the nanomechanical resonator (NMR) minimizes additive phase noise relative to thermal fluctuation noise.
While thermomechanical noise has been proven to dominate frequency fluctuations in many high-Q-factor NMR, achieving regimes in which thermal fluctuation noise dominates remains of high interest, especially in temperature sensing applications. In temperature sensors, the smallest temperature that can theoretically be measured is ultimately dictated by thermal fluctuation noise. This limit is of particular interest in the field of radiation detection, as it dictates the fundamental detectivity limit that can ultimately be reached by thermal-based radiation detectors [21,22], or bolometers. In mechanical resonator-based radiation sensors [7][8][9][10][11][12], thermal fluctuation is typically not the dominant source of noise, which means that their ultimate limit of performance has not been reached. Likewise, the fundamental limit has not been reached in traditional (i.e., electrical) bolometers, since Johnson-Nyquist noise always dominates over thermal fluctuation noise [21]. NMR are potentially interesting due to their immunity to Johnson-Nyquist noise, but so far have been found to be limited instead by thermomechanical, detection [7,[9][10][11][12], or flicker noise [8].
In the current work, we experimentally demonstrate low-stress, high-Q-factor SiN drum resonators in which frequency instability is minimized down to the fundamental thermal fluctuation noise. We also incorporate thermal fluctuation noise into recently proposed models for frequency stability within a closed-loop frequency tracking scheme [19]. Our results and model therefore pave the way for radiation sensors that could reach the fundamental detectivity limit of physical radiation detectors.
We separate the sources of frequency fluctuation in a drum resonator into two categories, namely, the additive phase noise S_y,add(ω), which includes thermomechanical noise S_y,mech(ω) and detection noise S_y,det(ω), and thermal fluctuation noise S_y,th(ω). Here y represents fractional frequency (δf/f_r) fluctuations. We also define the intrinsic noise spectral density S_y^int(ω) before it is modified by the experimental frequency measurement scheme (e.g., open-loop or phase-locked loop frequency tracking, or self-sustained oscillation). For a given eigenfrequency f_r, the intrinsic frequency fluctuation caused by additive phase noise, S_y,add^int(ω), is therefore given by Eq. (1) of [19], where H_mech(jω) = 1/(1 + jωτ_mech) is a one-pole low-pass filter accounting for the mechanical time constant τ_mech = Q/(πf_r) of the resonator, k_B is the Boltzmann constant, T is the eigenmode temperature, m_eff is the mode effective mass, Q is the mechanical quality factor, A_rss is the vibration amplitude in steady state, and κ_d is a dimensionless parameter scaling the level of detection noise relative to thermomechanical noise, as described in [19]. More specifically, thermomechanical noise is resolved above detection noise when κ_d < 1.
In turn, the intrinsic frequency fluctuation caused by thermal fluctuation [13,22] is:

S_y,th^int(ω) = α² (4 k_B T² / G) |H_th(jω)|²,   (2)

where G is the total thermal conductance [12] in W/K between the resonator and its environment, H_th(jω) = 1/(1 + jωτ_th) is a one-pole filter accounting for the thermal response time τ_th = C_th/G of the resonator, and C_th is the heat capacity of the resonator in J/K. An analytical model for G (and hence τ_th) in drum resonators is developed in [12] and is used throughout this work. α is the temperature coefficient of fractional frequency shifts, in K⁻¹. For a drum resonator, a reasonable approximation (< 20% error relative to finite element modeling [12]) of this parameter is:

α ≈ -E α_T / [2σ(1 - ν)],   (3)

in which E is Young's modulus, α_T is the drum material thermal expansion coefficient, σ is the built-in tensile stress, and ν is the Poisson ratio of the drum resonator material. In Eq. (2), T denotes the drum resonator material temperature, whereas T in Eq. (1) denotes the temperature of the eigenmode of frequency f_r. We use the symbol T interchangeably in this work since both are assumed to be ≈ 300 K.
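As a numerical illustration of Eq. (2) as reconstructed above, the short script below evaluates the low-frequency thermal-fluctuation noise plateau. All parameter values are hypothetical order-of-magnitude placeholders, not the measured properties of the devices in this work.

```python
import numpy as np

k_B = 1.380649e-23  # Boltzmann constant, J/K

def S_y_thermal(omega, alpha, G, T, tau_th):
    """Eq. (2): intrinsic fractional-frequency noise PSD from thermal
    fluctuations, S_y,th(w) = alpha^2 * (4 k_B T^2 / G) * |H_th(jw)|^2,
    with one-pole filter H_th(jw) = 1 / (1 + jw*tau_th)."""
    H2 = 1.0 / (1.0 + (omega * tau_th) ** 2)
    return alpha**2 * 4.0 * k_B * T**2 / G * H2

# Hypothetical drum parameters (order of magnitude only).
alpha = 1e-3    # temperature coefficient, 1/K
G = 1e-6        # thermal conductance, W/K
T = 300.0       # temperature, K
tau_th = 0.09   # thermal time constant, s

print(S_y_thermal(0.0, alpha, G, T, tau_th))  # low-frequency plateau, 1/Hz
```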
A key difference between additive phase noise and thermal fluctuation noise is schematized in Fig. 1(a): additive phase noise manifests itself as amplitude fluctuation, which then contributes to the frequency noise via Robins' formula [23]. Conversely, thermal fluctuations make the resonance peaks of the NMR fluctuate directly in the frequency domain by affecting the stiffness of the drum material. We note that one can change the level of additive phase noise by utilizing different eigenmodes of the resonator [see Eq. (1)], since f_r and Q are mode-dependent. Likewise, additive phase noise can be minimized by increasing the vibration amplitude A_rss (within the linear actuation regime). On the contrary, mode and amplitude changes do not affect the level of thermal fluctuation [see Eq. (2)], which depends primarily on the drum resonator's geometric, material, and heat transfer properties (e.g., α and G).
To compare the relative contributions of S_y,mech^int(ω) and S_y,th^int(ω) to the overall frequency fluctuation, we define a dimensionless ratio γ:

γ = S_y,th^int(0) / S_y,mech^int(0).   (4)

This ratio ignores, for simplicity, the effect of both the thermal H_th(jω) and mechanical H_mech(jω) response filters by setting ω = 0. A value of γ ≈ 1 indicates that thermal fluctuation and additive phase noise are at a similar level. If γ is significantly larger than 1 and detection noise is minimized (i.e., κ_d << 1), a thermal-fluctuation-dominated noise profile can be achieved. By examining γ, we find that a thermal-fluctuation-dominated noise profile can be achieved by maximizing the values of Q, m_eff/G, α, A_rss, and f_r. Among these parameters, the mass-to-thermal-conductance ratio m_eff/G is likely the most unintuitive. Hence, in Fig. 2, we illustrate this ratio as a function of the dimensions of square SiN drum resonators of side length L and thickness t. Values of G are presented in Fig. 2(a) and are calculated using the model developed in [12], which includes conductive and radiative heat transfer. We note that for relatively small drum resonators (L < 1 mm), where heat transfer occurs mostly by conduction, both G and m_eff scale with the thickness, such that t cancels out in the m_eff/G ratio. Conversely, as L > 1 mm, radiative heat transfer dominates, and L now cancels, making the curves plateau in Fig. 2(b).
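To illustrate the conduction-to-radiation crossover just described, the sketch below evaluates a schematic two-channel conductance model, G ≈ G_cond + G_rad, with G_cond taken proportional to thickness and G_rad proportional to radiating area. The prefactors, material constants, and effective-mass fraction are rough placeholders and do not reproduce the full analytical model of [12]; only the qualitative trend (m_eff/G growing with L, then plateauing) is meaningful.

```python
import numpy as np

SIGMA_SB = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
RHO_SIN = 3.0e3       # SiN density, kg/m^3 (approximate)
K_SIN = 3.0           # SiN thermal conductivity, W/(m K) (placeholder)
EPS = 0.1             # effective emissivity (placeholder)
T = 300.0             # temperature, K

def mass_over_G(L, t):
    """Schematic m_eff/G trend for a square drum of side L and
    thickness t (SI units). G_cond ~ k*t (roughly L-independent for
    in-plane conduction); G_rad ~ 2 faces * eps * 4*sigma*T^3 * L^2."""
    m_eff = 0.25 * RHO_SIN * L**2 * t            # fraction of total mass
    G_cond = 4.0 * K_SIN * t                     # placeholder prefactor
    G_rad = 2.0 * EPS * 4.0 * SIGMA_SB * T**3 * L**2
    return m_eff / (G_cond + G_rad)

# m_eff/G grows ~L^2 while conduction dominates, then plateaus
# once radiation (~L^2) takes over, near L ~ 1 mm in this sketch.
for L_mm in (0.3, 1.0, 3.0, 10.0):
    print(L_mm, "mm:", mass_over_G(L_mm * 1e-3, 100e-9), "kg K/W")
```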
In summary, considering this trend in m_eff/G and the expression of γ, we find that minimizing additive phase noise below thermal fluctuation requires (i) a temperature-sensitive resonator (i.e., high α), which can be maximized using a low-stress material as suggested in Eq. (3); (ii) a large drum resonator (i.e., large L) to maximize m_eff/G; (iii) a high-order (i.e., high f_r) and high-Q eigenmode; and (iv) excitation at high amplitude A_rss within the limits of linear actuation. We finally note that, despite the trend observed in Fig. 2, increasing t is likely not a practical approach since high t can also be detrimental to the Q-factor [24,25].

To evaluate the spectral density of the frequency fluctuations S_y(ω) in a practical experimental setting, we must also consider the effect of the measurement scheme. A phase-locked loop (PLL) frequency tracking scheme, such as in the current work, includes a proportional-integral controller with proportional gain K_p and integral gain K_i, and an input demodulator filter of time constant τ_demod. As shown in [19], the PLL frequency tracking scheme imposes filters on the additive phase noises of the resonator (i.e., the thermomechanical and detection noises), where H_L(jω) = 1/(1 + jωτ_demod) is the demodulator filter. Since S_y,mech(ω) and S_y,th(ω) are both white noises, the PLL frequency tracking scheme imposes the same filtering effect on them, with the only difference being the time constant τ_th. We then incorporate the PLL transfer functions [Eqs. (5-7)] with S_y,add^int(0) and S_y,th^int(0) to obtain the noise after processing by the PLL frequency tracking. Finally, the overall fractional frequency fluctuation S_y(ω) of a resonator under the PLL frequency tracking scheme is simply S_y(ω) = S_y,add^PLL(ω) + S_y,th^PLL(ω).

SiN drum resonators used in this work are fabricated using the process provided in [22]. During characterization, the SiN drum resonators are mounted magnetically onto a steel plate via three pairs of spherical magnets, as shown in Fig. 3(b), and excited mechanically via a ceramic piezo actuator mounted on the other side of the steel plate [see Fig. 3(c)]. The magnet mounting method provides a minimum contact area between the mounts and the chip, thus minimizing mechanical dissipation. We place the SiN drum resonators inside a custom-built, high-vacuum (8 × 10⁻⁷ Torr typical operating pressure) chamber to minimize air damping (i.e., to maintain a high Q-factor) and convective heat transfer. The chamber is left to thermally stabilize for two days (48 hours) after pumping down before we perform frequency fluctuation measurements.
We detect the vibration signal of the SiN drum resonator using a laser interferometer that consists of a 1550 nm Orion™ laser with built-in optical isolation, a 90:10 optical fiber coupler, a 5 dB optical attenuator, an optical isolator, and a Thorlabs PDA20CS2 photodetector, as shown schematically in Fig. 3(c). The isolator at the location shown in Fig. 3(c) eliminates spurious optical cavities between the detector and the sample chip. The laser power output is set at 3.7 mW, which is then attenuated to 11.7 µW prior to reaching the SiN drum resonator via the 5 dB optical attenuator and the 90:10 optical coupler (i.e., 90% power attenuation). Laser power attenuation is critical to minimize the effect of laser heating, as observed in a separate experiment in the Supplemental Material [26]. The detected signal is sampled by a lock-in amplifier from Zurich Instruments. We use the phase-locked loop (PLL) along with a proportional-integral controller provided by the lock-in amplifier to track the resonance frequency of the SiN drum resonators.
We quantify the frequency fluctuation of SiN drum resonators in this work using the Allan deviation σ_A [27], a metrology standard widely used to characterize frequency fluctuations in nanoresonators. Based on the theoretically expected spectral density of frequency fluctuation S_y(ω), we can numerically compute the theoretical σ_A via the relation given in [19], where τ is the integration time. The asymptotic limit of σ_A (i.e., excluding all intrinsic and PLL filters) for white noises can be computed analytically as √(S_y(0)/τ), such that the asymptote specifically for thermal fluctuation noise is √(S_y,th^PLL(0)/τ). In order to observe additive phase noise and thermal fluctuation noise over a broad range of τ, the demodulation time constant in our experiment is set to a high bandwidth (τ_demod = 3.18 × 10⁻⁵ s) to minimize signal filtering at the lock-in input. Likewise, the PLL bandwidth is set ≈ 5 times faster than the thermal fluctuation bandwidth (τ_PLL ≈ τ_th/5) to prevent filtering of the thermal fluctuation noise S_y,th^int by the PLL. The corresponding K_p and K_i values for achieving this bandwidth are determined using the relations given in [19].
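The conversion from a noise PSD to σ_A can be reproduced with a short numerical integration. The sketch below uses the standard one-sided-PSD relation σ_y²(τ) = 2 ∫ S_y(f) sin⁴(πfτ)/(πfτ)² df and a single one-pole filter as a stand-in for the full PLL filter chain of [19], so it illustrates the shape of the curves rather than the exact model; note that PSD conventions (one- vs. two-sided) change the white-noise asymptote by a factor of √2. The numerical values are placeholders.

```python
import numpy as np

def allan_deviation(tau, S0, tau_filter):
    """Allan deviation from a one-sided fractional-frequency PSD
    S_y(f) = S0 / (1 + (2*pi*f*tau_filter)^2), via
    sigma_y^2(tau) = 2 * int_0^inf S_y(f) * sin(pi f tau)^4 / (pi f tau)^2 df."""
    f = np.logspace(-3, 6, 20000)  # integration grid, Hz
    S_y = S0 / (1.0 + (2 * np.pi * f * tau_filter) ** 2)
    x = np.pi * f * tau
    kernel = np.sin(x) ** 4 / x**2
    sigma2 = 2.0 * np.trapz(S_y * kernel, f)
    return np.sqrt(sigma2)

taus = np.logspace(-4, 1, 30)
sig = [allan_deviation(t, S0=5e-18, tau_filter=0.09) for t in taus]
# With this one-sided convention, the large-tau white-noise asymptote
# is sqrt(S0 / (2*tau)):
print(sig[-1], np.sqrt(5e-18 / (2 * taus[-1])))
```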
The geometric and material properties of the SiN drum resonators, and the eigenmodes chosen for actuation, are selected to minimize additive phase noise. Considering these drum resonator parameters, we compare, in Fig. 4(b), the expected Allan deviation σ_A plots for our model (labelled "ADD+TH") with those of the recent model of [19] that solely includes additive phase noise (labelled "ADD"). In this case, we consider a low-stress 3 × 3 mm SiN drum resonator at different levels of actuation A_rss, and we set κ_d = 0.012 to account for typical detection noise in our experiment. We note that at small τ (≲ 0.005 s), the two models overlap with each other, which indicates that additive phase noise is dominant. More specifically, S_y,det dominates, since S_y,th^int and S_y,mech^int are attenuated by their respective intrinsic filters H_th and H_mech (with τ_th = 0.09 s and τ_mech = 4.15 s). At intermediate integration times (0.005 s ≲ τ ≲ 0.07 s), and when A_rss is sufficiently high, thermal fluctuation noise becomes non-negligible. In this case, theoretical σ_A plots that include thermal fluctuation converge towards an A_rss-independent thermal fluctuation asymptote (√(S_y,th^PLL(0)/τ)) as τ increases. On the contrary, the σ_A plots in Fig. 4(b) that consider only additive phase noise do not exhibit this amplitude-independent convergence.
This difference between the models is confirmed experimentally in Fig. 4(c), where we find that the experimentally recorded σ_A match closely with our model. Conversely, the "ADD" model fails at intermediate τ values (i.e., in the "thermal fluctuation dominated region") when A_rss is sufficiently high for thermal fluctuation noise to dominate. Eventually, drift occurs and systematically dominates at τ ≳ 0.07 s. Note that the values presented in Fig. 4(c) are a limited subset of the more complete data set presented in Fig. 5. The experimental conditions and model fitting parameters discussed below for Fig. 5 therefore also apply to Fig. 4(c).
To further validate our theoretical model, we repeat the Allan deviation σ_A measurements for resonators of three different sizes (i.e., L = 1.5 mm, 3 mm, 6 mm) and for several drive amplitudes A_rss in Fig. 5. During experiments, we first excite the resonators at low A_rss, such that the experimental σ_A is overall slightly above the thermal fluctuation asymptote, i.e., additive phase noise is marginally larger than thermal fluctuation noise. We then increase A_rss progressively to reduce the additive phase noise. We find that all three drum resonators consistently converge towards the thermal asymptote at intermediate τ values when A_rss is sufficiently large. The convergence is then rapidly shadowed by drift at large τ. Correspondence between the model and experiment in Fig. 5 was obtained by using only fit parameters that are expected from our experimental uncertainties. Two fit parameters are used for all resonators, while a third one is needed only for the largest resonator. The vibration amplitude of the resonators A_rss and the detection noise scaling factor κ_d are fitted for all three resonators to account for the alignment uncertainty between the location of our optical fiber and the mode vibration anti-node. This misalignment is found to underestimate A_rss by fitted factors of (1, 1.2, 2.5) for the three respective resonator sizes. Likewise, fitting κ_d yields increases of (50%, 30%, 10%) relative to the κ_d values expected from our measurement noise floor (≈ 0.23 pm/√Hz) and from the theoretically expected thermomechanical fluctuations. Finally, we find that for the largest resonator, the experimental thermal time constant τ_th must be reduced by a factor of 2 relative to the theoretically expected value. It is possible that the very high order mode used in this case leads to a higher G (lower τ_th) due to several mode anti-nodes being located close to the heat-dissipating silicon frame.
We note that our measurements in Fig. 5 confirm important relations between the level of thermal fluctuation and the membrane dimensions and mechanical properties. We first observe that the level of thermal fluctuation noise (i.e., the vertical position of the TH asymptote) scales inversely with the drum resonator side length (i.e., with G), as predicted by Eq. (2) and Fig. 2. This has an important consequence in practice. While Eq. (2) and Fig. 2(b) suggest that large drum resonators are always better (to maximize m_eff/G), this is not entirely true in practice. As the membrane size increases, it becomes increasingly difficult to identify the thermal asymptote before drift occurs at higher τ. A trade-off therefore exists when designing sensors operating at the fundamental thermal fluctuation noise limit; large drum resonators should be used to maximize the m_eff/G ratio, up to the plateau observed in Fig. 2(b). However, the thermal fluctuation noise limit will eventually be shadowed by drift if the resonator is too large (L >> 3 mm).
In conclusion, we present and experimentally validate a model for computing the frequency stability of NMR, considering the effect of both additive phase noise and thermal fluctuation noise in a closed-loop frequency tracking scheme. We demonstrated that by using SiN drum resonators of properly designed geometric and material properties, one can minimize additive phase noise below thermal fluctuation noise. We also identified which inherent resonator properties (e.g., thermal conductance, thermal coefficient of frequency) most affect thermal noise, in contrast with additive phase noise, which can be minimized via other parameters (e.g., drive amplitude). Our work therefore provides fundamental guidance for building thermal sensors such as nanomechanical bolometers. We provide a path for these to reach the never-attained fundamental detectivity limit [21] of thermal radiation detection, which requires sensors limited only by fundamental thermal fluctuation noise [22].
A. Finding the optimum laser power
Optimizing the laser power is critical for our experiment: too high a power may cause spurious heating that affects thermal fluctuation readings, while low power increases detection noise and can therefore prevent observation of fundamental thermal fluctuations. In this supplementary experiment, we empirically identify a suitable range of laser power for observing the dominant effect of thermal fluctuation noise in our SiN resonators. We replace the fixed optical attenuator in Fig. 3(c) with a variable optical attenuator (Thorlabs V1550A) such that the laser power incident on the SiN resonator can be varied. We then drive the SiN resonator to an amplitude (40 nm) suitable for observation of thermal fluctuation noise (see Figs. 4-5), and we record Allan deviations for different laser powers. As shown in Fig. S1, in the thermal fluctuation region, the Allan deviation varies with laser power when the attenuation is in the 0 to 3.1 dB range, meaning that the laser at this intensity heats up the membrane and adds to thermal fluctuation noise. In the optimal 3.1 to 5 dB attenuation range, the characteristic bump of thermal fluctuation frequency noise is clearly noticeable and is now independent of laser power. We therefore chose a fixed 5 dB attenuation in this work. Above 5 dB attenuation, additive phase noise becomes excessive, such that the effect of thermal fluctuation cannot be observed anymore.
"year": 2022,
"sha1": "059f340f985c41b10436057f10a6c309619a52fa",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "059f340f985c41b10436057f10a6c309619a52fa",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
Autonomic nervous regulation of ovarian function by noxious somatic afferent stimulation
It is well known that ovarian function is regulated by hypothalamic–pituitary–ovarian hormones. However, although several histological studies have described the autonomic innervation of the ovary, the involvement of these autonomic nerves in ovarian function is unclear. Recently, it has been shown that both the superior ovarian nerve (SON) and the ovarian nerve plexus (ONP) induce vasoconstrictor activity by activation of alpha 1-adrenoceptors, whereas the SON, but not the ONP, inhibits ovarian estradiol secretion by activation of alpha 2-adrenoceptors. Furthermore, reflex activation of these ovarian nerves by noxious cutaneous stimulation of the rat hindpaw results in ovarian vasoconstriction and inhibition of estradiol secretion. Thus, in addition to long-term regulation of ovarian function by hormones, ovarian autonomic innervation may be involved in rapid regulation of ovarian function by responding to either internal or external environmental changes.
Introduction
Many studies have examined hypothalamic and pituitary hormonal regulation of such ovarian functions as ovulation and secretion of ovarian hormones [1,2]. However, although several histological studies have described the innervation of the ovary by vagal parasympathetic and sympathetic nerves, including both afferents and efferents [3][4][5][6][7][8], the involvement of these autonomic nerves in ovarian function has not yet been clarified.
The activity of autonomic efferent nerves and function of the target organs are regulated reflexly by somatic afferent stimulation, for example noxious mechanical stimulation of the skin. The neural mechanism of reflex responses in the sympathetic and parasympathetic nervous systems produced by noxious somatic afferent stimulation has been described for anesthetized animals, for which emotional factors are eliminated [9,10]. For example, cutaneous noxious mechanical stimulation of a hindpaw (pinching stimulation) increases heart rate and blood pressure via reflex activation of sympathetic efferent nerve activity to the heart and blood vessels [11]. The same somatic afferent stimulation increases adrenal sympathetic nerve activity and adrenal catecholamine secretion [12].
Concerning neural regulation of the female reproductive organs by autonomic nerves, some physiological functions have been demonstrated for afferents [13][14][15][16] and efferents [17] in the pelvic and hypogastric nerves innervating the uterus in rats. Furthermore, noxious somatic afferent stimulation has been demonstrated to cause both uterine contraction and an increase in uterine blood flow by reflex activation of the uterine parasympathetic efferent nerves of anesthetized rats [18].
Herein, we review the autonomic nervous regulation of ovarian function in anesthetized rats by noxious somatic afferent stimulation.
Autonomic innervation of rat ovary
Histological studies have revealed the distribution and innervation of sympathetic (splanchnic) and parasympathetic (vagal) nerves of the rat ovary, including both afferents and efferents [3][4][5][6][7][8]. Sympathetic nerves innervating the ovary emerge from the lower thoracic and upper lumbar spinal cord segments (mainly at T9 and T10) whereas vagal nerves originate from medullary neurons in the nucleus of the solitary tract, the dorsal vagal complex, the nucleus ambiguus, and the area postrema [3,7,8]. These autonomic nerves reach the ovary by two routes: the ovarian nerve plexus (ONP) along the ovarian artery and the superior ovarian nerve (SON) in the suspensory ligament [29,30] (Fig. 1). Histochemical and immunocytochemical studies have shown that the densities of nerves containing noradrenaline or neuropeptide-Y are high in the ovaries, whereas fewer nerves express acetylcholine, substance P, calcitonin gene-related peptide, vasoactive intestinal polypeptide, or other peptides [6]. Further, Burden et al. [3,31] demonstrated that adrenergic nerves enter the ovary through the hilar perivascular plexus, and tiny branches from this plexus extend into the contiguous steroidogenic interstitial gland cells.
Ovarian blood supply
The ovary receives blood from the ovarian and uterine arteries [32,33]. The ovarian artery, which branches from the abdominal aorta or renal artery, crosses the ureter ventrally and runs to the ovary. The uterine artery runs along the uterine horn in the mesometrium and anastomoses with the ovarian artery before entry to the hilus of the ovary. In the ovarian hilus, the arteries divide into medullary arteries and then enter the ovarian cortex. These cortical arteries divide repeatedly in the cortex and supply the ovarian stroma, follicles, and corpora lutea [34]. Figure 2a shows a vascular cast of a rat ovary, as observed with a scanning electron microscope [21]. Each ovarian follicle has a rich microvascular network. When examining the microvasculature on the surface of the ovary of anesthetized rats by video microscopy, arterioles and venules can be differentiated by their diameter, color of blood in the vessel, speed, and direction of blood flow. In the microvasculature shown in Fig. 2a, the arteriole ( Fig. 2b arrow), venule ( Fig. 2a*), and capillaries (for example, arrow heads in Fig. 2b) are distinguishable by reference to in-vivo observation. The diameter of ovarian arterioles on the surface of the vascular casts observed with a scanning electron microscope ranges from 11.4 to 34.3 lm (mean diameter, 21.9 ± 2.1 lm). This diameter is similar to that of ovarian arterioles on the surface of the ovary observed in vivo by video microscopy (range 13.1-42.9 lm; mean diameter, 22.1 ± 2.6 lm).
Effect of electrical stimulation of ovarian nerves
Reynolds and Ford [35] described ovarian vasoconstrictor activity in in-vitro experiments with pig ovaries. A vasoconstrictive effect on ovarian blood vessels was attributed to the sympathetic neurotransmitter, noradrenaline (NA), in in-vitro experiments with human ovarian blood vessels [36] and in in-vivo experiments with rats [37,38]. These earlier studies suggest the existence of adrenergic vasoconstrictor activity in the ovary.
In anesthetized rats, we examined the effects of NA and of electrical stimulation of the autonomic nerves to the ovary on the diameter of ovarian arterioles, by use of digital video microscopy, and on ovarian blood flow, by use of laser Doppler flowmetry [21,22]. NA (5 µg/kg), injected intravenously into the jugular vein over 20 s, resulted in a decrease in the diameter of ovarian arterioles and in ovarian blood flow, and an increase in mean arterial pressure (MAP) (Fig. 3a-e). The mean diameter of ovarian arterioles measured before NA injection was 23.0 ± 3.1 µm, but reached a minimum of 16.3 ± 2.9 µm 5 s after the end of the NA injection. Electrical stimulation of the distal part of the severed ONP resulted in a decrease in the diameter of ovarian arterioles and of ovarian blood flow, but did not change the MAP (Fig. 3f-j). The mean diameter of ovarian arterioles measured before ONP stimulation was 25 ± 1 µm, but reached a minimum of 19 ± 1 µm at the end of ONP stimulation. The time courses of the changes in ovarian blood flow were similar to those of the changes in the diameter of the ovarian arterioles. Electrical stimulation of the distal part of the severed SON also resulted in a decrease in ovarian blood flow (measured by the plasma flow rate of ovarian venous blood) that was similar to that produced by ONP stimulation [23]. Decreases in ovarian blood flow after electrical stimulation of either the ONP or the SON were abolished completely by administration of an alpha-adrenoceptor antagonist [23,24]. These results suggest that activation of sympathetic nerves to the ovary (both the ONP and the SON), and NA, a sympathetic neurotransmitter, induce vasoconstriction of ovarian arterioles, thereby reducing blood supply to the ovaries.
Sympathetic regulation of ovarian estradiol secretion
We examined the effects of electrical stimulation of the SON and the ONP on the rate of secretion of estradiol by the ovary in rats [23]. The rats were anesthetized on the day of estrus, and ovarian venous blood was collected intermittently through a catheter inserted into an ovarian vein (Fig. 4a). Plasma estradiol levels were measured by enzyme immunoassay. Under resting conditions, the mean concentration of estradiol in the ovarian venous plasma was 134.0 ± 13.3 pg/ml. The estradiol concentration in systemic arterial plasma (69.8 ± 6.9 pg/ml) was approximately 50% of that observed in ovarian venous plasma. The rate of secretion of estradiol by the ovary was calculated as the concentration of estradiol in the ovarian venous plasma minus the concentration in the arterial blood, multiplied by the ovarian venous plasma flow rate. Under resting conditions, the ovarian venous plasma flow rate, the concentration of estradiol in ovarian venous plasma, and the rate of secretion of estradiol ranged from 26.0 to 28.7 µl/min, 115.8 to 173.0 pg/ml, and 1.5 to 2.4 pg/min, respectively [23]. These values were stable for 45 min under resting conditions.
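The secretion-rate calculation described above amounts to a one-line mass balance; a minimal sketch follows, with example values taken from the resting-condition figures quoted in the text (the function name and unit handling are illustrative).

```python
def estradiol_secretion_rate(c_venous_pg_ml: float,
                             c_arterial_pg_ml: float,
                             plasma_flow_ul_min: float) -> float:
    """Ovarian secretion rate (pg/min) = (venous - arterial estradiol
    concentration) x ovarian venous plasma flow rate."""
    flow_ml_min = plasma_flow_ul_min / 1000.0  # ul/min -> ml/min
    return (c_venous_pg_ml - c_arterial_pg_ml) * flow_ml_min

# Resting values from the text: ~134 pg/ml venous, ~69.8 pg/ml
# arterial, ~27 ul/min plasma flow -> ~1.7 pg/min, within the
# reported 1.5-2.4 pg/min range.
print(estradiol_secretion_rate(134.0, 69.8, 27.0))
```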
The SON or ONP, ipsilateral to the ovary from which ovarian venous blood was collected, was stimulated electrically at an intensity that was supramaximal for C-fibers. Stimulation of either the SON or the ONP produced a decrease in ovarian venous plasma flow rate (Fig. 4c, d). During SON or ONP stimulation, the plasma flow rate decreased to 76 ± 3% and 74 ± 9% of the prestimulus values, respectively, and returned to the prestimulus basal level 5 min after the end of stimulation. On the other hand, the rate of secretion of estradiol by the ovary was reduced by SON stimulation but was not affected by ONP stimulation (Fig. 4c, d). During SON stimulation, the rate of secretion of estradiol was reduced to 53 ± 6% of the prestimulus values, and returned to the prestimulus basal level 5 min after the end of stimulation. These results suggest that activation of autonomic nerves to the ovary causes vasoconstriction and inhibition of estradiol secretion, independently (Fig. 5). Ovarian estradiol production is considered synonymous with release because steroid hormones, once produced, can freely cross the cell membrane without having to be packaged into granules and actively exocytosed [39]. Therefore, stimulation of the SON may reduce estradiol synthesis in the ovary. Furthermore, it was shown that the reduction of estradiol secretion during SON stimulation was blocked by an alpha 2-adrenoceptor antagonist (yohimbine) but was not affected by an alpha 1-adrenoceptor antagonist (prazosin) or a beta-adrenoceptor antagonist (propranolol) [25]. On the other hand, the reduction of ovarian blood flow during SON stimulation was blocked by an alpha 1-adrenoceptor antagonist but was not affected by an alpha 2-adrenoceptor antagonist or a beta-adrenoceptor antagonist. These results indicate that the decreases in ovarian estradiol secretion and ovarian blood flow in response to SON stimulation are caused by activation of alpha 2-adrenoceptors and alpha 1-adrenoceptors, respectively (Fig. 5).
Estradiol is synthesized from testosterone by aromatization in the ovary [40,41]. Recently, we examined whether the inhibitory effect of SON on estradiol secretion via activation of alpha 2-adrenoceptors was a secondary response to an inhibitory effect of sympathetic nerve stimulation on testosterone synthesis. The rate of secretion of testosterone by the ovary was also reduced by electrical stimulation of the distal part of the severed SON. The reduction of the rate of testosterone secretion by SON stimulation was not affected by an alpha 2-adrenoceptor antagonist but it was abolished by an alpha 1-adrenoceptor antagonist. These results show that SON has an inhibitory role in ovarian testosterone secretion, via activation of alpha 1-adrenoceptors but not alpha 2-adrenoceptors [26,27]. This, therefore, indicates that reduction of the rate of estradiol secretion by SON stimulation is because of direct inhibition of estradiol production.
Other examples of adrenergic innervation of endocrine secretory structures are found in the kidney [42] and pineal gland [43], where activation of beta-adrenoceptors increases secretion of renin and melatonin, respectively, and in the pancreas, where stimulation of alpha 2-adrenoceptors results in inhibition of insulin secretion [44][45][46].
Response of ovarian blood flow
The effects of noxious mechanical stimulation of a hindpaw (pinching stimulation) on ovarian blood flow, ovarian sympathetic nerve (ONP) activity, and MAP have been examined in anesthetized rats [19,20]. Pinching stimulation of a hindpaw for 30 s produced marked increases in ovarian sympathetic nerve activity and MAP (Fig. 6c, d). Ovarian blood flow decreased slightly (to 95% of the prestimulus control level) during the stimulation and then increased slightly (to 106% of the prestimulus control level) after stimulation (Fig. 6a).
After the ovarian sympathetic nerves (ONP and SON) were severed, pinching of a hindpaw was repeated. The MAP increased in the same way as before severing the ovarian sympathetic nerves (Fig. 6e) whereas a remarkable monophasic increase of the ovarian blood flow was observed (Fig. 6b). This increase in ovarian blood flow is explained by passive vasodilation because of a marked increase in MAP. When the ovarian sympathetic nerves are intact, the reflex increase in ovarian sympathetic nerve activity induced by pinching of a hindpaw may contribute to vasoconstriction of the ovarian blood vessels, and prevent an extreme passive increase in ovarian blood flow because of an increase in blood pressure [19,20].
After spinal transection at the third thoracic segment, the responses to hindpaw stimulation of MAP, ovarian sympathetic nerve activity, and ovarian blood flow were nearly abolished. These results indicate that, in central nervous system-intact rats, hindpaw afferents contribute to a supraspinal reflex pathway to the ovarian sympathetic nerves innervating ovarian blood vessels (Fig. 8a).
Response of ovarian estradiol secretion
In anesthetized rats, noxious mechanical stimulation (pinching) of a hindpaw for 5 min produced an increase in SON activity and a decrease in the rate of secretion of estradiol by the ovary (Fig. 7a, c) [28]. The increase in SON activity reached its maximum during stimulation, and the activity remained elevated for more than 15 min after the termination of stimulation.
The reduction in the rate of estradiol secretion became significant by 5 min after the end of stimulation and lasted for 20 min. The rate of estradiol secretion decreased to 71 % of the prestimulus basal value 15 min after the stimulation ended. The decrease in the rate of ovarian estradiol secretion was abolished by bilateral transection of the SON (Fig. 7b). Mean arterial pressure increased only during stimulation, and the MAP response was not affected by severing the SON (Fig. 7d, e). These results suggest that the decrease in estradiol secretion in response to noxious mechanical stimulation of a hindpaw may be a consequence of the reflex increase in SON activity (Fig. 8b).
After spinal transection at the second cervical level, the increase in SON activity in response to hindpaw pinching was abolished. This indicates that the reflex center for the increase in SON activity in response to hindpaw pinching is located in supraspinal structures. Pinching stimulation of a hindpaw for 5 min reduced the rate of estradiol secretion by the ovary but did not change the plasma estradiol concentration in systemic blood [28]. The clinical implication of this work is that ovarian dysfunction caused by activation of sympathetic nerves to the ovary is not immediately reflected in systemic blood hormone concentrations. This may be one reason why some ovarian diseases are difficult to detect clinically at an early stage.
Conclusion
We have reviewed several interesting results from recent studies. First, ovarian blood flow and ovarian estradiol secretion are controlled independently by sympathetic adrenergic innervation. Of the two pathways of sympathetic nerves to the ovary (SON and ONP), stimulation of either reduces ovarian blood flow via activation of alpha 1-adrenoceptors, whereas stimulation of the SON, but not the ONP, reduces estradiol secretion via activation of alpha 2-adrenoceptors. Second, reflex activation of ovarian sympathetic nerves by noxious cutaneous stimulation causes ovarian vasoconstriction and inhibition of ovarian estradiol secretion. The ovarian vasoconstrictive response is produced via reflex activation of the SON and ONP, whereas inhibition of ovarian estradiol secretion occurs as a result of reflex activation of the SON only (Fig. 8). These results suggest that reflex activation of sympathetic nerves to the ovary by stressful physical stimulation, for example noxious stimulation, may be involved in the rapid inhibition of ovarian function in emergencies. Rapid and direct regulation of ovarian function by the autonomic nerves may be an important adaptation of female reproductive function to internal or external environmental changes, in addition to the long-term regulation provided by hypothalamic-pituitary hormones. It is also possible that hyperactivity of sympathetic nerves to the ovary may contribute to ovarian failure, for example anovulation, by eliciting ovarian vasoconstriction and reducing ovarian estradiol secretion [33,47,48]. These findings extend our understanding of the neural regulation of ovarian function, which occurs in addition to hormonal regulation by the hypothalamic-pituitary-ovarian axis.
BRAC in Bangladesh and beyond: bridging the humanitarian–development nexus through localisation
ABSTRACT Since its inception, BRAC has combined emergency assistance with longer-term development interventions, grounding its approach in empowering local communities. Its experiences in navigating tensions across the humanitarian–development nexus and in debates around localising aid provide a useful perspective on the way in which these debates intersect, showcasing how prioritising a localised response is conducive to an approach that is "humanitarian in nature and developmental in solution". Through historical and contemporary perspectives, we explore how BRAC has adjusted its profile, adapted to new challenges, learned about new and changing settings, developed innovations, learned from mistakes, and dismantled boundaries to bridge humanitarian and developmental support in Bangladesh and beyond. These experiences highlight the importance of the "local" beyond geography, as evidenced by BRAC's engagement with the communities it serves and is part of, and its desire to move forward in an inclusive and socially just manner.
Introduction
From its 1972 inception as an active participant in the reconstruction of the newly independent Bangladesh, BRAC has grown its presence and impact in Bangladesh and beyond, expanding internationally to Afghanistan, the Philippines, Myanmar, Uganda, and South Sudan, amongst others. BRAC has long been recognised as a "pioneering" and "pace-setting" NGO, with its success rooted in its philosophy and identity as a learning organisation (Lovell 1992). It has received international recognition for its impact, innovation, governance, and sustainability work, including being repeatedly judged "the best" NGO globally, in 2023 by The Dot Good (formerly NGO Advisor).1 The awarding committee highlighted BRAC's impact and global outreach. It also lauded BRAC's coordinated efforts in humanitarian response, including its leadership in responding to the Rohingya refugee crisis and its continued action to meet the new development challenges that Bangladesh faces, including climate change and the COVID-19 pandemic.
BRAC's five decades of emergency response and development work sets its approach apart from a global norm that draws a clear distinction between humanitarian and development objectives. In responding to on-the-ground challenges in Bangladesh and beyond, BRAC's experience offers important insights into questions currently vexing the humanitarian and development communities, most notably debates around the "humanitarian-development nexus" and the "localisation" of humanitarian aid. It is these debates that we explore and inform in this paper.
This paper offers a historical approach to understanding BRAC's organisational approach and development. It explores how BRAC has operated at the centre of the "humanitarian-development nexus" and has been involved in "localisation" practices. We analyse how being a deeply rooted part of Bangladeshi society and community has positioned BRAC to respond with agility to emergencies in ways that reinforced, rather than sacrificed, the pursuit of longer-term developmental objectives. Looking beyond its Bangladesh experiences, we also argue that BRAC's model has naturally adapted itself to situations that required a more concerted humanitarian-development approach and a locally implemented and locally co-led response. While the fields of development and humanitarian action continue to be largely viewed and operationalised independently of each other, BRAC has long brought the two together to ensure sustainable, long-term benefits.
The following section introduces two key contemporary debates in the field of humanitarian response, namely the "humanitarian-development nexus" and the "localisation agenda". Section 3 then elaborates BRAC's organisational history and how this has shaped its positionality, identity, and operations to carefully bridge and respond to humanitarian and development needs. It also introduces BRAC's international expansion of these efforts and looks in detail at BRAC's response to the Rohingya refugee crisis and how it has utilised the space afforded by global localisation agendas to sustain a leading role in the crisis. Section 4 concludes by assessing the relevance of BRAC's rich history and positionality for the furthering of contemporary debates and knowledge around the humanitarian-development nexus and the localisation agenda. We reflect upon its positionality as a leading local and global organisation, a "hybrid NGO", and what this means for leading terminologies associated with the "localisation" agenda. We also explore how BRAC's strategies, operations, and, ultimately, its philosophy intertwine the humanitarian and development fields.
In doing so, we bring together a broad range of academic research and grey literature (including internal BRAC strategy and evaluation documents) across three main themes: BRAC's history in Bangladesh; BRAC's expansion in other international settings; and BRAC's involvement in the Rohingya response. We pay specific attention to the challenges and practices of localisation that BRAC faces (e.g. BRAC as "local" actor; BRAC as partner within larger humanitarian initiatives; BRAC as an external actor) and BRAC's involvement in either or both development and humanitarian activities. Combined with this desk-based research, the paper is also inspired by and includes the authors' personal experiences with and within BRAC,2 including how BRAC's experiences relate to broader debates within humanitarian and development realms.
Of course, a paper that combines authorship with personal practitioner experience must contemplate its ability to offer a fully objective view of the organisation and its weaknesses. We have certainly aimed for this, including by starting out the drafting process with a frank discussion on the successes and lessons learned in Bangladesh and beyond as relating to nexus and localisation practices. However, we highlight several limitations here. First, there is a paucity of critical literature explicitly mentioning BRAC, including on BRAC's activities at large and on its contributions to Bangladesh's Rohingya response. BRAC has urged researchers to be "as critical as possible" (Hossain and Sengupta 2009, 6), yet there are few critical voices of BRAC in public scholarship.3 We encountered several pieces reflecting on national actors in general terms (see Wake and Bryant 2018), but few singling out specific organisations. This reveals an important research gap in addressing the role that BRAC fulfils within the broader Bangladesh response, and how other actors within this response perceive BRAC as a local, national, and international organisation.
A further limitation of the study is in BRAC's strategy itself. BRAC works predominantly in non- or post-conflict settings, where development issues such as pervasive poverty and the repercussions of disasters shape its experience. That BRAC's organisational model was honed within these settings, and not others, will have shaped its experience and its positioning in nexus and localisation practices. During its more recent international expansion, it has become more familiar with environments experiencing conflict (e.g. Afghanistan and Sri Lanka); the associated challenges there have required new innovation and adaptation of its approaches. It has also encountered different challenges in Bangladesh itself. Its role in the humanitarian response to the Rohingya refugees from Myanmar was another watershed moment for the organisation. BRAC's experiences and the theoretical contribution of this paper need to be interpreted within this specific, more limited, light.
The humanitarian-development nexus
Humanitarian and development approaches have always been perceived as distinct entities, each drawing on different guiding principles. Humanitarian aid was traditionally geared towards short-term, immediate responses to needs in times of "crisis" (Hilhorst 2018). Humanitarian actors, especially those of the more classical, Dunantist leaning, emphasise the importance of humanitarian principles, including political neutrality, in order to ensure unbiased access to the people they need to assist. Fundamental to humanitarian aid is that its distribution is based on human needs alone: ideally, it is supposed to be neutral, impartial, and independent (Lie 2017, 201). While these original principles remain central today in how humanitarian action is framed externally, there is increasing acknowledgement of compromises made and critiques within and beyond the humanitarian sector on the ability to apply principles of political neutrality in practice (Terry 2022).
In contrast, often focusing explicitly on social justice and transformation, the strength of development NGOs and civil society actors lies in providing tangible alternatives to existing market- and state-driven development processes that do not meet the needs of the poor and other marginalised groups (Banks, Hulme, and Edwards 2015). To reorder relationships and accountability between the state, market, and civil society, development NGOs engage more closely with political direction, even if only about the course that development should take, reflecting certain political, economic, and gendered ideas of what constitutes a good life.
In short, while humanitarian aid was conventionally perceived as de-politicised, time-limited, life-saving assistance, the pursuit of development is considered an intrinsically politicised and longer-term process. Suhrke and Ofstad (2005, 3) highlight these differences as "the institutional gap" between humanitarian and development entities: "their different priorities, cultures and mandates" that create practical difficulties in coordinating their actions. Consequently, NGOs specialising in development aid and those foregrounding humanitarian efforts have so far largely worked independently of each other, even if at times they are one and the same or operating in similar circumstances.
However, the world is changing. Protracted crises and large-scale forced migration require new and more complex interventions that blur the long-held distinctions between development and humanitarian assistance (Medinilla, Cangas, and Deneckere 2016, I, vii). This raised questions as early as the 1990s about the "relief-development continuum" (Borton 1994). It has captured a larger interest among relevant actors in recognising the need to address humanitarian needs whilst preserving, if not improving, development at the same time. There is also increasing awareness that humanitarianism is not devoid of politics; rather, its efforts and the on-the-ground interpretation of its principles are profoundly connected to politics and power (Barnett and Weiss 2008; Brković 2017). Politically incentivised local humanitarian action has led to a reassessment of the importance of neutrality as a precondition of "good" humanitarians, accompanied by calls to accept "humanitarian resistance" as the "real" deal and the "authentic" practice of humanitarianism (Slim 2020; 2022, 21).
Indeed, in a classification of humanitarian aid, Hilhorst (2018) juxtaposes the classical Dunantist paradigm with the practice of "resilience humanitarianism", grounded in different ideas around crisis, response, and relationality and offering local and national actors a leading role. It has become widely accepted within some humanitarian circles that responses must strengthen resilience, through preserving and strengthening current systems and building long-term engagement (Stamnes 2016). Such discussions have informed the idea of the humanitarian-development nexus. Howe's (2019) framework categorises multiple "nexus actions", identifying a range of potential nexus relationships between development, humanitarian, and/or peace efforts.4 Traditionally, the concept of a humanitarian-development nexus is characterised by its transitional nature: it represents a sequencing of interventions that facilitates the move from an emergency to a "recovery" state. Work at this nexus, then, sees the linkages between humanitarian and development actions as a "bridge" or transition from one to the other (cf. Lie 2017), through interaction (and sometimes competition), rather than as two processes running independently. Several large international agencies, such as Save the Children and CARE, have worked successfully in this transitional nexus space and become respected for their work in both roles (Barnett and Weiss 2008). However, many organisations continue to operate within these two different "silos". Structural constraints to bridging these are significant. For example, humanitarianism and development have "their own separate tools, funding cycles and decision-making processes", and often distinct line management. Moreover, "development co-operation tools" often lack the flexibility to adapt quickly, making the shift from development to humanitarianism, in particular, difficult (less so the other way around) (OECD 2017, 1, 3).
This nexus, to an extent, is intuitive; the two domains are so inter-linked that they could be considered, at times, inter-dependent. This is particularly the case in "stable" settings facing natural catastrophes, epidemics, or large-scale displacement, the context of BRAC's main organisational evolution. Even work pursuing or sustaining traditional "development" is conditional on humanitarian work when crises or disasters occur. Humanitarian actors are increasingly incorporating notions that were originally more at home in the development field (e.g. improving "resilience" in disaster-prone areas (Medinilla, Cangas, and Deneckere 2016)), and development actors face a growing reality of recurring climate shocks requiring immediate assistance and relief. Yet traditionally the types of services, and the order in which humanitarian and development organisations prioritise these services, differ dramatically. "Livelihoods" is not a humanitarian priority, while "shelter" is, even though the two are closely connected in many refugees' and IDPs' lived experience. These practical differences impact the perspectives of different agencies and their investment choices and capacities.
This evolving nexus space continues to generate debate among scholars and practitioners (Howe 2019; Slim 2019), with questions around the loss of humanitarianism's apolitical purity and the risk of mandate drift (Lie 2017). Debates also surround the practicality and challenges of merging two endeavours with distinct timelines, mentalities, and expertise. In other words, can the nexus deliver on its promised objectives, or does it run the risk of creating increased complexity and diluted focus?
The localisation agenda
The localisation of humanitarian support is a less publicly contested, but no less divisive, topic. Its fundamental principle is the belief that local and national organisations should hold as much of the funds and decision-making power as possible, enabling locally led crisis responses and program design, with international agencies stepping in only when necessary. Greater inclusion of local actors has been advocated in debates around how to improve efficiency and address power imbalances in humanitarian action and (locally led) development (Roepstorff 2019).
As global crises have become protracted, localisation has become an important process for underpinning the sustainability of responses. In prolonged crises, humanitarian funding gradually decreases. Against this declining resource flow, international NGOs (INGOs) seek to transfer the burden of service provision to "local" organisations because of their own high running costs. In such instances, a localisation process essentially serves to create local capacity to "take over" once INGOs leave. In other contexts, local actors have been present (perhaps even most prominent) in responses from the beginning. They are, to varying degrees, dependent on international funding streams that are allocated and managed by INGOs, who therefore have an outsized influence on the response (cf. Roborgh 2021; Roepstorff 2019).
Although the localised provision of life-saving assistance preceded the advent of the humanitarian model as we know it, it was long associated with a form of less professional, less effective, and less principled care (DuBois 2020).5 However, the changing circumstances that have made the nexus more pertinent have similarly brought localisation again to the forefront, placing it at the heart of recent humanitarian reform efforts. In 2016, world leaders and multilateral agencies committed to the "Grand Bargain", a pledge to scale up support for locally led humanitarian action and allocate at least 25 per cent of global humanitarian funds to local and national actors by 2020. Little of this commitment has translated into practice, with direct funding to local and national actors remaining low.6 Renewed efforts are, among others, focusing on improving multi-year funding "channelled as close to direct delivery as possible" (IASC 2022, 2) and grounded in a collaborative approach (IASC 2022, 4).
Research highlights the ways in which different organisations strategically position themselves as "local" within this new landscape of humanitarian coordination. This has been evidenced in Syria (Roborgh 2021) and Bangladesh (Roepstorff 2021), amongst others. While the Grand Bargain placed "localisation" at the heart of humanitarian systems, the concept has been criticised for being "strikingly undertheorised with a number of key conceptual questions still unaddressed" (Roepstorff 2019, 285). We return to these in our concluding reflections. Nascent literature on "critical localism" has begun to ask "What is the local?" (MacGinty 2015; Roepstorff 2019; Taithe 2019), but questions have also been raised around issues of capacity and complementarity (Barbelet 2019; Roepstorff 2021).
Assumptions about the ostensible relative lack of capacity among local vis-à-vis international organisations have influenced, or at least ostensibly justified, the Grand Bargain's slow progress on localisation. Fast (2017) argues that international organisations have played a key role in setting the terms of this debate. Assumptions about capacity also affect what is known as "complementarity", the way the entire spectrum of organisations, ranging from multilateral organisations, INGOs, and national NGOs to local NGOs, comes together to assist those needing their help (Barbelet 2019).
Though these two debates around the humanitarian-development nexus and localisation are occurring largely independently of one another, in reality they are intimately intertwined and driven by similar global developments. The localisation agenda has been lauded as a critical foundation within the humanitarian-development nexus due to its assumed usefulness in strengthening local resilience (Stamnes 2016).
There are few organisations that can be used as case studies through which to explore both debates, and the experience of BRAC may prove informative here. Firstly, humanitarian and development approaches have both long been central to BRAC's identity. It often carries out activities associated with both simultaneously, in collaboration with, rather than independently of, one another. Emerging from the aftermath of Bangladesh's Independence War and in a country geographically vulnerable to drought, cyclones, flooding, and famine, this was in part driven by necessity. In BRAC's vision of pursuing holistic community-driven development, responding with urgency to emergencies is critical to community resilience and must be part of any developmental trajectory. Driven by necessity and philosophy, in this respect, BRAC has a long history of combining disaster relief and emergency support with development activities. Secondly, alongside serving as a local and national organisation within its domestic context, BRAC also holds status as an "international" organisation (and a Southern one at that) across the other countries in which it operates. These factors make it an example of a "hybrid NGO", one that fits uneasily into the humanitarian sector's simplistic language around "localisation", given its operations across the humanitarian-development nexus and at "local", "national", "international", and "global" levels. We next explore BRAC's experiences in these two dimensions.
BRAC and the humanitarian-development nexus
Understanding BRAC's positionality in the humanitarian-development nexus requires a historical approach, as it is strongly influenced by Bangladesh's broader national history (cf. Zaman et al., this Special Issue). Lovell (1992, 9) highlights that to "understand BRAC one must understand the context that influences, impels and enables its work". Two particular historical events served as watershed moments in BRAC's development: the 1970 Bhola Cyclone, followed one year later by the Liberation War against West Pakistan. Ahasan and Iqbal (2022) highlight how the suffering caused by these events contributed to "a deep, collective humanitarian consciousness". Facing "neglect and apathy" from the government in West Pakistan and a disempowering portrayal by international media, Bengali people themselves spearheaded the recovery. In addition to this "humanitarian consciousness", there was also, therefore, "a period of intensifying political consciousness", forming "the roots for an extraordinary episode of 'vernacular humanitarianism'" (Ahasan and Iqbal 2022, i; Brković 2020). The resulting approach and philosophy still influence BRAC's course today, as well as that of other Bangladeshi actors (Ahasan and Iqbal 2022, i; Brković 2020).
"Vernacular humanitarianism" denotes "aid provided by various local actors in tune with their socio-historically specific ideas of humanness, as a response to an emerging need that cannot be adequately addressed through conventional channels of help" (Brković 2020, 224). It consists of "local, grassroots forms of helping others that are less visible and less dominant than the international ones" (Brković 2017). This is thus a humanitarian response that is "local" in the truest sense of the word, not because it is carried out by in-country actors but because of its roots in the "ecological, sociopolitical, and economic realities of the country" (Ahasan and Iqbal 2022, i).
Through engaging in post-cyclone and post-war rehabilitation processes, volunteers developed "a distinct way of learning" that increased their sensitivity to their locus of action, aiding in recognising and responding to communities' needs. Sir Fazle Hasan Abed was one of these volunteers and would go on to create BRAC, fostering a response that "transformed the trajectory of Southern humanitarianism in Bangladesh, deeply embedded in the historical, political, and cultural landscapes of the region" (Ahasan and Iqbal 2022, i-ii).7 BRAC's drive for institutional learning and connectedness with the community, often leading to innovative practice, has become embedded in BRAC's organisational culture and carried forth across generations of BRAC staff and strategies. The grassroots beginnings of BRAC's humanitarianism resulted in "a distinct Southern development discourse and practice" that has not yet been sufficiently understood and theorised (Ahasan and Iqbal 2022, ii; Smillie 2009).
Bangladesh found itself at Independence with a large and largely illiterate population suffering from years of underinvestment by the former West Pakistan government. Poverty and extreme poverty were high, and the land remained geographically vulnerable to natural disasters. Women were particularly vulnerable to social and economic deprivation. Against a backdrop of political instability in which formal political systems and institutions struggled to provide social services and livelihoods, the third sector of NGOs in Bangladesh rose quickly to the challenge. While initially small and with limited impact, BRAC was a pioneer in responding to the acute humanitarian crisis in the country caused by this quick succession of catastrophes. It rapidly learned that it would not be able to "sustain" its relief and development work if it did not foreground the self-reliance of people and communities in its approach. An integrated approach, focused on building institutional infrastructure and developing human potential, soon followed (Ahasan and Iqbal 2022, iii).
A historical overview by BRAC characterises the evolution of its approach as "a process of learning by doing", of learning from successes and failures from its moment of inception. The organisation realised that Bangladeshi communities faced numerous structural constraints to overcoming poverty, which could only be comprehended through staff immersion within these communities and community representation within its staff. This deeply participatory approach to community development led to an "extraordinary fit between beneficiary needs, program outputs and the competence of the organisation" (Lovell 1992, 4). BRAC's "understanding of poverty as a deeply relational and complex social phenomenon reflecting power imbalances" shaped its focus on "inclusive development within existing rural power structures and national development plans". Supporting women played a central role in this (Ahasan and Iqbal 2022, iii; see also Smillie 2009; Ahmed, Hopper, and Wickramsinghe 2007).
We must distinguish between BRAC's laudable intentions and on-the-ground realities, which could differ. Although women were prioritised from early on, female staff rarely occupied managerial positions, for example (Ahmed, Hopper, and Wickramsinghe 2007, 32-33). Its microfinance success rates also created intense pressures on communities (Ahmed, Hopper, and Wickramsinghe 2007). Some field staff described visiting the same home several times a day to collect weekly payments (Ahmed, Hopper, and Wickramsinghe 2007, 26), suggesting that BRAC's reputation and programmatic successes at times took precedence in implementation. Community empowerment, a key element of the organisation's philosophy, could similarly be hampered in practice, with Ahmed, Hopper, and Wickramsinghe (2007, 28-29) also describing practices that weakened BRAC's intended accountability towards its beneficiaries.
Nevertheless, it is clear that BRAC's general operational success was made possible by its localised humanitarianism and development model, its focus on people and the community, and its ethos of working with and learning from them. This, combined with its entanglement with the direst moments in the country's history, explains its ease in shifting between and/or developing hybrid action inclusive of both humanitarian and development activities and philosophies. A stricter separation would deeply underestimate people's and communities' agency and ability to adapt.
As it has spread globally, BRAC International has applied this same philosophy and values. In 2002, the Afghan government invited BRAC International to start operations there in the fields of education, healthcare, agriculture, microfinance, and female empowerment. It quickly became "one of the largest NGOs" in Afghanistan (Chowdhury, Alam, and Ahmed 2006, 677). BRAC replicated the "holistic approach" it had honed in Bangladesh to address the root causes of poverty (Chowdhury, Alam, and Ahmed 2006, 679) and focused on mid- to long-term development. It was a rewarding experience for the NGO, illustrating the replicability of its model in other settings and heralding an appetite for involvement in other countries (Chowdhury, Alam, and Ahmed 2006).
Stepping into a new context as an international responder was a different experience for BRAC. The vernacular humanitarian response that gave birth to BRAC, and its consciousness of Bangladesh's "ecological, socio-political, and economic realities" (Ahasan and Iqbal 2022, i), no longer applied. This did not stop BRAC from embarking upon its contributions in Afghanistan with the same values and ethos, investing heavily in local staff and research capacity to facilitate deep learning about the new setting (Chowdhury, Alam, and Ahmed 2006, 678). BRAC ensured, for example, that Afghan nationals came to occupy leadership positions: in 2020, "the entire programmatic leadership of BRAC Afghanistan [...] comprised of Afghan Nationals" (BRAC Afghanistan 2020, 5).8 Yet the organisation struggled initially to get donor support when it entered the international humanitarian and development stage as an international actor. Even staunch supporters of BRAC in Bangladesh, such as Novib (now Oxfam Novib), at first could not see BRAC as a potentially meaningful actor in Afghanistan (Smillie 2009, 226). Although BRAC emphasises its position as a Southern humanitarian NGO and continues to herald its narrative of local empowerment in its international offices, we must ask to what extent it has become yet another external player in local actors' eyes. Even on its home turf of Bangladesh, BRAC was categorised by some analysts as an example of an NGO that was "reactive" and adaptive to more powerful actors, such as international donors (Ahmed, Hopper, and Wickramsinghe 2007, 22, 23).
Following the Indian Ocean tsunami, BRAC began emergency response programs in Sri Lanka, before moving quickly towards a broader development agenda that sought to address people's and communities' short- and long-term needs (BRAC 2014). Again, this dual approach, which sees humanitarian response and development strategies as intrinsically linked, best represents the values in BRAC's DNA; there is no artificial separation of this nexus for BRAC. However, not everyone shared this view. Practitioners report how BRAC's approach drew disparaging remarks from Sri Lankans and other global humanitarian responders, as it challenged and disturbed their modes of operation.9 Although BRAC became respected for its service provision in these new settings, it sometimes struggled with managing the expectations of local staff and local populations. BRAC's developmental model lacked some of the perks of other international organisations (e.g. salaries, vehicles, and management styles), leading to initial scepticism and varied perceptions among staff and populations about BRAC's ability to deliver services (Cronin 2008). Its reliance on local staff in Afghanistan, for example, led to it often being perceived as a "local" organisation by the local population, aiding trust-building and program implementation. In Uganda, by contrast, BRAC was considered more of an international NGO and initially struggled with being perceived there as a "Southern" organisation. Some local Ugandans wondered what this "Indian organisation" (as it was sometimes misunderstood) could assist them with (Cronin 2008, 138, 165, 166). Over time, the way local people and partners, including national authorities, perceive BRAC can change based on its programmatic and management practices.
A key criticism of BRAC's international expansion was that it did not, at first, adjust its approach in these new settings. This pertained to programs, but also, importantly, to the implementation of Bangladeshi management and communication practices. Cronin (2008) describes, for example, how Afghan local staff did not appreciate the more aggressive communication style of Bangladeshi management, which transposed Bangladeshi social hierarchies onto the new country office. While BRAC attempted to adjust to local needs and conditions (Cronin 2008), more research is needed to fully understand whether it has been able to serve as a more "tuned-in", empowering organisation than its Northern counterparts in these settings. There are indications that this might be the case, due to the origins of both BRAC and Bangladesh. Contrary to Northern NGOs, BRAC's Bangladeshi origins arguably facilitated a less imbalanced, peer-to-peer relationship. Among other things, national-level leaders from BRAC International countries were invited to Bangladesh to be shown what can be accomplished the "BRAC way".
Although BRAC International is inspired by the approaches and programs developed in Bangladesh, the analysis above emphasises that much more than these is exported to new settings. It is BRAC's ethos that BRAC International carries with it to inform its approach elsewhere. This includes prioritising research in new local contexts, centring the experiences of local populations, and using this knowledge in program design, implementation, and evaluation. It also includes its prioritisation of reaching the poorest and its drive to find innovative solutions to do so; a reliance on and fostering of local talent to form the initial backbone and leadership of new national chapters; careful monitoring of programmatic impact and a willingness to go back to the drawing board and fine-tune ideas; attempts, where possible, to make interventions long-lasting and self-sustaining; a high level of ambition in improving developmental and programmatic outcomes; and a willingness to acknowledge errors and adjust interventions and business culture to be more sympathetic to the local experiences of staff and populations. These recurrent features may not always come to full fruition in all its home and international endeavours; BRAC is not impervious to errors and insensitivities, as illustrated above. But they are generally visible in international case studies, they shape BRAC's engagement in both humanitarianism and development, at home and afield, and they may serve as best practices and lessons learned for others.
The localisation of humanitarian aid: BRAC's response to the Rohingya crisis
In 2017, nearly one million Rohingya refugees crossed the Bangladesh border from Myanmar to escape ethnic and religious persecution. Most took shelter in Cox's Bazar, establishing "the largest and densest refugee settlement in the world" (Wake and Bryant 2018, 1). While the nation of Bangladesh and BRAC have a long history of quick humanitarian responses to environmental and ecological crises, the Rohingya refugee crisis stood apart as a man-made crisis involving political and cross-border elements. It required an unparalleled humanitarian effort not seen in Bangladesh since the recovery efforts after the Liberation War.
Cox's Bazar's Rohingya response has become an important case study in localisation debates. A number of strengths helped mitigate the crisis, including the existence of a strong "eco-system" of in-country emergency response; heavy investments in developing governmental and non-governmental responses to natural disasters; and an army experienced in peacekeeping (Van Brabant and Patel 2018, 62). The "right to localisation" played an important role for local NGOs, who made direct references to the Grand Bargain and its localisation agenda for leverage (Roepstorff 2021, 9, 12). As we return to in our concluding reflections, this also exposed some of the contradictions of "localisation" terminology and debates.
There were several unique elements to the coordination of this response. First, there was a big role for the Bangladesh government, which awarded the International Organization for Migration (IOM) a leading role and insisted on strong leadership roles for Bangladeshi organisations in the response (Parker 2017; Wake and Bryant 2018, 10).10 A shift away from the usual leadership of UNHCR created controversy and confusion among international organisations, who feared a watering down of the status, rights, and duties of the Rohingya as refugees, and a sense of competition among UN agencies for space, resources, and recognition (Parker 2017; Wake and Bryant 2018, 12). Several coordination bodies emerged, led by different stakeholders, including the locally organised Cox's Bazar Civil Society Forum led by COAST (CCNF).
Despite these efforts, local agencies found themselves as little more than sub-contractors in "distribution-oriented projects" for international "partners" who displayed attitudes of "superiority" rather than genuine partnership (Van Brabant and Patel 2018, 62). Smaller local partners "could not deliver assistance or handle funding on a large scale" (IRIN News 2013). While BRAC, in contrast, was perceived as a strong local partner with "the required capacities and processes", this did not necessarily translate into partnerships with them (Wake and Bryant 2018, 28, 29; see also IRIN News 2013). Nevertheless, BRAC was soon considered to be one of the "main players" (in addition to the core UN agencies and a handful of large INGOs) in terms of financial power and influence (Van Brabant and Patel 2018, 62).
It may have been the first time that BRAC dealt in earnest with the global humanitarian system and architecture on its own soil. However, BRAC's role and leadership were considered a "standout example" of a national NGO in the Rohingya response (Rieger 2021, 41). It managed, at one point, several camps directly and provided a range of emergency support and development assistance in all camps, partnering with local and international organisations in the process (Rieger 2021, 41). Other local organisations viewed BRAC and fellow Bangladeshi organisation COAST as "effective representatives and advocates for the localisation agenda" (Wake and Bryant 2018, 32).
This does not mean that BRAC's role has been without tension. Pro-localisation national organisations have expressed discontent with BRAC's leading role. Because BRAC is a Southern NGO behemoth that has become an international player, it was seen as unrepresentative of other humanitarian initiatives in the camps (Wake and Bryant 2018, 32). Reflecting wider debates on what constitutes a "proper" local organisation,11 there were attempts to categorise Bangladeshi agencies like BRAC as "national" and to equate localisation processes with more "local" organisations in Cox's Bazar. Despite its strong commitment to local empowerment and bottom-up programming, BRAC possessed few of the characteristics that had made national actors around the world so passionate about how the localisation agenda could restore power dynamics within the humanitarian response.12 In practice, a situation emerged where the presence of organisations such as BRAC and COAST was considered sufficient "local" engagement by international humanitarian actors, with few other local actors included (Wake and Bryant 2018, 32). More research is required on how BRAC handled this position and whether and how other organisations felt represented and empowered by BRAC.
This uncertainty about how "local" BRAC is has created questions of ownership and participation in the response. Pro-localisation campaigners have claimed that the CCNF is the only legitimate platform for local responders through which localisation can be promoted. The CCNF has criticised local organisations' lack of access to funding awards and decision-making forums. It has also raised questions about the localisation strategy implemented by international organisations. In this, the role of BRAC University's Centre for Peace and Justice was criticised, for example, for showing "little results" in its task of creating a localisation road map (The Business Standard 2020; The Daily Star 2020).
The de facto establishment of a hierarchy among national NGOs can create tensions within the local response. Organisations such as BRAC, with its experience in international engagement, familiarity with international professional standards, and well-educated (often Western-educated) leadership, in many ways tread the line between "local" and "international", playing different aspects of their identity as and when needed for leverage. They can serve as crucial brokers for other organisations and the wider national response, but can similarly serve as another set of gatekeepers (see also Roborgh 2021; Roepstorff 2021).
The complexity of BRAC's role within localisation can also be observed with regard to its nexus positionality. BRAC's multi-dimensional approach to interventions is clear in the Rohingya camps. Some interventions are common to "traditional" humanitarian responses, but many are more developmental in nature. For example, BRAC organised several community-led interventions around health, including training community health workers, hygiene promotion, and WASH committees.13 Since many trees were destroyed during shelter construction, BRAC also mobilised the Rohingya community to plant trees in the camps and promoted homestead vegetable gardening to improve nutrition. It provided skills training to help residents meet their daily needs and build livelihoods (practitioner reports and BRAC 2020).
One key area in which BRAC disrupted the traditional humanitarian response was its approach to children and education. Education accounts for "less than 3 per cent of the global humanitarian aid budget" (Erum, Ahmad, and Sarwar 2021, 134-135). This is despite the fact that "protracted refugee situations" now continue for 26 years on average, with generations spending their full formative years in a refugee context (Erum, Ahmad, and Sarwar 2021, 134). Despite a UNICEF-coordinated Child Protection Sub-Sector within the Rohingya response, the focus of humanitarian aid was on meeting the short-term needs of Rohingya children rather than their long-term educational and development needs (Erum, Ahmad, and Sarwar 2021, 134-135). In contrast, BRAC prioritised coordinated action around early childhood development and education, designing a "Humanitarian Play Lab" for children under six that offered play-based learning and psychosocial support. It implemented a bottom-up, "community-based participatory approach", building educational content from Rohingya cultures and experience (Erum, Ahmad, and Sarwar 2021, 133, 146), and ensuring community ownership by employing young Rohingya women. BRAC ran 304 labs in the camps (as of December 2019), reaching approximately 41,000 Rohingya children (Erum, Ahmad, and Sarwar 2021, 136).
Two factors enabled BRAC's Play Lab activities in the camps in these challenging circumstances. First was BRAC's positionality as a Bangladeshi organisation, as well as its activities in "all sectors", allowing it to spread and scale up rapidly across the camps. As Erum et al. highlight, BRAC embarked on its humanitarian response in the Rohingya camps with "scalability and replicability in mind" (2021, 137). Second was its approach, which remained focused on the agency of people and communities. Its desire to involve affected communities in program design, planning, and implementation remains central (2021, 145). The Play Labs support the notion that localisation can serve as an important facilitator for the success of humanitarian-development nexus activities. Indeed, it is hard to imagine running such bottom-up, yet large-scale, development-oriented interventions in the camps without a robust local presence (see also DuBois 2020).
However, BRAC's Rohingya response also highlighted its vulnerability. Concerned that the framing of the response in humanitarian terms neglected the refugees' longer-term needs, BRAC moved quickly from relief distribution towards meeting those longer-term development needs, including through initiatives like the Play Labs. This was a sensitive area given the government's desire to repatriate refugees rather than provide long-term protection in Cox's Bazar. This view of the government was particularly relevant to local and national organisations like BRAC, who, unlike the international actors operating there, had to maintain a good relationship with the authorities for their survival (Rieger 2021, 18, 41).
BRAC's continuing reliance on government tolerance for its operations in Bangladesh means it provides an informative case study for debates on localisation that increasingly highlight the politicised and contentious nature of interactions with authorities. It shows how, in certain politically restrictive environments, a position of "political astuteness" is critical for operational impact and sustainability.
Concluding reflections
Reflecting on BRAC's experience introduces a nuanced and complex perspective on the linkages between humanitarian and development aid and localisation processes. While the fields of development and humanitarian action continue to be largely viewed and operationalised independently of each other and remain dominated by Northern-led power dynamics, Bangladesh's experience of responding to the huge Rohingya influx is unique in bringing these three dimensions of humanitarianism, development, and locally designed action together to ensure sustainable and future-oriented interventions, rooted firmly in the people and communities involved. Recurrent tenets can be found in the manner in which BRAC operates and achieves its objectives, whether "at home" in Bangladesh or internationally.
A locally empowering approach that is, like BRAC's, humanitarian in nature and developmental in solution would benefit many communities facing chronic crises. The urgency of integrated nexus responses can only become more acute in the context of climate change. There is a need for resilience, innovation, and adaptation to ensure readiness to respond to increasingly frequent and severe shocks, especially when those most impacted (and displaced) will be the poorest. Such an approach was also critical to national resilience during the COVID-19 pandemic, with BRAC's research infrastructure and its ability to identify and respond to the breadth and depth of the social and economic crisis that the pandemic triggered leading to quick action and response. PPRC and BIGD research revealed the profound impacts on urban informal settlement households, around 20 per cent of whom experienced a fall below the poverty line (Rahman et al. 2021). Crises like this threaten to set back Bangladesh's developmental achievements unless responded to quickly and at scale, as BRAC did through its "New Poor Program" (see Gomes et al. 2023, this Special Issue). The ability to identify problems quickly and move within and across developmental and humanitarian boundaries through well-capacitated local responses proves critical.14

BRAC's experience highlights that it is not the geographic definition of the "local" in localisation agendas that underpins its successes in Bangladesh's localised responses or in its work at the humanitarian-development nexus. In localisation debates, the "local" constitutes a geographic term that refers to the role that regional, national, and sub-national actors can and should play in countries experiencing humanitarian crises. As we have seen here, BRAC's strengths as a humanitarian and development actor lie not only in its geography but in its rich history of vernacular humanitarianism, born of deep knowledge and respect for communities across Bangladesh and a desire to rebuild with them, recover, and move forwards in inclusive and socially just ways.
Looking across both the localisation and nexus debates, the experience and voice of an organisation like BRAC is disruptive, demonstrating the case of a "hybrid NGO" in the humanitarian and development fields. Not only does BRAC operate as a strong local and national organisation in its Bangladesh operations, it is also emerging as an international organisation globally. It does so through a programmatic approach that sees humanitarian and development activities as complementary, rather than separate. Of course, this role as a "disruptor" is accompanied by challenges.
It threatens a well-established and powerful global architecture accustomed to working in strictly delineated ways. And through its international work, BRAC has learned that it is not simple to transport reconstruction and development solutions from one geographic context to another. While BRAC does not hold the same vernacular knowledge of people, cultures, and contexts in its new countries of operation, it seeks to build this through maintaining a focus on the agency of people and communities and investing in local research to generate knowledge.
This case study of BRAC also illuminates contradictions in the concept and terminology of "localisation" in the humanitarian sector. It becomes visible, when taking a "hybrid" NGO like BRAC as the focal point of analysis, that the universal notion of "the local" lacks analytical precision. "Localisation" as a process has been conceptualised by the humanitarian sector from a narrowly focused perspective that equates "international" with "Western". In its implementation, these tensions and contradictions are quickly revealed. It becomes clear that the language of localisation was never intended to recognise the new relationships, collaborations, and antagonisms between Southern organisations (and between national organisations and governments) as these new landscapes play out. Against the inefficiencies and inequities of a humanitarian system that conceptualises Western organisations as the senders of aid to non-Western "others" in need, it has also overlooked the important and international role of Southern organisations like BRAC that are working at a global scale.
Returning to the humanitarian-development nexus, a more fluid reality of humanitarian and developmental practice needs to be acknowledged and strengthened, a reality in which the two will at times be complementary and at times need to phase in and out. In this reality, the "nexus" concept can provide a guiding and facilitating role but should not itself lead to new restrictive dogmas. Artificial boundaries between humanitarian and development actions in protracted crises should not be set in stone, particularly for local actors, for whom a more long-term perspective focused on future survival and prosperity perhaps comes more naturally. Local actors are in a better position than international organisations to assess the scale and depth of new crises and to respond at pace, as well as to gain legitimacy for undertaking nexus activities in complex and sensitive political contexts. In this, the localisation agenda is right in identifying the "local" as key actors deserving greater funding and decision-making power in humanitarian action.
Yet, in its current use, the "localisation" terminology in humanitarian and development circles has not yet recognised the depth of comparative advantage that "the local" can have in terms of its deep-rooted knowledge of, and respect for, a community and country's social, political, economic, ecological, and environmental past, present, and future. Notwithstanding issues emerging from new hierarchies of "the local", current supporters of localisation would do well to fully appreciate the potential of the nexus. The importance of local actors in designing, implementing, and scaling up locally relevant programs that bridge the humanitarian-development divide can ensure that peoples' short- and longer-term needs remain centre stage. Examples like BRAC, then, provide a crucial argument for taking the localisation agenda forwards to improve humanitarian and development outcomes for refugees and other vulnerable communities.
Notes
1. See The Dot Good (n.d.).
2. It is explicitly reported within this paper whenever a statement is based on practitioner experiences.
3. This raises questions about the way in which scholars who form part of, are temporarily affiliated with, or whose research is facilitated by the organisations they study may engage with these organisations. Scholars have, for example, published important work that recognises BRAC's unique positionality and history (cf. Lovell 1992; Smillie 2009). Yet there may be a hesitancy to be overly critical, perhaps because of general support for the organisation's objectives, interpersonal relationships with colleagues and friends working there, a fear that criticism may be misinterpreted and wielded by others against an organisation, or a desire to continue the working relationship for personal and career purposes. The question of how "embedment" within an NGO affects topic choice, research methodology, and write-up is itself deserving of further investigation. For the purposes of this article, we were challenged by reviewers to highlight how, as academics and practitioners working with or affiliated with BRAC, we overcame the risk of bias in the account we present here. These challenges led to upfront conversations between us that started with the question "what doesn't make it into official accounts and records in these experiences?" They also led us to bring in an additional author to support this process. In these revisions we worked much more centrally with some of the problems and dysfunctionalities that exist within BRAC to build a more honest and open account, in some parts basing entire new sections around them.
4. The addition of peace efforts leads to the categorisation of a "triple nexus" (Howe 2019).
5. Even ICRC founder Henry Dunant discovered upon his arrival in Castiglione, in the aftermath of the Battle of Solferino, that local people were already involved in providing assistance to the wounded (Dunant 1986 ed., 62). His memoir offers a (rather patronising) description of the ineffectiveness of the local response, where the arrival of Dunant and other external actors serves to improve the local efforts.
6. Only 13 (of 53) grant-giving signatories allocated 25% or more of their humanitarian funds to national or local responders in 2020, and the overall funding allocated to them was 4.7% of all funding. This was far below the original ambition of 25% and a lower share than it had been in 2016 (Metcalfe-Hough et al. 2021, 52, 53).
7. In doing so, BRAC was one in a wave of several influential organisations emerging in Bangladesh, including Nijera Kori and Proshika, which both began in humanitarian work before evolving into broader development-focused NGOs, and microfinance organisations such as Grameen Bank and the Association for Social Advancement (ASA).
8. However, this has changed in recent years with a re-emergence of international leadership, partially because of the changing political situation on the ground in Afghanistan.
9. BRAC has since departed from Sri Lanka.
10. See Khan and Kontinen (2022) for a detailed overview of the structure of the response's governance and coordination at national and local levels.
11. This debate is fostered, among others, by efforts to no longer count national affiliates of large international NGOs as "local", due to their superior access to resources compared to non-affiliated national and local organisations (A4EP 2019).
Tellingly, Van Brabant and Patel describe BRAC as "a huge Bangladeshi 'non-governmental' organisation, that is variously perceived as an 'NGO' or a 'corporate entity'" (2018, 62).13.These groups worked towards behavioural change, maintained sanitation facilities in the camps, and increased access to basic health services, including immunisation (practitioner reports and BRAC (2020).14.Furthermore, the deep respect a home-grown NGO like BRAC holds nationally means that alongside its own emergency responses it can achieve great impact on broader policy formation on a national level.Occupying a position as a respected local and national actor is important in encouraging and demonstrating to the Bangladeshi Government what can be done in times of crisis: after BRAC released its study findings around the "new poor" linked to Covid-19, the Government announced it would start a new social protection programme, worth $150 million to support the "new poor" in the urban informal sector (Rahman et al. 2021, 2).
Disclosure statement
No potential conflict of interest was reported by the authors.
"year": 2024,
"sha1": "e9c4f44290a121bc80a0959d53a6d16e7a53a322",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/09614524.2023.2273756?needAccess=true",
"oa_status": "HYBRID",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "3d0c8a07421a0ef796160fa28323b6ef459d36d1",
"s2fieldsofstudy": [
"Geography",
"Environmental Science",
"Political Science"
],
"extfieldsofstudy": []
} |
222080589 | pes2o/s2orc | v3-fos-license | When the Lawyer Becomes Traumatized: A Scoping Review
Lawyers can be exposed to cases involving traumatic elements of crimes. Such exposure may result in symptoms of posttraumatic stress disorder (PTSD) and have adverse effects on the lawyers’ capacities to work. A scoping review was conducted to summarize original investigations of work-related PTSD among lawyers in terms of (a) trauma exposure conceptualization and operationalization, (b) symptom severity, (c) prevalence, and (d) risk factors. The scoping review also aimed to highlight potential directions for future studies and clinical implications. Literature searches were conducted in PsycINFO, Embase, Pubmed, MEDLINE, PILOTS, and Google Scholar. Of 341 initial publications, 9 were included. A majority conceptualized the impact of work-related trauma exposure as secondary traumatic stress and operationalized work-related trauma exposure as the number of cases or clients involving traumatic material. Levels of PTSD symptoms reported by lawyers were positively related to levels of work-related trauma exposure.
While many consider the legal profession as prestigious and noble, is it possible that being a lawyer has detrimental effects on one's mental health? Significant rates of depression, anxiety, and substance use (e.g., alcohol, opioids, cocaine) have been identified among lawyers (e.g., Krill et al., 2016). Lawyers are responsible for advising and defending their clients with competence and diligence. This implies not only a rigorous interpretation of the law, but also an understanding of clients' issues. Yet, some lawyers can be exposed to horrific details of events of a traumatic nature, such as videos of physical aggressions, gruesome depictions of legal evidence, and emotional testimonies that can translate, for some, into symptoms of posttraumatic stress disorder (PTSD). Considering their professional duties and their position of power over their clients, lawyers need to be in control of their mental health and cognitive capacities. It is therefore important to understand how and to what extent PTSD may impact their mental health, so as to advance academic research and inform clinical programs adequately.
Trauma Conceptualization
PTSD was included in the Diagnostic and Statistical Manual of Mental Disorders (DSM-III) in 1980 (American Psychiatric Association [APA]). The introduction of this new diagnosis stimulated vigorous research and led to a number of conceptual ramifications. One of them is the conceptualization of work-related trauma that emerged in the 1990s, with vicarious trauma and secondary traumatic stress being considered as the predominant notions (Jenkins & Baird, 2002). Vicarious trauma had a circumscribed definition and referred to the negative alteration of a (mental health) professional's cognitive schema as a result of working with traumatized individuals, and the experience of intrusive memories stemming from this work (Jenkins & Baird, 2002). Secondary traumatic stress described the development of symptoms akin to those of PTSD, including re-experiencing, avoidance, and hyperarousal symptoms without being directly exposed to a traumatic event (Figley, 1995;Jenkins & Baird, 2002).
In 2013, the 5th edition of the DSM defined PTSD as a condition that could stem from "experiencing repeated or extreme indirect exposure to aversive details" of traumatic events throughout the course of professional duties (APA, 2013). PTSD is characterized by a set of 20 symptoms grouped into four clusters: (a) re-experiencing symptoms (i.e., intrusive memories, distressing dreams, flashbacks, and distress or other intense reactions due to trauma reminders); (b) avoidance of trauma-related stimuli (i.e., avoidance of internal stimuli, such as memories and thoughts, or external stimuli, such as places); (c) negative alterations of mood or cognition (i.e., dissociative amnesia, exaggerated negative beliefs about oneself or the world, altered cognitions of the cause/consequence of the traumatic event, exaggerated negative emotions, decreased interests, social withdrawal, and incapacity to feel positive emotions); and (d) alterations in arousal and reactivity (i.e., irritability, anger, hypervigilance, recklessness, exaggerated startle response, decreased concentration capacities, and sleep disturbances). By definition, the DSM-5's conceptualization of PTSD encompassed vicarious trauma and secondary traumatic stress, providing a comprehensive conceptual framework.
Trauma-Related Symptoms Among Professionals
Trauma exposure among many lawyers is similar to that of other professionals, including journalists, social workers, and law enforcement agents. Such professionals have indeed been identified as being at risk of developing trauma-related symptoms, although PTSD prevalence rates varied between 4% and 32%, depending on the study and on the profession (Asmundson & Stapleton, 2008; Bride, 2007; Pearlman & Saakvitne, 1995; Pyevich et al., 2003). Among social workers working with a traumatized clientele, Bride (2007) identified that a large portion reported intrusion (45%), avoidance (25%), and arousal symptoms (25%). Similarly, Jaffe et al. (2003) reported that 63% of a sample of judges experienced at least one symptom of vicarious trauma, including sleep disturbances, feeling isolated, cognitive difficulties (e.g., issues with concentration), and interpersonal problems. Considering the apparent effects that indirect trauma exposure has on public safety personnel (Carleton et al., 2018), it is not unreasonable to suspect that many lawyers could also be at risk of developing trauma-related symptoms.
Knowing that many lawyers handle cases involving a traumatized clientele, such as refugees, child abuse victims, and sexually assaulted individuals, it is important to understand how these cases may impact their mental health. The aim of the current study was to conduct a scoping review to further our understanding of how the association between traumatic material exposure and PTSD symptoms was conceptualized and operationalized in previous studies. The study also sought to summarize and disseminate patterns in current research findings regarding how and which PTSD symptoms lawyers tend to develop based on the four symptom clusters of the DSM-5, as well as the prevalence and risk factors associated with PTSD symptoms. Examining the association between work-related trauma exposure and PTSD symptoms among lawyers can identify knowledge gaps and highlight areas for future studies and clinical implications. Based on those patterns, a framework for reporting such findings is proposed.
Procedure
This scoping review collated information from existing peer-reviewed studies based on the stages of Arksey and O'Malley (2005): (a) identify the research question, (b) identify relevant studies, (c) select the studies, (d) chart the data, and (e) collate, summarize, and report the results, the 6th stage "expert consultation" being optional. Two independent reviewers conducted the retrieval, selection, and analysis of the studies. Next, they compared their results. All (minor) discrepancies were resolved by consensus. The research question of this scoping review was: How have studies previously conceptualized the effects of work-related trauma exposure among lawyers, and what types of PTSD-related symptoms do lawyers report?
Database Search
Relevant studies published up to May 2019 were searched through PsycINFO, Embase, Pubmed, MEDLINE, PILOTS, and Google Scholar search engines. No search restrictions (except language) were used so as to ensure the retrieval of as many relevant investigations as possible. The following search terms were used across all search engines: "Secondary traum*," "Vicarious traum*," "Compassion fatigue," "Posttraumatic stress," "Posttraumatic stress disorder," "PTSD," "Attorney*," "Lawyer*" in either the title, abstract, subject heading word, or keyword heading of the studies ("*" indicates that the word was truncated to search for all possible words containing the characters prior to it). See Appendix 1 for an example of the search expressions used.
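As a rough illustration of how these terms combine, the sketch below assembles a boolean expression of the kind such searches typically use. The exact grouping and field syntax are given in the paper's Appendix 1 (not reproduced here) and differ per database, so the AND/OR structure shown is an assumption, not the authors' verbatim query.

```python
# Illustrative reconstruction of the search logic described above; the
# actual expressions (Appendix 1) use each database's own field syntax.
trauma_terms = [
    '"Secondary traum*"', '"Vicarious traum*"', '"Compassion fatigue"',
    '"Posttraumatic stress"', '"Posttraumatic stress disorder"', '"PTSD"',
]
population_terms = ['"Attorney*"', '"Lawyer*"']

# Terms were searched in the title, abstract, subject heading word,
# or keyword heading; "*" marks truncation.
query = f"({' OR '.join(trauma_terms)}) AND ({' OR '.join(population_terms)})"
print(query)
```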
Screening
Manuscript inclusion (including dissertations) was based on four criteria: the study (a) was an original piece of work, (b) reported findings from a sample composed of practicing lawyers, (c) assessed PTSD symptom severity, and (d) was written in the English language. Manuscripts relating to (a) clinical law, (b) personal stories, or (c) described solely as PowerPoint presentations were excluded. Because the scoping review specifically targeted lawyers as a professional population, studies reporting global findings from a combination of lawyers and other professionals were excluded if they did not provide specific results for lawyers. Titles and abstracts of the retrieved studies were screened based on the study criteria. The full texts of the apparently eligible publications were retrieved and reviewed based on the same criteria. The references of the reviewed papers were scanned using the snowball technique to find additional papers which may have been missed otherwise (Sayers, 2007). Data from the included studies were extracted from the manuscripts: authors and year of publication, country, participants' profession, sample size, study design, data collection method, measures used, and key findings.
Results
The search yielded 341 publications (see Figure 1). After eliminating duplicates, 219 titles and abstracts were screened, and 202 of those were excluded as they did not investigate PTSD symptoms or were not original studies. The 17 eligible publications were reviewed based on the same criteria. The manuscript of one study (a thesis) was not available online. Although the corresponding author of the study was contacted to obtain the manuscript, this person did not reply to the inquiry. The academic institution of the author was therefore contacted to obtain a copy of the manuscript. Due to a 2-year embargo placed on the manuscript, the institution could not provide it. The study was therefore excluded. The snowball technique did not yield additional papers. Of the 16 remaining studies, 7 were excluded since (a) they did not involve original research (i.e., descriptive papers on secondary trauma among professionals) or did not relate to the theme of the current review (n = 2), (b) the sample assessed was not exclusively composed of lawyers and no specific results for lawyers were provided (n = 4), or (c) the manuscript was not written in English (n = 1). A total of nine studies were therefore retained for analyses (Table 1). Seven were empirical studies and two were dissertations. One study had a longitudinal design (Levin et al., 2012) and five had a control group (Leclerc et al., 2019; Levin et al., 2011; Levin & Greisberg, 2003; Maguire & Byrne, 2017; Vrklevski & Franklin, 2008). All studies collected data via self-reported surveys.
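The arithmetic of this screening flow can be tallied as follows; the counts come directly from the text, while the step labels are ours.

```python
# Screening flow reported above, step by step.
initial = 341                 # publications retrieved
screened = 219                # titles/abstracts left after de-duplication
eligible = screened - 202     # 17 full texts reviewed for eligibility
available = eligible - 1      # 16 (one thesis under a 2-year embargo)
excluded = 2 + 4 + 1          # not original/off-topic, mixed samples, non-English
retained = available - excluded
assert retained == 9          # studies analyzed in this review
```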
Trauma Conceptualization
Among the retained studies, three different trauma conceptualizations were used: (a) secondary traumatic stress, (b) vicarious trauma, and (c) PTSD as defined by the DSM-IV or the DSM-5. The most frequently used trauma conceptualization was secondary traumatic stress (see Goldman, 2006; Levin & Greisberg, 2003; Piwowarczyk et al., 2009; Sokol, 2014), followed by vicarious trauma (see Maguire & Byrne, 2017; Vrklevski & Franklin, 2008), and PTSD (see Leclerc et al., 2019; Levin et al., 2012). One study adopted both secondary traumatic stress and PTSD as conceptualizations of work-related trauma exposure (Levin et al., 2011). All studies that adopted the conceptualization of secondary traumatic stress considered that it involved symptoms of intrusive memories, avoidance, and hyperarousal. Goldman (2006) and Piwowarczyk et al. (2009) added that secondary traumatic stress could also result in cognitive schema disruption. Goldman (2006), Sokol (2014), as well as Levin and Greisberg (2003), identified secondary traumatic stress as a concept similar to PTSD. However, they noted that the two concepts differ based on the type of trauma exposure; while secondary traumatic stress involves indirect exposure, PTSD arises from direct exposure, which is not entirely consistent with the DSM-5 criteria for PTSD. Vicarious trauma was defined by Vrklevski and Franklin (2008) and Maguire and Byrne (2017) as the alteration of cognitive schemas regarding the self and the world due to engagement with traumatic material. Maguire and Byrne (2017) conceptualized vicarious trauma as similar to PTSD, with the exception that it did not involve enough symptoms to be considered as such. Finally, Leclerc et al. (2019), Levin et al. (2012), and Levin et al. (2011) applied the DSM-5 trauma exposure criterion (the so-called Criterion A) when assessing their lawyer participants. Consistent with the DSM-5 diagnostic criteria, they defined PTSD as involving four types of symptoms: intrusive, avoidance, and hyperarousal symptoms, as well as changes in cognitive schemas. Goldman (2006), Leclerc et al. (2019), and Levin et al. (2011) all agreed that there was a conceptual overlap between secondary traumatic stress and vicarious trauma, and that these concepts could be used interchangeably. Sokol (2014) and Vrklevski and Franklin (2008) also noted a conceptual overlap between these conceptualizations of trauma, but, along with Piwowarczyk et al. (2009), they considered them to describe distinct conditions.
(Table 1, key findings excerpt: the moderate-exposure group scored 5.01 points higher on the PCL-5 than the no-exposure group; the high-exposure group scored 10.03 points higher than the no-exposure group and 5.03 points higher than the moderate-exposure group; 9% of lawyers met screening criteria for PTSD based on DSM-5 criteria.)
Note. CS = cross-sectional; L = longitudinal (in the "Study design" column). STQ = Secondary Trauma Questionnaire; IES-R = Impact of Events Scale-Revised; STS = Secondary Trauma Scale; TABS = Trauma and Attachment Belief Scale; VTS = Vicarious Trauma Scale; STSS = Secondary Traumatic Stress Scale; PCL-5 = PTSD Checklist for DSM-5 (in the "Measures for PTSD" column).
Operationalization of Exposure to Traumatic Material
The level of exposure to traumatic material among lawyers was operationalized differently across the studies. Most studies considered that the client load or the number of cases involving traumatic material among lawyers were reliable measures of exposure. Some specified a specific time period in their assessment of client load or number of cases (Levin et al., 2011, 2012; Levin & Greisberg, 2003), but Goldman (2006) and Sokol (2014) did not. Levin and Greisberg (2003) considered the number of traumatized clients worked with over a period of 1 year, while Levin et al. (2011) and Levin et al. (2012) restricted the number of traumatized clients to a period of 3 months. In addition to the number of clients, Levin et al. (2011) assessed the impact of the number of hours worked weekly on symptomatology. Sokol (2014) examined exposure to traumatic material based on three additional aspects: the percentage of cases involving traumatic material and the time since the first and the last exposure to traumatic material. Vrklevski and Franklin (2008) and Maguire and Byrne (2017) operationalized exposure to traumatic material based on whether the lawyers had been practicing criminal law and family law, but did not consider the level of exposure. Leclerc et al. (2019) measured the level of trauma-related exposure as a tripartite variable. Lawyers were categorized as having none (0%), up to half (1%-50%), or more than half (51%-100%) of their cases involving traumatic material in the past year. Piwowarczyk et al. (2009) recruited lawyers who had worked pro bono for asylum seekers. They operationalized the level of exposure to traumatic material based on the number of pro bono hours worked per week. They considered it to be a proxy for the number of hours exposed to traumatic material, although they did not account for the number of regular hours worked. As can be seen, the operationalization of trauma exposure varied greatly across studies.
Do Lawyers Report PTSD Symptoms from a Specific Cluster?
Intrusive symptoms. Although Levin and Greisberg (2003) did not report conducting any statistical test, the authors stated that they had found a higher level of intrusive symptoms related to traumatic material for lawyers compared to mental health professionals. The same tendency was observed by Levin et al. using the Impact of Event Scale-Revised (IES-R). Levin et al. (2012) also reported a significant but weak correlation at the follow-up between the level of exposure to traumatic material and intrusive symptoms (r = .24).
Avoidance symptoms. Lawyers had higher levels of avoidance symptoms related to exposure to traumatic material compared to their administrative support staff (Levin et al., 2011), but not compared to non-criminal-law practicing solicitors (Vrklevski & Franklin, 2008). Low levels of avoidance symptoms were noted on the IES-R by Levin et al. Levin and Greisberg (2003) reported that lawyers also had higher levels of avoidance symptoms than mental health professionals, but they did not report the statistical results, hindering the interpretation of the results. Goldman (2006) also assessed avoidance symptoms among lawyers.

Hyperarousal symptoms. Levin and Greisberg (2003) found that lawyers reported more hyperarousal symptoms, including concentration, sleep, and irritability symptoms, than mental health professionals (although no data were provided). Levin et al. (2012) also found significant but weak-to-moderate correlations at baseline between the number of hours worked weekly and hyperarousal symptoms (r = .30), as well as, at the 10-month follow-up, between the level of exposure to traumatic material and hyperarousal symptoms (r = .27).
Cognitive alterations symptoms. Levin and Greisberg (2003) found that lawyers reported more symptoms related to loss of "pleasure and interest in activities" than mental health professionals (although they did not report any data). In addition, Vrklevski and Franklin (2008) found that lawyers had significantly higher levels of cognitive disruptions than non-criminal-law solicitors on the Trauma and Attachment Belief Scale and on the Vicarious Trauma Scale. Maguire and Byrne (2017) noted that although both lawyers (M = 39.86, SD = 7.81) and mental health professionals (M = 33.13, SD = 6.96) had moderate symptoms on the Vicarious Trauma Scale, lawyers had significantly higher levels of cognitive alterations. The authors also reported that emotional stability negatively correlated with symptom severity. Overall, Goldman (2006) identified that 17% of lawyers had severe disruptions in their beliefs related to self- and other-safety, self- and other-trust, self- and other-esteem, self- and other-intimacy, and self- and other-control.
Secondary traumatic stress, vicarious trauma, and PTSD.
The studies reviewed also included results related to the overall syndromes (secondary traumatic stress and vicarious trauma) and disorders (PTSD). Goldman (2006) found that although the average secondary traumatic stress severity among lawyers was minimal (i.e., the average score on the Secondary Trauma Scale was below the clinical threshold of 38), 10.4% of the sample reported moderate levels of secondary traumatic stress (i.e., a score between 38 and 44 on the Secondary Trauma Scale) and 7.2% reported severe levels of secondary traumatic stress (i.e., a score between 45 and 62 on the Secondary Trauma Scale). Still, Goldman (2006) concluded that the study "provided little evidence for secondary traumatization" when lawyers were considered as a group. Goldman (2006) noted that "in essence, law guardians as a group reported that they only rarely or occasionally experience adverse emotional reactions to their work." Contrary to Goldman (2006), Piwowarczyk et al. (2009) concluded that lawyers working with traumatic material were at risk for secondary traumatic stress. They found that 87% of their sample reported two or more symptoms and that 9% reported moderate levels of secondary traumatic stress on the Secondary Trauma Scale, although none reported levels that were of clinical concern (i.e., a score of 45 or higher). Sokol (2014) identified that 37% of the sample scored moderately for secondary traumatic stress (i.e., a score between 38 and 48 on the Secondary Traumatic Stress Scale) and that 15% obtained scores translating into high levels of secondary traumatic stress (i.e., a score above 49 on the Secondary Traumatic Stress Scale). The study revealed that the symptoms did not correlate with the levels of exposure to traumatic material. The study nonetheless concluded that lawyers were at higher risk of secondary traumatic stress than other professionals. Levin et al. (2011) found that more lawyers (11%) than their administrative support staff (1%) had probable PTSD. The same tendency was observed for secondary traumatic stress, with 34% of lawyers compared to 10% of their administrative support staff reporting clinically severe levels of symptoms (i.e., a score above 56 on the Professional Quality of Life Scale Version 5, secondary traumatic stress subscale). The authors reported that lawyers were more vulnerable to developing trauma-related symptoms than their administrative support staff, but that this relationship was mediated by the number of hours worked weekly and the number of clients dealt with. This model explained a modest 14% of PTSD symptom variance. It was also significant for secondary traumatic stress symptoms. Leclerc et al. (2019) reported that 9% of lawyers had PTSD symptoms that met or exceeded the DSM-5 diagnostic criteria for PTSD. Furthermore, they reported that lawyers moderately exposed to traumatic material (1%-50% of their caseload) had significantly higher levels of PTSD symptoms than those unexposed (0%) to traumatic material. Lawyers highly exposed to traumatic material (51%-100% of their caseload) also had higher levels of PTSD symptoms than those moderately exposed and unexposed to traumatic material. Levin et al. (2012) found lawyers to have significant levels of symptoms over time, with 15% at baseline and 9% at follow-up screening positive for probable PTSD.
Risk Factors for PTSD Symptoms
The risk factors associated with PTSD symptoms in the reviewed studies were inconsistent. A higher caseload involving traumatic material or working with more trauma-related clients was related to higher symptom severity by Levin and Greisberg (2003), Goldman (2006), Levin et al. (2011), and Leclerc et al. (2019), but not by Sokol (2014). A history of prior trauma exposure in the personal life of the lawyers was also associated with the development of PTSD symptoms by Goldman (2006), Vrklevski and Franklin (2008), Maguire and Byrne (2017), and Leclerc et al. (2019), but Piwowarczyk et al. (2009) reported no association. Longer weekly working hours were identified as contributing to an increase in PTSD symptoms by Goldman (2006), Piwowarczyk et al. (2009), Levin et al. (2011), and Leclerc et al. (2019). While Piwowarczyk et al. (2009) and Levin et al. (2012) reported no association between years of experience and symptom severity, Goldman (2006) reported statistically significant results. Indeed, Goldman (2006) found that fewer years of work experience as a lawyer were related to an increase in symptom severity, but that more years of experience were specifically related to higher hyperarousal symptoms. Goldman (2006) was the only one to report younger age as a factor associated with symptom severity; Levin et al. (2012) reported no such association with age. Female gender was associated with more symptoms by Levin and Greisberg (2003) and Leclerc et al. (2019), but Goldman (2006), Piwowarczyk et al. (2009), Levin et al. (2012), and Maguire and Byrne (2017) reported no such results. Levin and Greisberg (2003) found that prior treatment for mental health issues was associated with greater symptom severity. Furthermore, the profession of lawyer was considered a risk factor for PTSD symptoms by Maguire and Byrne (2017). Sokol (2014) reported that the time since the first and the last exposure to a case involving traumatic material was a risk factor. Piwowarczyk et al. (2009) and Levin et al. (2012) reported no association between symptom severity and firm size. Levin et al. (2012) computed cross-lagged panel correlation path models using structural equation modeling to investigate the relationship between the levels of PTSD symptoms at follow-up and exposure to traumatic material and hours worked weekly at baseline. The study revealed a significant positive relationship over time between PTSD symptoms and the lawyer's quantity of work, including exposure to traumatized clients. Interestingly, the number of hours worked weekly at baseline was related to the levels of exposure and PTSD symptoms at follow-up through its association with the level of exposure at baseline. Levin et al. (2012) suggested that exposure to traumatic material may be a vulnerability factor for the development of PTSD symptoms in lawyers.
Discussion
The present review of the scientific literature exposed the confusion surrounding the conceptualization of trauma among lawyers. Researchers have conceptualized lawyers' trauma symptomatology as secondary traumatic stress, vicarious trauma, or the DSM-IV or DSM-5's PTSD. For both secondary traumatic stress and vicarious trauma, the criteria for diagnosis were not clear and rigorous. While these syndromes were meant to depict differing consequences of work-related trauma, they were used interchangeably and considered as reflecting the same syndrome (Baird & Kracen, 2006). However, the trauma-related symptoms covered by these conceptualizations differ. Three clusters of symptoms are involved in secondary traumatic stress (intrusive memories, avoidance, and hyperarousal), while PTSD involves four types of symptoms, the fourth being cognitive disruptions, and vicarious trauma specifically reflects emotional distress and cognitive alterations associated with trauma exposure at work. Adopting the conceptualization of PTSD is more rigorous, as it involves meeting a number of diagnostic criteria, while the conceptualizations of secondary traumatic stress or vicarious trauma simply involve meeting a scale threshold to be considered as having severe symptoms. Overall, secondary traumatic stress seemed to still be the preferred conceptualization in a majority of studies investigating lawyers (Goldman, 2006; Levin & Greisberg, 2003; Piwowarczyk et al., 2009; Sokol, 2014).
Severity levels of work-related trauma exposure among lawyers were operationalized in various ways, such as the number of legal clients associated with trauma-laden material, the type of law practiced, or the percentage of cases involving traumatic material. All studies recruited convenience samples, and only one recruited a control group of lawyers not exposed to cases involving traumatic material (Leclerc et al., 2019). The majority of the studies retained in this scoping review employed a cross-sectional design, and all relied solely on self-report measures.
Overall, intrusive, avoidance, and hyperarousal symptoms were generally low when considering lawyers as a group, while cognitive symptoms were on average moderately severe. Mean intrusion levels ranged between .31 and .76 on the IES-R, mean avoidance levels ranged between .34 and .76, and mean hyperarousal levels ranged between .18 and .61 when considering lawyers as a group (Goldman, 2006; Levin et al., 2012; Levin et al., 2011; Vrklevski & Franklin, 2008). These results are below the proposed clinical cut-off of 1.5 on the IES-R (Levin et al., 2011). Mean cognitive symptom scores ranged between 39.86 and 41.50 on the Vicarious Trauma Scale (i.e., moderate scores; Maguire & Byrne, 2017; Vrklevski & Franklin, 2008) and between 47.91 and 54.32 on the Trauma and Attachment Belief Scale (i.e., average scores; Goldman, 2006; Vrklevski & Franklin, 2008) when considering lawyers as a group. The proportions of lawyers reporting high or severe secondary traumatic stress in the past week or month varied between 15% and 34%, while past-week or past-month PTSD prevalence rates varied between 9% and 15%. Although as a group lawyers tended to report low levels of PTSD symptoms, subgroups of lawyers exposed to traumatic material developed moderate to high levels of symptoms. Therefore, lawyers who deal with trauma-related cases are at risk of developing PTSD symptoms, but results are not consistent as to whether lawyers have higher levels of symptoms than control groups. The risk factors identified across the studies included work-related trauma exposure levels, longer hours worked, female gender, and past traumatic events experienced in the personal life. The inconsistencies across the studies regarding symptom severity, prevalence rates, and risk factors could be due to heterogeneous conceptualizations of trauma, divergent operationalizations of work-related trauma exposure, various study designs, and differing populations of lawyers recruited (e.g., law guardians, criminal lawyers, etc.). Consequently, the results might be over- or under-representations of the true prevalence rates, and they are not generalizable.
Implications
Research implications. Although imperfect, the studies reviewed above suggest that a sizable proportion of lawyers practicing certain types of law report significant levels of PTSD symptoms, and that more investigations under improved methodological conditions are warranted to further examine this problem. Ideally, future studies investigating PTSD symptoms among lawyers would use epidemiologic analyses to produce accurate estimates of PTSD symptom levels and of the risk factors associated with them. Epidemiologic analyses entail that researchers employ probability sampling methods to decrease recruitment biases. Using probability sampling methods would allow the recruitment of larger and gender-balanced samples of lawyers and would avoid restricting recruitment to lawyers practicing a specific type of law or working in a specific geographic location. The results would thus be generalizable. To do so, collaborations with Bar associations and law organizations are necessary to ensure that all practicing lawyers have an equal opportunity of selection and to recruit as many lawyers as possible (Statistics Canada, 2017). An effort should be made to encourage the participation of lawyers who are on sick leave or who have retired (prematurely or not) from the profession. The use of structured interviews to adequately assess the presence of PTSD symptoms should be considered in future studies. However, lawyers work in the context of short deadlines and long working hours. Therefore, despite the biases related to the use of self-report measures, such measures remain a convenient form of assessment, as they provide a time-efficient way of measuring putative traumatic stress. Researchers should investigate group differences by analyzing proportions of diagnoses between groups, instead of differences in mean scores.
Incorporating a control group with professional duties, jobs, and levels of exposure similar to those of lawyers is also required. Indeed, lawyers have very specific training, and their workplace pressures and demands are not comparable to those of other professions.
The use of diverse conceptualizations of work-related trauma exposure might explain the inconsistent findings reported in the reviewed studies. Consensus regarding this conceptualization is needed to produce conclusions that are comparable over time. We recommend that future studies assess PTSD based on the DSM-5 criteria instead of secondary traumatic stress or vicarious trauma to generate up-to-date results. Secondary traumatic stress and vicarious trauma are non-DSM syndromes without standardized criteria that were employed before the DSM-5. Researchers should use longitudinal designs and assess the variations in severity of PTSD symptoms over time (incidence, remission, and persistence of symptoms).
Researchers should assess trauma exposure levels based on the time spent working with traumatic material instead of the number of clients worked with or the percentage of legal cases dealt with that involve traumatic material. It is important to keep in mind that lawyers do not spend an equal amount of time on all cases and that not all lawyers work the same number of hours weekly (some can work less than 35 hr per week, while others can work more than 70 hr per week). Therefore, a lawyer working on one specific case can spend 50 hr per week working with traumatic material from this case, while a lawyer dealing with 10 cases involving traumatic material could spend less than 50 hr working on these cases. In addition, within a law practice, such as criminal law, lawyers are not necessarily exposed to traumatic material.
Since lawyers cannot avoid traumatic material during their work duties, it would be important to better understand how PTSD symptoms impact lawyers' capacities to work. Future studies should investigate productivity at work and how PTSD symptoms affect lawyers behaviorally and cognitively. There is a need to understand if and what kind of avoidance symptoms lawyers develop toward the traumatic material, how those symptoms manifest themselves at the job, and how they impact the quality of the services they dispense. Lonergan et al. (2016) conducted a review of the literature on trauma-related symptoms in jurors and concluded that "graphic evidence," "trial complexity," and "deliberations" were factors regularly associated with PTSD symptoms. There is a need to understand how various types of trauma exposure impact symptom levels among lawyers. Future studies should also seek to determine the economic impact associated with the presence of PTSD symptoms among lawyers (e.g., costs associated with loss of productivity at work, turnover rates, health care services used, etc.). Such results might "speak" louder to law firms and associations, encouraging the implementation of preventive and intervention measures for PTSD among lawyers at risk.
Clinical implications. The present scoping review findings suggest that lawyers are at risk of developing PTSD symptoms due to their exposure to traumatic material, although only a portion of them developed clinically significant symptoms (as is always the case in the field of PTSD). Psychologists, physicians, and other mental health professionals should be aware of these results to adequately assess lawyers consulting them for possible posttraumatic stress symptoms. PTSD is often misdiagnosed as depression or "burnout" due to symptom overlap and stigma (Brady et al., 2000). Levin and Greisberg (2003) and Maguire and Byrne (2017) identified two aspects on which lawyers differ from mental health professionals and that may contribute to the development of PTSD symptoms: (a) lawyers are not specifically trained to handle clients who were involved in a traumatic event or to handle the effects of this exposure, and (b) they do not have access to a space to vent their feelings and turn to peer support. The few studies conducted so far have encouraged law professionals and academicians to raise awareness regarding this issue. Still, the largest body of literature directed at lawyers is meant to understand PTSD when clients develop it and how to use it in court. This illustrates how prevalent PTSD is among the clients of lawyers and how the law objectifies PTSD. Moreover, it is not possible to know whether the lawyers concerned consult the documents raising awareness of PTSD symptoms in their profession. Law schools could benefit from implementing a course on this issue in their curriculum. Such a course has the potential to make lawyers better aware of the symptoms, which could reduce stigma and encourage them to consult mental health professionals. Although more research is necessary to deepen our understanding of this question, lawyers should have access to literature and treatments tailored to their needs.
Limitations
Studies investigating PTSD symptoms among lawyers published in a language other than English were not included in this scoping review. Furthermore, some studies might not have been included because the title for "lawyer" varies across countries. Our search included the titles "Lawyer*" and "Attorney*," but more studies might have been published under other legal professional titles. The principal limitation of this scoping review was the small number of studies that could be included and analyzed. The scoping review was further limited by the methodological quality of these few studies (e.g., only one employed a longitudinal design, and none of the samples recruited were representative). The comparison of the results of the retained studies was also limited because of the conceptual confusion surrounding trauma symptomatology, the different operationalizations of work-related trauma exposure, the varied methodologies, and the samples used. Data pooling of the reviewed studies was not possible due to the small amount (or even lack) of statistical results reported in certain studies (Levin & Greisberg, 2003; Piwowarczyk et al., 2009).
"year": 2020,
"sha1": "aba1559756301c621b016457666ffee98856482d",
"oa_license": "CCBY",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/2158244020957032",
"oa_status": "GOLD",
"pdf_src": "Sage",
"pdf_hash": "aba1559756301c621b016457666ffee98856482d",
"s2fieldsofstudy": [
"Law",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247719152 | pes2o/s2orc | v3-fos-license | Good functional recovery after bilateral elbow dislocation associated with bilateral distal radius and ulna fractures
Abstract Bilateral elbow dislocation associated with bilateral distal forearm fractures is extremely rare; therefore, its optimal treatment, complications, and outcomes remain unclear. We present an illustrative case, with a 2-year follow-up, of a patient who sustained a complex injury of the upper extremity and underwent combined surgical and conservative treatment.
| INTRODUCTION
The elbow is considered to be one of the most stable joints in the human body. Nevertheless, unilateral elbow dislocations are the second most common major joint dislocation, with a reported incidence of 5.21 dislocations per 100,000 people per year in the US population. 1 By contrast, bilateral elbow dislocation (BED) is an extremely rare injury with only 20 reported cases. 2-20 The above-mentioned includes 11 bilateral elbow dislocations associated with a skeletal injury, none of which describe an associated bilateral fracture of the distal forearm. Complications after unilateral elbow dislocation are common and include residual pain, chronic stiffness, heterotopic ossifications, and functional instability, which can promote early arthrosis and lead to permanent incapacity to work. 21,22 Due to the limited information regarding complex bilateral injuries of the upper extremity, their optimal therapy, complications, and outcomes remain unclear. We present an illustrative case of a patient who sustained BED associated with bilateral forearm fractures. After combined conservative and surgical treatment, at 2-year follow-up the patient has recovered uneventfully and fully returned to his working and recreational activities.
| CASE PRESENTATION
A 39-year-old fit and otherwise healthy male soldier was admitted to the emergency department after a fall from a 3-meter height, predominantly landing on the outstretched hands and lower back. Clinical examination revealed bilateral elbow and distal forearm deformity with a small wound (Gustilo-Anderson type I) on the ulnar side of the right radiocarpal joint. The neurovascular status of both arms was unremarkable. Initial radiographs showed bilateral elbow dislocations with an associated right lateral epicondyle avulsion fracture and bilateral impacted comminuted fractures of the distal radius and ulna (Figure 1).
The injuries were treated in two stages. In the first stage, under general anesthesia and using the image intensifier, closed reduction of both elbows and distal forearms was performed, followed by proper wound care and antibiotic prophylaxis. The left elbow showed only valgus stress instability, so a posterior elbow splint in 90° flexion with the forearm in neutral rotation was applied. In contrast, the right elbow showed multidirectional instability, and its congruency was retained by temporary ulnohumeral transfixation with a titanium Kirschner wire and an above-elbow cast with the forearm in neutral rotation. Proper bilateral post-reduction position of both the distal radius and ulna was retained with Kirschner wires and a cast with a "window" at the wound location.
After closed reduction and immobilization of both elbows, CT and MRI were obtained, which revealed further injuries. On the left elbow, an avulsion of the lateral epicondylar origin of the lateral collateral ligament (LCL) and a partially ruptured medial collateral ligament (MCL) were verified. On the right elbow, complete ruptures of both collateral ligaments and a completely ruptured joint capsule accompanied a nondisplaced radial head fracture (Figures 2 and 3).
In the second stage, a day after the injury, definitive surgical stabilization of the multidirectionally unstable right elbow was performed. Under general anesthesia, the right elbow was approached through the Kocher interval. A complete rupture of the LCL complex and a rupture of the joint capsule were verified. The radial head was fractured on its lateral side in supination. The fragment was small (<25% of the radial head) and comminuted, and therefore could not be fixed, so it was removed. Ligamentous reconstruction was done using three 2.4 mm × 7.5 mm suture anchors (FASTak, Arthrex) and non-absorbable sutures. The humeral anchor was positioned on the lateral epicondyle at the "footprint" of the lateral ulnar collateral ligament (LUCL), while the ulnar anchors were placed at the proximal part of the supinator crest (Figure 4). The LUCL, which was avulsed at its proximal origin, was reconstructed and additionally sutured along its distal origin using the ulnar anchors. The annular ligament and radial collateral ligament were reconstructed with non-absorbable sutures. Lastly, the joint capsule was plicated. Intraoperatively, the elbow was congruent and stable throughout a full range of motion under the image intensifier. A posterior elbow splint in 90° flexion with the forearm in neutral rotation was applied (Figure 5).
The patient was discharged a week postoperatively (post-op) with the right elbow and radiocarpal joint immobilized with an above-elbow cast in 90° flexion and the forearm in neutral rotation.
The left elbow was in a hinged elbow brace which allowed a full range of motion, and the left radiocarpal joint was immobilized with a below-elbow cast. Indometacin (3 × 25 mg) with gastroprotection was recommended for 5 weeks as heterotopic ossification prevention. Two weeks post-op, a hinged brace allowing full flexion and 60° of extension was applied to the right elbow, with the radiocarpal joint immobilized with a below-elbow cast. Three weeks post-op, the right elbow was allowed full extension, while the left elbow was allowed a full range of motion without any support. Also, the Kirschner wires were removed. Five weeks post-op, the right elbow orthosis and both below-elbow casts were removed, followed by a full range of motion and vigorous physiotherapy. The mainstay of the physical therapy program was elbow range-of-motion and strength exercises. At the 2-, 4-, and 6-month follow-ups, the patient reported a painless range of motion and no instability problems. He also reported restriction of supination and palmar flexion on the right side, which improved with exercising.
Two years postoperatively, the patient has a bilateral painless 0°-140° elbow flexion range of motion, with no restriction of pronation/supination on the left side and restriction of terminal supination on the right side. The bilateral wrist range of motion is also painless, with 50° flexion and 75° extension on the right side and 65° flexion and 75° extension on the left side (Figure 6). Radiologically, there are initial signs of heterotopic ossifications, although prophylaxis was carried out, but no signs of early arthrosis (Figure 7). The Disabilities of the Arm, Shoulder and Hand (DASH) score is 0.9 for the left and 1.7 for the right arm. The Mayo Elbow Score is 100 bilaterally, and the Mayo Wrist Score is 100 for the left and 90 for the right wrist. At the time of examination, 2 years post-op, the patient is fully back to service and recreational activities without any limitation (Video S1).
Figure 3. Post-reduction MRI of the right elbow in atypical projections (congruency was temporarily retained by ulnohumeral transfixation due to multidirectional instability). (A) Fat-suppressed proton-density-weighted TSE sequence; the white arrow points to the medial collateral ligament rupture and the black arrow to the intraarticular effusion. (B) T1 sequence of the same projection.
Figure 4. Intraoperative placement of the suture anchor in the ulna. The ruptured annular ligament is secured on the stay suture (ANL). Black star = lateral epicondyle; ANC = anconeus muscle; EDC = extensor digitorum communis muscle; EDU = extensor carpi ulnaris muscle.
| DISCUSSION
Bilateral elbow dislocation (BED) is a rare injury, with only 20 cases reported to date. These include 9 isolated BED (7 adult and 2 pediatric) 2-10 and 10 BED associated with a skeletal injury (7 adult and 5 pediatric). 7,11-20 None of these cases sustained a BED associated with bilateral forearm fracture, which makes the reported case unique. The elbow is one of the most stable joints, which at the same time allows a high range of forearm motion. With bony and capsuloligamentous structures as primary and joint-crossing muscles as secondary stabilizing factors, a high force is necessary to dislocate the joint. 23 Biomechanical studies have shown that bony structures serve as the main stabilizers, limiting flexion/extension movements and stabilizing the elbow against varus/valgus stress during rotational movements. In varus stress, stability mostly relies on bony structures, while valgus stability depends on ligaments as well as on bony structures. 24,25 Elbow dislocation requires prompt reduction followed by stability tests and neurovascular examination, which determine whether conservative treatment is sufficient or surgery is required. In doubtful cases, a CT or MRI can be done.
After a posterior dislocation, the elbow is usually more stable in flexion. For this reason, the elbow should be immobilized in at least 90° of flexion. Moreover, biomechanical studies suggest that the LCL complex is torn in almost all elbow dislocations. For this reason, an elbow with a sufficient MCL complex (and insufficient LCL) should be immobilized with the forearm in pronation, which engages the lateral secondary elbow constraints, the wrist extensor origins, and consequently provides more stability. When both the LCL and MCL are torn, the forearm should be positioned in neutral to unload both the lateral and medial sides. When the LCL is sufficient and the MCL insufficient, the elbow should be immobilized in supination, which engages the medial secondary elbow constraints, the flexor-pronator mass, and therefore ensures more stability.
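Restated schematically, the immobilization rule described above takes the following form; this is only a compact restatement of the text, not a clinical algorithm.

```python
def forearm_position(lcl_sufficient: bool, mcl_sufficient: bool) -> str:
    """Forearm position for immobilizing a reduced elbow, per the rule above."""
    if mcl_sufficient and not lcl_sufficient:
        return "pronation"   # engages the lateral secondary constraints
    if lcl_sufficient and not mcl_sufficient:
        return "supination"  # engages the flexor-pronator mass
    # Both ligaments torn: neutral unloads both sides (the both-intact
    # case is not addressed in the text).
    return "neutral"

assert forearm_position(lcl_sufficient=False, mcl_sufficient=True) == "pronation"
```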
Surgical indications include complex dislocations with associated fractures, an osteochondral fragment, soft-tissue entrapment which prevents concentric reduction, and open or neurovascular injury.
The primary objective of all therapeutic options is early mobilization. Interestingly, there is a lack of evidence that would specify the best time for mobilization after conservative or surgical treatment of elbow fractures in adults. 26 Elbows which are unstable after reduction due to torn capsuloligamentous structures need to be stabilized surgically. There is still debate among authors about which intervention is optimal, as some propose ligament reconstruction alone 23,27 and others a combination of ligament reconstruction and a hinged fixator. 28 Furthermore, good outcomes have been reported with a hinged fixator only. 7 Our experience supports capsuloligamentous reconstruction followed by early mobilization as a reliable treatment for multidirectional elbow instability due to complex elbow dislocation. Complications after elbow injuries are common and difficult to treat. 29 Anakwe et al 21 showed that even simple elbow dislocations are not benign injuries, with a long-term mean DASH score of 6.7 and a high rate of residual pain and elbow stiffness. Also, the authors state that functional instability is not common and often does not limit activities, which is supported by the presented case. Our patient reported no residual pain and stiffness bilaterally, with a significantly lower DASH score, although he sustained a complex injury of the right elbow and additional bilateral complex forearm fractures.
Even isolated bilateral distal forearm fracture is a rare injury, with only a handful of case reports. 30-34 Whereas the therapeutic options and outcomes of unilateral distal radius fracture (DRF) are well known, 35 due to limited information regarding bilateral DRF, its optimal therapy, complications, and outcomes remain unclear. The treatment goal is to retain optimal reduction and stability, which would allow early mobilization to achieve the best functional result. Depending on the fracture type and the associated injuries, distal forearm fractures can be treated conservatively with closed reduction and cast fixation or surgically using one of the available methods. Surgical treatment includes reduction of the fracture and fixation with Kirschner wires, or open reduction and internal fixation using plate and screws by the AO method. For open or highly comminuted fractures, external fixation can be used. A recent retrospective study of 10 bilateral radius fractures treated with open reduction and internal fixation showed complications in one-half of the patients after a mean 2.4-year follow-up. 36 As there are no standardized and evidence-based protocols for the treatment of complex bilateral upper extremity injuries, a question remains whether something could have been done differently. Both distal radial fractures could have been treated with open reduction and internal fixation using plates and screws, but our idea was to use minimally invasive methods to yield the best result. So, the main goal was to reduce and stabilize all joints, which would allow early rehabilitation. Our patient suffered no complications and recovered uneventfully, which supports our treatment choice and rehabilitation protocol. Rare complex injuries such as BED associated with bilateral forearm fractures should not lessen the need for standardized treatment protocols, so patients can get the best possible care.
Figure 7. Two-year follow-up radiographs showing congruent elbow joints with surgical hardware in place (white arrows) and the presence of some ossific fragments (white stars), although prophylaxis of heterotopic ossifications was carried out. The distal radius bilaterally shows loss of alignment, although the patient is functionally almost fully recovered.
| CONCLUSION
Due to the limited information regarding complex bilateral injuries of the upper extremity, their optimal therapy remains unclear. Treatment of bilateral complex elbow dislocation associated with bilateral distal forearm fractures should be focused on the unstable elbow. The treatment goals should be elbow stability and early mobilization. Other injuries should be treated with minimally invasive methods, which, together with the stabilized elbow, allow early and vigorous rehabilitation to yield the best possible functional result for the upper extremity.
"year": 2022,
"sha1": "19fbb2690bd7d4dedf24441d5515384f1925db73",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Wiley",
"pdf_hash": "260070f93b23d7ee23eba82804f0b08fecb70225",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
118666805 | pes2o/s2orc | v3-fos-license | Joint remote preparation of a four-dimensional quantum state
We propose several protocols for jointly remotely preparing a four-dimensional quantum state, using two- and three-particle four-dimensional entangled states as the quantum channel. Single- and two-particle generalized projective measurements and appropriate unitary operations are needed in our protocols. It is shown that the receiver can reconstruct the unknown original state only if the two senders collaborate with each other.
Introduction
Quantum entanglement plays an increasingly critical role in quantum information theory. Quantum teleportation, proposed by Bennett et al [1], is the process that transmits an unknown quantum state from a sender to a spatially distant receiver using an entanglement channel with the help of some classical information. In the last decade, Lo [2], Pati [3], and Bennett et al [4] presented a new quantum communication protocol that uses classical communication and a previously shared entangled resource to remotely prepare a quantum state. This communication protocol is called remote state preparation (RSP). RSP is another important protocol taking advantage of entanglement, in which the sender Alice performs a measurement on her share of the entangled resource in a basis chosen in accordance with the state she wishes to help the receiver Bob prepare in his laboratory. In RSP, Alice is assumed to know fully the transmitted state to be prepared by Bob, so RSP is called the teleportation of a known state. Compared with teleportation, RSP requires less classical communication [3]. In recent years, RSP has attracted much attention, and various protocols [5-12] generalizing RSP have been presented using various kinds of methods, including low-entanglement RSP [5], optimal RSP [6], generalized RSP [7], oblivious RSP [8], continuous-variable RSP [9,10], etc. Several RSP protocols in higher-dimensional Hilbert space have been proposed [13-15]. Meanwhile, some RSP protocols have already been implemented experimentally [16-20].
All the above protocols assume that only one sender knows the original state. However, if two or more parties share an original quantum state and want to remotely prepare it in the receiver's laboratory, how can they do it? To answer this question, a novel variant of RSP, called joint remote state preparation (JRSP), has recently been proposed [21-25]. In these JRSP protocols [21-25], the two senders (or N senders) each know only part of the original state they want to remotely prepare. If and only if all the senders agree to collaborate can the receiver reconstruct the original quantum state. Nevertheless, Refs. [21-25] considered only single- or multi-qubit states. Although various protocols for RSP of high-dimensional quantum states have been proposed in recent years [13-15], no scheme has yet been reported for the JRSP of higher-dimensional quantum states.
During the last few years, high-dimensional systems in quantum information processing (QIP) have attracted much attention. High-dimensional systems have properties that differ from their qubit counterparts and that could be useful for QIP. For instance, high-dimensional systems can be more entangled than qubits [26][27][28] and can share a larger fraction of their entanglement [29]. These properties, as well as the larger dimension alone, could aid many QIP tasks, including quantum key distribution [30][31][32][33], quantum teleportation [1,34], quantum bit commitment [35,36], quantum computing [37][38][39][40][41], quantum dense coding [42], quantum secure communication [43], quantum secret sharing [44,45], and remote state preparation [13][14][15]. In this paper, we propose a set of protocols for two senders to remotely prepare single- and two-particle four-dimensional (FD) quantum states by using various types of quantum channel. In our protocols, single- and two-particle FD projective measurements and appropriate unitary operations are needed. This paper is organized as follows.
In section 2, a protocol for jointly and remotely preparing an unknown single-particle FD quantum state by using a tripartite FD entangled state as the quantum channel is presented. In section 3, we propose two protocols for the joint remote preparation of an unknown bipartite FD entangled state via two tripartite and three bipartite FD entangled states as the quantum channel, respectively. Conclusions are given in section 4.
By Eq. (3), the state (2) can be rewritten as Eq. (4), where "$\cdots$" represents the 11 remaining terms with $l \neq m$ in $|\psi^{(1)}_l\rangle_1 |\psi^{(2)}_m\rangle_2$ ($l, m = 0, 1, 2, 3$). Clearly, in Eq. (4) only the first four terms can lead to success, while all 12 remaining terms with $l \neq m$ lead to failure. Now let Alice 1 and Alice 2 perform single-particle FD projective measurements on their own particles 1 and 2, respectively, and then inform Bob of their results through the classical channels. According to the measurement results of Alice 1 and Alice 2, the receiver Bob can reconstruct the original state at his side. Without loss of generality, assume Alice 1's measurement result is $|\psi^{(1)}_1\rangle_1$ and Alice 2's result is $|\psi^{(2)}_1\rangle_2$; particle 3 then collapses into the state $\frac{1}{2}(\beta|0\rangle + \alpha|1\rangle + \delta|2\rangle + \gamma|3\rangle)_3$. Bob needs to perform a local unitary operation $U_1$ on particle 3, after which the state of particle 3 evolves into $(\alpha|0\rangle + \beta|1\rangle + \gamma|2\rangle + \delta|3\rangle)_3$, which is exactly the original state $|\varphi\rangle$. Here the unitary operation $U_1$ is one of those defined in Eq. (5), where $I$ is the $2 \times 2$ identity matrix and $\sigma_x$ is the Pauli matrix. If the measurement results of Alice 1 and Alice 2 correspond to one of the other three of the first four terms in Eq. (4), the relation between the results obtained by Alice 1 and Alice 2 and the unitary operations performed by Bob is shown in Table 1. The required classical communication cost is 4 bits ($2 \times \log_2 4$) in the protocol.
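For readers who want to check the correction step numerically, here is a minimal sketch. The collapsed state and the form $U_1 = I \otimes \sigma_x$ follow the example above, while encoding the four-dimensional basis as two qubits ($|0\rangle = |00\rangle$, ..., $|3\rangle = |11\rangle$) and the sample amplitudes are illustrative assumptions, not part of the paper.

```python
import numpy as np

# Illustrative real amplitudes normalized to 1 (not from the paper).
alpha, beta, gamma = 0.1, 0.3, 0.5
delta = np.sqrt(1 - alpha**2 - beta**2 - gamma**2)
original = np.array([alpha, beta, gamma, delta])

# Bob's particle after the measurement outcomes discussed in the text
# (the overall 1/2 normalization factor is dropped).
collapsed = np.array([beta, alpha, delta, gamma])

I2 = np.eye(2)
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
U1 = np.kron(I2, sigma_x)   # swaps |0><->|1> and |2><->|3>

recovered = U1 @ collapsed
assert np.allclose(recovered, original)
print("Bob recovers the original amplitudes:", recovered)
```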
Joint remote preparation of a bipartite FD entangled state
We now consider the situation where the state to be jointly and remotely prepared is a bipartite FD entangled state. In what follows we present two JRSP protocols using different quantum resources as the quantum channel. The first protocol relies on two tripartite FD entangled states, and the second uses three bipartite FD entangled states as the quantum channel.
JRSP by using two tripartite FD entangled states as the quantum channel
Suppose that Alice 1 and Alice 2 wish to help the receiver Bob remotely prepare a bipartite FD entangled state $|\psi\rangle$ given by Eq. (6), where $a, b, c$ and $d$ are real and $a^2 + b^2 + c^2 + d^2 = 1$. Assume that Alice 1 and Alice 2 each know the original state $|\psi\rangle$ only partly, i.e., Alice 1 knows $a_1, b_1, c_1$ and $d_1$, and Alice 2 knows $a_2, b_2, c_2$ and $d_2$. We also suppose that the quantum channels shared by Alice 1, Alice 2 and Bob are two tripartite FD entangled states, given by Eq. (7). Here, particles 1 and 4 belong to Alice 1, particles 2 and 5 to Alice 2, and particles 3 and 6 to Bob, respectively. In order to help Bob remotely prepare the original state, Alice 1 and Alice 2 should perform two-particle FD projective measurements on their own particle pairs (1, 4) and (2, 5), respectively. The measurement bases chosen by Alice 1 and Alice 2 are the set of MOBVs given in Eq. (8). In the resulting expansion, Eq. (9), "$\cdots$" includes the 47 other terms with $g \neq m$ and/or $h \neq n$ in $|\psi^{(1)}_{gh}\rangle_{14} |\psi^{(2)}_{mn}\rangle_{25}$. According to the announced results of Alice 1 and Alice 2, Bob can reconstruct the original state. For instance, suppose Alice 1's measurement outcome is $|\psi^{(1)}_{11}\rangle_{14}$ and Alice 2's outcome is $|\psi^{(2)}_{11}\rangle_{25}$; then particles 3 and 6 collapse into the state $\frac{1}{4}(b|01\rangle + a|12\rangle + d|23\rangle + c|30\rangle)_{36}$. According to Alice 1's and Alice 2's public announcements, Bob should perform the unitary operations $U_1 \otimes U_5$ on particles 3 and 6, and the bipartite FD entangled state (6) is thereby reconstructed. Here the unitary operation $U_1$ is defined by Eq. (5), and $U_5$ is one of the unitary operations $U_j$ ($j = 4, 5, 6, 7$). If the measurement outcomes of Alice 1 and Alice 2 correspond to one of the other 15 of the first sixteen terms in Eq. (9), the relation between the outcomes of Alice 1 and Alice 2 and the unitary operations performed by Bob is shown in Table 2, which lists the correspondence between the measurement results (MR) of Alice 1 and Alice 2 and the local unitary operations $(U_i)_3 \otimes (U_j)_6$ ($i, j = 0, 1, \cdots, 7$) performed by Bob, with $\zeta_{gh} \to |\psi^{(1)}_{gh}\rangle_{14}$ and $\eta_{mn} \to |\psi^{(2)}_{mn}\rangle_{25}$. The required classical communication cost is 8 bits in this protocol.
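As a quick sanity check on the stated costs, the classical communication is just the information needed to announce each sender's measurement outcome; a worked version in LaTeX, using the outcome counts given in the text:

```latex
% Each sender broadcasts one outcome label over the classical channel.
% Single-particle protocol: 4 possible outcomes per sender;
% bipartite protocols: 16 possible two-particle outcomes per sender.
\[
  C_{\text{single}} = 2 \times \log_2 4 = 4 \ \text{bits}, \qquad
  C_{\text{bipartite}} = 2 \times \log_2 16 = 8 \ \text{bits}.
\]
```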
JRSP by using three bipartite FD entangled states as the quantum channel
Suppose the state that Alice 1 and Alice 2 wish to help Bob remotely prepare is still $|\psi\rangle$ (see Eq. (6)). We also assume that Alice 1, Alice 2 and Bob share three bipartite FD entangled states as the quantum channel, where particles 1 and 3 belong to Alice 1, particles 2 and 5 to Alice 2, and particles 4 and 6 to Bob, respectively. As in the previous protocol, Alice 1 and Alice 2 perform two-particle FD projective measurements on their own particle pairs (1, 3) and (2, 5), respectively. The measurement basis chosen by Alice 1 and Alice 2 is still $\{|\psi^k_{gh}\rangle\}$ (see Eq. (8)). The quantum channel $|\Phi\rangle = |\phi\rangle_{12}|\phi\rangle_{34}|\phi\rangle_{56}$ can be written in terms of the basis $\{|\psi^k_{gh}\rangle\}$ as Eq. (12), which contains terms of the form
$\cdots + |G_{3j}\rangle (d|\lambda_{0j}\rangle + c|\lambda_{1j}\rangle + b|\lambda_{2j}\rangle + a|\lambda_{3j}\rangle)_{46} + |G_{4j}\rangle (a|\lambda_{1j}\rangle + b|\lambda_{2j}\rangle + c|\lambda_{3j}\rangle + d|\lambda_{0j}\rangle)_{46} + |G_{5j}\rangle (b|\lambda_{1j}\rangle + a|\lambda_{2j}\rangle + d|\lambda_{3j}\rangle + c|\lambda_{0j}\rangle)_{46} + \cdots$
Here $|G_{pj}\rangle \equiv |G_{pj}\rangle_{1325}$ is given in appendix A; $|G_{0j}\rangle \sim |G_{7j}\rangle$ comprise the 64 terms with $g = m$ and $h = n$ in $|\psi^{(1)}_{gh}\rangle_{13}|\psi^{(2)}_{mn}\rangle_{25}$ ($g, h, m, n = 0, 1, 2, 3$); "$\cdots$" includes the 191 other terms with $g \neq m$ and/or $h \neq n$ in $|\psi^{(1)}_{gh}\rangle_{13}|\psi^{(2)}_{mn}\rangle_{25}$; and $|\lambda_{ij}\rangle \equiv |i, i \oplus j\rangle$, where $i \oplus j$ means $i + j$ mod 4. In Eq. (12) only the first 64 terms (i.e., those involving $|G_{0j}\rangle \sim |G_{7j}\rangle$) can lead to success, while all 192 remaining terms with $g \neq m$ and/or $h \neq n$ lead to failure. For example, assume Alice 1's measurement result is $|\psi^{(1)}_{20}\rangle_{13}$ and Alice 2's result is $|\psi^{(2)}_{21}\rangle_{25}$; then particles 4 and 6 collapse into the state $\frac{1}{8}(c|01\rangle + d|12\rangle + a|23\rangle + b|30\rangle)_{46}$, and Bob should perform $U_2 \otimes U_6$ on particles 4 and 6, after which the original state (6) is reconstructed successfully. If the measurement results of Alice 1 and Alice 2 correspond to one of the other 63 successful terms in Eq. (12), the relation between the results of Alice 1 and Alice 2 and the unitary operations performed by Bob is shown in Table 3, which lists the correspondence between the measurement results (MR) of Alice 1 and Alice 2 and the local unitary operations $(U_i)_4 \otimes (U_j)_6$ ($i, j = 0, 1, \cdots, 7$) performed by Bob, with $\xi_{gh} \to |\psi^{(1)}_{gh}\rangle_{13}$ and $\tau_{mn} \to |\psi^{(2)}_{mn}\rangle_{25}$. In this protocol, the required classical communication cost is also 8 bits.
Conclusion
We have proposed protocols for the joint remote preparation of four-dimensional quantum states by using various types of four-dimensional entangled states as the quantum channel. In these protocols, two senders share an original state which they wish to help the receiver remotely prepare, but each sender knows the state only partly. It is shown that only when all the senders collaborate with each other can the receiver remotely reconstruct the original state. In order to realize the JRSP, the two senders need to perform four-dimensional projective measurements on their own particles, respectively, and then inform the receiver Bob of the measurement outcomes through the classical channel. According to the public information of the senders, the receiver can obtain the original state by applying appropriate unitary operations. These protocols require resources such as bipartite or tripartite four-dimensional entangled states as the quantum channel, single- or two-particle four-dimensional projective measurements, classical communication, and appropriate unitary operations. In principle, our protocols can be generalized to the JRSP of d-dimensional ($d = 2^N$, with $N$ a positive integer greater than 2) quantum states. Furthermore, the classical communication cost required by the JRSP process has been calculated for each of our protocols. | 2010-06-22T03:32:41.000Z | 2010-06-22T00:00:00.000 | {
"year": 2010,
"sha1": "18feed7f9fc90e862923cfeefa7eebed93cd563e",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "18feed7f9fc90e862923cfeefa7eebed93cd563e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
269981173 | pes2o/s2orc | v3-fos-license | Transcriptional Regulation Analysis Provides Insight into the Function of GSK3β Gene in Diannan Small-Ear Pig Spermatogenesis
Glycogen synthase kinase-3β (GSK3β) not only plays a crucial role in regulating sperm maturation but is also pivotal in orchestrating the acrosome reaction. Here, we integrated single-molecule long-read and short-read sequencing to comprehensively examine GSK3β expression patterns in adult Diannan small-ear pig (DSE) testes. We identified the most important transcript of GSK3β, ENSSSCT00000039364, and obtained its full-length coding sequence (CDS) spanning 1263 bp. Gene structure analysis located GSK3β on pig chromosome 13 with 12 exons. Protein structure analysis showed that GSK3β consists of 420 amino acids containing PKc-like conserved domains. Phylogenetic analysis underscored the evolutionary conservation and homology of GSK3β across different mammalian species. Evaluation of the protein interaction network and the KEGG and GO pathways implied that GSK3β interacts with 50 proteins, predominantly involved in the Wnt signaling pathway, papillomavirus infection, the Hippo signaling pathway, hepatocellular carcinoma, gastric cancer, colorectal cancer, breast cancer, endometrial cancer, basal cell carcinoma, and Alzheimer's disease. Functional annotation identified that GSK3β is involved in thirteen GO terms, including six molecular functions and seven biological processes. ceRNA network analysis suggested that DSE GSK3β is targeted by 11 miRNAs. Furthermore, qPCR expression analysis across 15 tissues highlighted that GSK3β is highly expressed in the testis. Subcellular localization analysis indicated that the majority of the GSK3β protein is located in the cytoplasm of ST (swine testis) cells, with a small amount detected in the nucleus. Overall, our findings shed new light on GSK3β's role in DSE reproduction, providing a foundation for further studies of GSK3β function.
Introduction
Spermatozoa are highly specialized cells essential for transmitting genetic information to the next generation. If mammalian sperm are immotile, the oocyte cannot be fertilized even after the sperm are released from the seminiferous tubular epithelium of the testes. Before fertilizing oocytes, sperm must undergo a series of intricate processes in the uterus and fallopian tubes. These processes include capacitation, the acrosome reaction, nuclear condensation and elongation, cytoplasm elimination, and flagellar development prior to their release from the testis [1]. However, hyperactivation and the acrosome reaction are tightly regulated through the phosphorylation of specific proteins. Glycogen synthase kinase-3 (GSK3), a serine/threonine kinase with two different isoforms (α and β), has been associated with sperm maturation in the epididymis [2]. The levels of GSK3α/β in goat sperm increase during transport through the epididymal tail; the serine phosphorylation of GSK3α/β regulates goat sperm motility and the acrosome response by mediating energy pathways in glycolysis and oxidative phosphorylation [3]. Notably, the catalytic activity of GSK3 is greater in immature epididymal head sperm than in mature epididymal tail sperm. However, the pharmacological inhibition of GSK3 alone does not induce movement in quiescent sperm cells [4]. Furthermore, the GSK3α/β protein plays a crucial role not only in regulating sperm maturation in mice but also in orchestrating the acrosome reaction [4]. The absence of GSK3α in mice results in reproductive incapacity, accompanied by compromised sperm function [5].
Glycogen synthase kinase-3β (GSK3β), initially recognized as an enzyme responsible for glycogen synthase phosphorylation and deactivation, is now recognized as a ubiquitous signaling molecule pivotal in various cellular functions. GSK3β is a promising drug target for numerous neurological disorders, including Alzheimer's disease, Parkinson's disease, schizophrenia, and bipolar disorder, as well as for conditions related to energy metabolism and cell death, such as diabetes and cancer [6,7]. The broad relevance of this kinase to various diseases stems from its role as an active protein kinase that regulates nearly 40 protein substrates, all of which are modulated by signaling pathways such as Wnt, insulin, and brain-derived neurotrophic factor (BDNF). Notably, in mouse models of Alzheimer's disease, a reduction in amyloid-β deposition and associated neuronal apoptosis is achieved by inhibiting GSK3β [8,9]. In the neural feedforward response mechanism, the inhibition of Wnt signaling by the neurotoxic amyloid-β peptide results in the loss of inhibitory GSK3β phosphorylation [10]. GSK3β is the primary kinase responsible for phosphorylating the microtubule-stabilizing tau protein, a key factor in the formation of insoluble paired helical filaments and the characteristic neurofibrillary tangles observed in Alzheimer's disease [11,12]. The diverse functions carried out by GSK3β underscore its involvement in numerous cellular processes, encompassing glycogen metabolism, gene expression, proliferation, and development [13].
Integrating high-throughput short reads from second-generation sequencing (Illumina) with long reads from third-generation sequencing (PacBio) enables a more comprehensive capture of full-length transcript information, facilitating the identification of new transcripts, splice variants, and non-coding RNAs, and thereby enhancing our insight into gene expression mechanisms [14]. Here, we employed Pacific Biosciences isoform sequencing (PacBio Iso-Seq) and Illumina RNA sequencing (RNA-seq) technologies to assess the expression and alternative splicing of GSK3β mRNA, as well as the related miRNAs and lncRNAs, in DSE testes. We analyzed the molecular characteristics of the GSK3β gene and its corresponding protein functions, and conducted protein-protein interaction and correlation analyses. Additionally, we constructed a competing endogenous RNA (ceRNA) regulatory network by annotating the GSK3β gene and identified the associated GO terms, miRNAs, and lncRNAs. This study underlines the significance of GSK3β in the testis, providing a valuable resource for further study of the mechanisms and functions of the GSK3β gene in DSE spermatogenesis.
Short-Read RNA-Seq and Long-Read Iso-Seq
We conducted a transcriptomic analysis of GSK3β in DSE testes using an integrated approach combining short-read RNA-seq and long-read Iso-seq. We collected testicular samples from three 12-month-old mature male DSE pigs. We obtained the short-read sequencing data from Novogene Co., Ltd., Tianjin, China, while the long-read sequencing data were obtained from GrandOmics Co., Ltd., Wuhan, China. We merged the unannotated transcripts with annotated transcripts from the Ensembl database to generate a comprehensive annotation file. The filtering and processing of the second-generation sequencing data, coupled with the incorporation of newly identified isoforms from the third-generation sequencing, allowed the generation of BAM files. We visualized GSK3β gene transcripts using the Sashimi Plot function in IGV. Subsequently, we built the pig reference genome (Sus scrofa 11.1) index using STAR-2.5.2 [15]. We calculated gene expression quantification, comprising raw expression levels and normalized expression values (TPM), using FeatureCounts-2.0.1 and Salmon-1.5.1, respectively. Finally, we visualized the expression abundance of the GSK3β gene using the R package Gviz.
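As an illustration of the count-to-TPM normalization mentioned above, the sketch below applies the standard TPM formula to placeholder counts and gene lengths; in the study itself the TPM values come from Salmon, so this is only a conceptual stand-in, not the pipeline's actual code.

```python
import numpy as np

def counts_to_tpm(counts, lengths_bp):
    """Standard TPM normalization: reads per base, rescaled to sum to 1e6."""
    rate = counts / lengths_bp
    return rate / rate.sum() * 1e6

# Placeholder values for three genes (not data from the study).
counts = np.array([1500.0, 300.0, 80000.0])
lengths = np.array([2000.0, 1500.0, 3000.0])
print(counts_to_tpm(counts, lengths))  # one TPM value per gene
```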
Transcript Amplification and Sequence Determination
We designed the primers F1/R1 (F1: GCGTTTATCATTAACCTAACACC; R1: ATTTTCTTTCCAAACGTGACC) to amplify the transcript ENSSSCT00000039364 using Premix Taq™ (Takara, Dalian, China). The total reaction volume of 25 µL comprised 12.5 µL of Premix, 1 µL each of the 10 µM F1/R1 primers, and 1 µL of 50 ng/µL testis cDNA, with the remaining volume filled with H₂O. The amplification program consisted of initial denaturation at 95 °C for 5 min, followed by 35 cycles of denaturation at 95 °C for 30 s, annealing at 61 °C for 30 s, and extension at 72 °C for 80 s, with a final extension at 72 °C for 10 min.
Characteristics Analysis of Transcript ENSSSCT00000039364
To obtain the complete coding sequence (CDS) of the GSK3β transcript ENSSSCT00000039364, we analyzed the sequencing data with Lasergene 7.1. To decipher the feature information of GSK3β, we used ProtParam to estimate the molecular weight, molecular formula, and isoelectric point. We predicted the functional domains, secondary structure, tertiary structure, hydrophobicity profile, transmembrane helices, and signal peptide of the GSK3β protein using the SMART, SOPMA, I-TASSER, ProtScale, TMHMM 2.0, and SignalP 6.0 web servers, respectively. Finally, we assessed the evolutionary relationships of the GSK3β amino acid sequences across different species using MEGA11.
Protein-Protein Interaction Analysis of GSK3β
We constructed the protein-protein interaction network using STRING 11.5. Additionally, for these proteins, we performed GO and KEGG functional enrichment analyses using the R package clusterProfiler. In our analysis, entries with a significance threshold of p < 0.05 were considered statistically significant. Finally, we matched the identified proteins with gene expression values obtained from the transcriptome sequencing data and calculated the expression correlations between them.
Regulatory Network Analysis of GSK3β
We used the annotation in the UniProt database to acquire insights into the biological roles of GSK3β, including its molecular functions and biological processes. To capture miRNAs and lncRNAs regulating GSK3β, we analyzed the transcriptome data using miRanda 3.3 and RNAhybrid 2.1.2. We then visualized the ceRNA (competing endogenous RNA) transcriptional regulatory network using Cytoscape 3.9.1.
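To illustrate the network-assembly step, the sketch below writes hypothetical miRNA-mRNA and lncRNA-miRNA edges to Cytoscape's simple interaction format (SIF); the edge list is invented for illustration, since miRanda and RNAhybrid are command-line tools whose outputs must be parsed separately.

```python
# Hypothetical parsed predictions: (source node, interaction type, target node).
edges = [
    ("ssc-miR-331-3p", "targets", "GSK3B"),
    ("ssc-miR-149",    "targets", "GSK3B"),
    ("lncRNA_example", "sponges", "ssc-miR-331-3p"),  # ceRNA relationship
]

# SIF format: one "source <tab> relation <tab> target" line per edge,
# directly importable into Cytoscape as a network.
with open("ceRNA_network.sif", "w") as fh:
    for src, rel, dst in edges:
        fh.write(f"{src}\t{rel}\t{dst}\n")
```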
Subcellular Localization Detection of GSK3β
Based on the XhoI and KpnI restriction enzyme cleavage sites of the GSK3β CDS sequence and the multiple cloning site of the pEGFP-C1 green fluorescent protein eukaryotic expression vector, we designed the primers F3/R3 (F3: CTCGAGCTATGTCAGGGCGGCCCAGAA; R3: GGTACCTCAGGTGGAATTGGAAGCTGACG) for eukaryotic expression of the cDNA sequence. We constructed the pEGFP-C1-GSK3β eukaryotic expression recombinant plasmid and transfected it into ST cells to localize GSK3β; non-transfected ST cells and cells transfected with the pEGFP-C1 vector served as negative controls. Subsequently, we stained the nuclei and mitochondria of the ST cells with blue Hoechst 33342 and red MitoTracker, respectively. Finally, we captured the expression and localization of GSK3β in ST cells using inverted fluorescence microscopy.
Alternative Splicing of GSK3β
The short- and long-read sequencing results revealed four distinct alternative splicing isoforms in the testes of DSE pigs. Specifically, the novel transcript PB.5234.183 was identified from the third-generation sequencing data. Notably, ENSSSCT00000039364 emerged as the predominant transcript, as shown in Figure 1.
GSK3β Gene Expression Characteristics
The transcript ENSSSCT00000039364 of the GSK3β gene in DSE pig testes is located on chromosome 13 of the pig genome (Sscrofa11.1), with a total length of 227,644 bp. Gene annotation conducted with Gviz delineated transcript ENSSSCT00000039364, which comprises 12 exons and 11 introns, with consistently high expression across all three DSE pig samples (Figure 2a). A 1440 bp fragment of the GSK3β gene was obtained using primers F1/R1 (Figure 2b). Subsequent Sanger sequencing revealed the full-length coding sequence (CDS) of GSK3β to be 1263 bp, encoding a total of 420 amino acids (Figure 2c). The GSK3β protein contains five classes of functional active sites: N-glycosylation, protein kinase C phosphorylation, casein kinase II phosphorylation, tyrosine kinase phosphorylation, and N-myristoylation sites.
Protein-Protein Interaction
The protein-protein interaction analysis revealed 50 proteins that potentially interact with GSK3β (Figure 4a). The subsequent KEGG enrichment analysis indicated that these proteins are mainly involved in pathways such as the Wnt signaling pathway, human papillomavirus infection, the Hippo signaling pathway, hepatocellular carcinoma, gastric cancer, colorectal cancer, breast cancer, endometrial cancer, basal cell carcinoma, and Alzheimer's disease (Figure 4b). Furthermore, GO enrichment analysis indicated that these proteins are primarily involved in β-catenin binding, protein serine kinase activity, DNA-binding transcription factor binding, the Wnt signalosome, the chromosome centromeric region, cell-cell signaling by Wnt, regulation of binding, nuclear transport, and developmental cell growth (Figure 4c). Finally, we matched these proteins with gene expression data from DSE pigs and identified significant correlations between GSK3β and GYS1, LRP6, PIN1, PRKCZ, TP53, AKT3, APC, BTRC, CCND1, DISC1, DPYSL2, DVL2, EIF2B5, and GLI3, with correlation coefficients ranging from 0.99792 to 0.95129 (Figure 4d).
Expression Pattern of GSK3β across Multi-Tissue
Multi-tissue qPCR analysis revealed that the relative expression of GSK3β was highest in the testis of DSE pigs. Conversely, relatively low expression levels were observed in the liver, spleen, lung, kidney, brain, duodenum, colon, seminal vesicle, prostate, urethral gland, and epididymis, and expression was almost negligible in the heart, stomach, and muscle. The relative expression level of GSK3β was significantly higher in testis tissue than in the other tissues (p < 0.01) (Figure 6).
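A minimal sketch of how such relative expression values are commonly derived, assuming the standard 2^(-ΔΔCt) method with a reference gene; the method choice and all Ct values here are illustrative assumptions, not taken from the study.

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """2^-ΔΔCt relative quantification against a calibrator tissue."""
    ddct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2.0 ** (-ddct)

# Illustrative Ct values only: GSK3β vs a housekeeping gene,
# testis (sample) relative to heart (calibrator).
print(relative_expression(ct_target=22.1, ct_ref=18.0,
                          ct_target_cal=28.5, ct_ref_cal=18.2))
```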
Subcellular Localization Results of GSK3β
The subcellular localization analysis indicated that the majority of the GSK3β protein was located in the cytoplasm of ST cells, with a small amount detected in the nucleus (Figure 7).
Discussion
In this study, we conducted an in-depth analysis of the GSK3β transcriptome in DSE pig testis using a comprehensive approach involving short-read RNA-seq and long-read Iso-seq. Our analysis unveiled four prominently expressed isoforms in DSE pig testis. Notably, ENSSSCT00000039364 emerged as the most significant isoform. Consequently, we proceeded with further analysis of this specific transcript. The complete coding sequence (CDS) of the GSK3β gene was amplified from DSE pig testis cDNA, yielding a sequence of 1263 bp encoding 420 amino acids. Protein phosphorylation is a fundamental and widely recognized mechanism for regulating and controlling protein activity. GSK3, which carries multiple phosphorylation sites such as protein kinase C and casein kinase II sites, plays a role in regulating sperm motility and the acrosome reaction via phospho-Ser9-GSK3β, contributing to the regulation of sperm energy metabolism [3]. Certain GSK3β substrates must first be phosphorylated on serine or threonine residues by another protein kinase, with this phosphate group serving as a recognition element that allows GSK3β to phosphorylate the substrate [17]. Moreover, GSK3β is highly conserved across mammalian species, with over 98.8% similarity observed in comparisons with 10 other animals.
GSK3β alone does not guarantee normal sperm capacitation and the acrosome reaction; it must cooperate with other proteins to function effectively in these processes. The protein-protein interaction analysis unveiled that GSK3β potentially interacts with 50 different proteins. According to the KEGG enrichment analysis, these proteins are primarily involved in pathways related to the Wnt signaling pathway, human papillomavirus infection, and other processes. Furthermore, GO analysis revealed their involvement in various biological processes, including protein serine kinase activity and DNA-binding transcription factor binding. Collectively, these signaling pathways constitute an intricate network regulating both spermatogenesis and the development of cancerous lesions. We also found that GSK3β significantly correlated with several proteins, such as GYS1; insulin deficiency can inhibit glycogen synthesis by activating GSK3, which suppresses muscle GYS1 (glycogen synthase 1) and thus maintains the activated form of glycogen phosphorylase [18]. Notably, individuals with type 2 diabetes mellitus (T2DM) experience elevated levels of UDP-glucose (uridine diphosphoglucose), an active intermediate, in the testes. Although GYS1 is identified as a hypoxia-inducible factor, the evaluation of testicular GYS1 levels in T2DM rats revealed no significant alterations [19]. The inhibition of GSK3 by Wnt signaling peaks in mitosis, and the Wnt co-receptor LRP6 (low-density lipoprotein receptor-related protein 6) is activated in a cell-cycle-dependent manner by CCNY (cyclin Y) and its target kinase CDK14 (cyclin-dependent kinase 14) [20]. Furthermore, GSK3/LRP6 interactions play a role in gene expression, which could underlie aspects of sperm physiology [21]. Prolyl isomerase (PIN1) is essential for promoting the production of spermatozoa from spermatogonial stem cells. Moreover, the maintenance of the blood-testis barrier depends on the expression of PIN1 in supporting cells [22]. The inhibition of AKT3 expression leads to cell cycle arrest in embryonic stem cells [23]. Studies using apigenin to treat spermatogonia revealed the down-regulation of AKT3 expression, affirming its role in inhibiting spermatogonial stem cell proliferation and promoting abnormal differentiation [24]. Disrupted in schizophrenia 1 (DISC1) is identified as a risk factor for both schizophrenia and affective disorders. Investigation of the C-terminal structure of mutant DISC1 and an analysis of its expression levels in male and female mice revealed an interruption in the GSK3β signaling pathway, suggesting a significant interaction between these factors [25]. The DVL protein is widely expressed in the body during development. ASPM (abnormal spindle-like microcephaly-associated protein) participates in the degradation of DVL2 by counteracting autophagy, consequently activating the Wnt/β-catenin signaling pathway [26]. In summary, the identification of these protein interactions with GSK3β provides valuable insights, paving the way for further research to deepen our understanding of GSK3β's function and molecular mechanisms in DSE pigs.
MicroRNAs (miRNAs) are short, single-stranded non-coding RNAs, typically 19-22 nucleotides long, that originate from local hairpin structures processed by two RNase III enzymes, Drosha and Dicer. Functionally, miRNAs negatively regulate gene expression post-transcriptionally by binding to complementary sites within the 3′ UTR of target mRNAs [27,28]. These molecules wield significant influence across diverse biological processes, including development, apoptosis, proliferation, differentiation, transformation, and cellular senescence [29,30]. Functional annotation of the porcine GSK3β gene and construction of the ceRNA regulatory network revealed that GSK3β is targeted by 11 miRNAs. Notably, ssc-miR-331-3p, ssc-miR-2320-5p, ssc-miR-149, and ssc-miR-185 have been the focus of more extensive investigation in this regard. Modulating the expression of ssc-miR-331-3p at various stages of adipocyte development can enhance intramuscular fat deposition without a concurrent increase in subcutaneous fat [31]. Several miRNAs may regulate a single gene, and conversely, a single miRNA can influence multiple genes. Notably, ssc-miR-2320-5p and ssc-miR-149 target TRAF3 (TNF receptor-associated factor 3), a versatile molecule within the TRAF family [31,32]. Furthermore, ssc-miR-2320-5p is implicated in host protective responses and parasite growth [33], while ssc-miR-149 is associated with the regulation of precocious traits in boars. It exhibits high expression levels in immature boars and targets the SPATA3 gene, which was shown to be the most significantly down-regulated gene in the testes of infertile patients [34] and which plays a pivotal role in mouse spermatogonia development [35]. Additionally, ssc-miR-185 is involved in immune cell differentiation [33]. In summary, miRNAs are central contributors not only to spermatogenesis but also to gene expression, viral infection, and cancer. lncRNAs are likewise crucial contributors to various biological processes and have emerged as significant regulators in diverse biological contexts. According to the GO analysis, the principal enriched terms are associated with a range of physiological response processes, including protein phosphorylation and the negative regulation of signal transduction, indicating the pivotal role of the GSK3β gene in diverse physiological processes.
GSK3β is predominantly found in the central nervous system, often localized in axons, and it serves as the primary kinase responsible for phosphorylating tau proteins. The overexpression or overactivity of GSK3β increases tau protein phosphorylation, which in turn leads to altered axonal transport and hippocampal neurodegeneration [36], as well as learning disabilities [37]. Notably, GSK3β is expressed in goat sperm, particularly around the acrosome, in the midpiece, and along the principal piece of the tail, with its abundance increasing during epididymal transport [3]. Multi-tissue qPCR analysis revealed that GSK3β is universally expressed across various tissues in DSE pigs, with high expression levels in the testis, underscoring its multifunctionality and significant role within organisms. At the cellular level, our subcellular localization findings revealed predominantly cytoplasmic expression in the ST cell line.
Conclusions
The integration of short-read and long-read sequencing methodologies provided comprehensive insight into the transcriptional regulation and expression of GSK3β. Through an investigation of the transcriptional complexity arising from alternative splicing events within GSK3β, the transcript characteristics, ceRNA-mediated regulation, and expression profiles at both the mRNA and cellular levels were elucidated. Notably, four distinct transcripts were identified. ENSSSCT00000039364 exhibited 12 exons and displayed a conserved amino acid sequence across 11 species. Additionally, 50 proteins interacting with GSK3β were delineated, mainly involving the Wnt signaling pathway, multiple cancers, and Alzheimer's disease. Significant associations were observed between GSK3β and GYS1, LRP6, PIN1, PRKCZ, TP53, AKT3, APC, BTRC, CCND1, DISC1, DPYSL2, DVL2, EIF2B5, and GLI3. GSK3β was primarily involved in thirteen GO terms, encompassing six molecular functions and seven biological processes. Eleven miRNAs were identified as regulators of GSK3β expression. Substantial expression of GSK3β was detected in the testes, predominantly localized within the cytoplasm of ST cells. These findings broaden our understanding of the transcriptional regulatory properties of the spermatogenesis-related gene GSK3β, thereby laying the foundation for further elucidation of its function and molecular mechanism within the testes of DSE pigs, particularly in the context of sperm capacitation and the acrosome reaction.
Figure 1. Transcripts and expression levels of GSK3β from three testes of DSE pigs.
Figure 2. Gene structure of GSK3β. (a) Chromosome location and exon/intron abundance based on transcriptome sequencing; (b) RT-PCR product of the DSE GSK3β gene; (c) GSK3β gene coding sequence and amino acid sequence. M, DL2000 DNA marker; GSK3β, PCR product; double underline, start codon; asterisk, stop codon; letters in upper line, nucleic acid sequence; letters in lower line, amino acid sequence. | 2024-05-24T15:06:42.854Z | 2024-05-22T00:00:00.000 | {
"year": 2024,
"sha1": "f0bb2829b5726911a3259324c9ceccef02fdf7ae",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4425/15/6/655/pdf?version=1716371374",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c47a0dc9667c569ea4173205efa33358a549da61",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
264105270 | pes2o/s2orc | v3-fos-license | Features of Artemia aquaculture technology in Russia, prospects for its use in other temperate and tropical climates
The results of long-term monitoring of the abundance and biomass of Artemia in several model lakes in the south of Western Siberia are presented. From these data, conclusions are drawn about the dynamics of Artemia density characteristic of shallow temperate lakes, associated with low live birth rates: a large first generation of Artemia followed by much weaker subsequent generations. It is proposed to inoculate nauplii into lakes during the period of catastrophic decline in crustacean numbers, which creates a new, powerful generation of Artemia. The results of such experiments are shown for two lakes over two years. It is suggested that this Artemia aquaculture technology may also be applicable in other temperate-climate countries. Laboratory and field studies on shortening the incubation time of cysts with early release into lake brine are presented, and the influence of lake brine salinity on the outcome of early release of nauplii and unhatched cysts is shown. The technology of shortening the cyst incubation period can also be used in subtropical and tropical climates.
Introduction
Aquaculture, as one of the main sources of protein food for humanity, needs a stable supply of live starter feed. Artemia cysts have earned recognition as the easiest-to-handle food among all currently available live feeds for fish and crustacean larvae. Thanks to these cysts, according to the latest data [1], more than 10 million tons of valuable aquaculture species (shrimp, crabs, various marine fish) are grown annually. At present, almost no sturgeon farm in Russia can do without brine shrimp for feeding fish larvae [2,3]. The main world reserves of Artemia cysts are concentrated in a few countries (USA, China, Russia, Kazakhstan, Uzbekistan), where 3.0 to 4.5 thousand tons of cysts (dry weight) are harvested annually. A characteristic feature of commercial Artemia water bodies is that they are drainless, relatively shallow lakes located in a temperate continental arid climatic zone. In a changing climate, the natural reserves of Artemia are very vulnerable.
In many countries with tropical and subtropical climates (Thailand, Vietnam, Kenya, Mozambique, Sri Lanka, Peru, the Philippines, Iran), attempts are being made to grow Artemia in coastal seasonal salt ponds. Such technologies are not suitable for temperate water bodies. The need for research to improve Artemia cultivation technology at the local level is mentioned in many publications [1,4,5,6,7]. Our task was to study the possibility of growing Artemia in natural salt water bodies with a local Artemia population in order to obtain additional cysts. Laboratory and field studies were carried out, the results were evaluated, and prospects for work on the inoculation of nauplii into salt lakes were outlined.
Monitoring of Artemia biomass and abundance in model lakes
In the period from 2001 to 2004, year-round studies of Artemia population density were conducted in 5 lakes (Vishnyakovskoye, Nevidim, Bolshoe Medvezhye, Maloe Medvezhye, and Ebeyty) and, during the development of Artemia crustaceans (from April to October), in 3 further lakes (Novo-Georgievskoye, Cherdynskoye, Ulzhay) (Fig. 1). A total of 227 comprehensive surveys were carried out, which also covered abiotic factors: temperature, oxygen, and salinity (Table 1). The results of these field studies are presented in more detail elsewhere (Van Stappen, Litvinenko et al., 2009). To summarize the monitoring data, we used the ratios of the actual average monthly biomass and abundance of Artemia crustaceans to the corresponding seasonal average values. In 2019-2020, in lakes Okunevo and Karasye, the abundance and biomass of Artemia crustaceans and cysts were studied from June to August, that is, before and after the experiments on inoculation of nauplii.
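The ratio-to-seasonal-average summary used here is simple to reproduce; the sketch below shows the computation on placeholder monthly biomass values (the actual figures are those behind Fig. 4, not these).

```python
import numpy as np

# Placeholder monthly Artemia biomass for one lake, April-October (g/m^3).
monthly_biomass = np.array([0.4, 2.1, 1.8, 0.9, 0.5, 0.7, 0.3])

# Each month's value divided by the seasonal mean: >1 means above average.
ratios = monthly_biomass / monthly_biomass.mean()
print(ratios.round(2))
```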
Laboratory experiments on incubation of cysts and survival of nauplii in brine with different salinity
The material for the experiments was cysts from Filatovo Lake, collected in September 2011. All experiments were carried out in triplicate. The temperature during the experiments was 23-24 °C. The percentage of nauplii hatching was determined by standard methods (Sorgeloos et al., 1986) in water from natural saline reservoirs with salinities of 150‰ (Filatovo Lake), 198‰ (Ebeyty Lake), and 240‰ (Medvezhye Lake), diluted with fresh water to concentrations from 1 to 100‰.
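One common formulation of the standard hatching assay counts free nauplii against all embryos and cysts in a fixed subsample; the helper below is a sketch under that assumption (the counts are illustrative), not the exact procedure of Sorgeloos et al. (1986).

```python
def hatching_percentage(nauplii, umbrellas, unhatched_cysts):
    """Hatching % = free nauplii / (nauplii + umbrella-stage + unhatched)."""
    total = nauplii + umbrellas + unhatched_cysts
    return 100.0 * nauplii / total if total else 0.0

print(hatching_percentage(nauplii=78, umbrellas=5, unhatched_cysts=17))  # 78.0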
In the experiments to study survival, cysts were incubated in solutions of brine from Lake Ebeyty diluted with fresh water. Concentrations of 0.5-195‰ were tested.
In the experiments on reducing the incubation period, a 25 mL sample was taken from the incubation medium every 2 hours and passed through a gauze sieve. The filtered material (cysts and nauplii) was placed in 50 mL vessels with lake brine. These samples were examined on day 1 from the start of incubation and on day 2 after being placed in the brine. The numbers of cysts and nauplii were counted under a stereoscopic microscope.
Field studies during the period of incubation of cysts and the release of nauplii into a natural salt lake
The experiments were carried out on July 09-26, 2019 and July 14-20, 2020 on Okunevo Lake, and on July 08-13, 2020 and July 14-24, 2021 on Karasye Lake. The cysts were incubated in 10 m³ frame pools located on the shore of a salt lake. The incubation medium, with a salinity of 20-50‰, was a mixture of fresh water and water from the salt lake. During incubation, lighting installations, aeration by airlifts, and generators with a total capacity of 7.5 kW were used. Figure 2 shows a diagram and a photo of the incubation unit. The unit was located on the lake shore for convenient release of nauplii into the lake and pumping of water from the lake into the unit. Fresh water for dilution was supplied either from nearby freshwater lakes using water pumps or by water trucks.
Monitoring of saline lakes in the south of Western Siberia
Long-term, year-round monitoring of abiotic and biotic factors in 5 model hypersaline lakes (Vishnyakovskoye, Nevidim, Bolshoye Medvezhye, Maloye Medvezhye, and Ebeyty), as well as in 3 lakes studied only during the growing season (Novo-Georgievskoye, Cherdynskoye, Ulzhay), located in the southern part of Western Siberia (Fig. 3), revealed the following regularities in the development of Artemia populations [8,9,10]:
- overwintering as cysts;
- the first generation of crustaceans is the most numerous, owing to the hatching of nauplii from overwintered cysts once the water warms to 5 °C (mid-April);
- the subsequent 2nd and 3rd generations are progressively weaker owing to low live birth rates;
- the crustaceans die off in mid-October, after the temperature falls stably below 5 °C.
Fig. 3. Seasonal dynamics of the total number of brine shrimp in model lakes
According to the studies conducted, during the period when commercial stocks of cysts should be forming in the water body (August-September), sexually mature crustaceans as a rule occur only in small numbers.
Figure 4 shows the overall dynamics of the abundance and biomass of Artemia crustaceans, expressed as the ratio of actual values to seasonal average values. It turned out that the maximum abundance of Artemia crustaceans, exceeding the seasonal average by a factor of 3, is observed in late April, followed by a sharp decline until the end of the growing season. During the periods of the 2nd and 3rd generations of the crustaceans (June and September), this decline slows somewhat. In the dynamics of Artemia biomass, the first peak is observed in June, i.e., in the period when the crustaceans of the first generation reach the sexually mature stage. The biomass of Artemia at this time exceeds the seasonal average values by a factor of 2. The next peak of Artemia biomass (smaller than the first and below the seasonal average) is observed in September. Thus, from June to August there is a sharp decrease in the abundance and biomass of Artemia crustaceans due to the die-off of the adult stages of the 1st, most numerous generation. At the same time, the nutrient-rich water body (owing to the large amount of organic matter from dead crustaceans and slow decomposition due to the high salinity) remains largely empty in the second half of summer. Inoculation of nauplii during this period makes it possible to create an additional strong generation and thus double the productivity of the water bodies.
The idea of inoculating Artemia nauplii during the period of natural population decline was first voiced at the International Conference on Salt Lakes in 2017 [11]. In the following years, a number of experiments were carried out to put this idea into practice.
Laboratory studies to determine the optimal salinity for cyst incubation
First, laboratory studies on the incubation of cysts in water with different salinities were carried out, using natural lake brine diluted with tap water to the required concentrations of 1-100‰. The hatched nauplii were intended for release into a natural water body with relatively high salinity (100‰ and more), so it was necessary to select the most suitable incubation salinity to reduce osmotic shock. Figure 5 shows that maximum hatching is observed at the lowest salinity values (1-15‰), while salinities above 45‰ produce a significant decrease in the hatching percentage. A salinity of 35-45‰ was therefore recommended for incubating cysts destined for inoculation into natural water bodies. The study of nauplius survival after release into media with salinities from 0.5 to 195‰ showed (Fig. 6) that on the second day (48 h) after release, 100% survival was recorded at salinities of 45-180‰; at 60 h from the beginning of the experiment, the maximum survival (about 80%) was observed at salinities of 45-120‰. A decrease in survival to 48-59% was observed at both low and high salinities, while in water with a salinity of 135-165‰ the survival of the crustaceans was 67-69%.
Field studies of nauplius inoculation into saline lakes
The first experiment on nauplius introduction, in Ulzhay Lake, was conducted in August 2015 (see Table 1). The lake is drainless and located in the steppe zone; its salinity varied from 49 to 235 g/L across years and periods of the season. During the experiments the salinity was about 120‰. Two experiments were conducted in total: 50 kg of dry cysts were incubated in the first and 30 kg in the second. The quality of the cysts was relatively high: impurities 3%, hatching 78%, moisture content 6%. The experimental conditions are summarized in Table 2. Incubation lasted 24 hours in the first experiment and 36 hours in the second; the longer duration in experiment 2 was due to low air temperature. Hatching exceeded 60% in both cases. The high cyst density of 5 g/L of dry cysts caused the oxygen content to fall to 0.7 mg/L by the end of the experiment, so the cyst density was reduced in the next experiment. In parallel, hatching experiments were conducted in lake brine (salinity 120‰); no hatching of nauplii was observed within 48 h.
At the end of incubation, the hatched nauplii were released into a pen (a section of the lake fenced with fine mesh fabric), and the ratio of live to dead nauplii was used to determine survival in the lake brine. On the 4th day, survival was 47% in experiment 1 and 59% in experiment 2.
Thus, the feasibility of incubating cysts in large volumes (10 m³) beside a water body and the success of nauplius inoculation into a natural water body were demonstrated.
The following studies were devoted to refining the technology for such cultivation.
In 2019 and 2020 on Lake Solenoye, and in 2020 and 2021 on Lake Karasye (see Table 1), experiments were conducted to determine the optimal incubation parameters: the start time of incubation and its duration, cyst density, the amount of hydrogen peroxide used for cyst activation, and the method of releasing Artemia nauplii into the water body (passive or active). The most interesting of these parameters, the reduction of the incubation time, is presented in this article.
Under standard conditions, the incubation of cysts used to produce starter live feed lasts 24-48 hours. This is dictated by the need for complete separation of the cyst shells from the nauplii, because when these shells and unhatched cysts enter the intestinal tract of fish and crustacean larvae, they block it, leading to death. When nauplii are inoculated into a water body, there is no such requirement. We tested the possibility of hatching dry cysts in lake brine without incubation, and after incubation for 2-23 h, both under laboratory conditions (with brine salinities of 101, 125, 225, and 333‰) and in the incubation facilities on the salt lakes Solenoye and Karasye.
Both laboratory and field studies showed no hatching of cysts in lake brine with a salinity above 100‰. It is known that complete hydration of cysts is necessary for nauplius hatching. To determine the hydration time, we followed the incubation of dry cysts in an incubation medium with a salinity of 42‰ at 27 °C (Litvinenko et al., 2021). It turned out that by 6 h of incubation more than 80% of the cysts were hydrated, and by 10 h almost 100%. The onset of hatching, characterized by shell rupture and embryo emergence, occurs en masse at 14-16 h of incubation; from 18 h onward, the first free-swimming nauplii appear. From 22 h of incubation, dead nauplii begin to appear (Fig. 7).
The experiments on reducing the incubation time of cysts (the results are presented in Figure 8) provided important answers regarding the feasibility of such a reduction. The experiments were conducted with cysts from two populations (1, Ebeyty; 2, Maloe Yarovoe). Some differences in the results indicate the influence of cyst origin on these processes. Common to both populations is the high mortality of nauplii at a brine salinity of 330‰ already on the second day of the experiment, which gives good reason to exclude lakes with very high salinity from inoculation experiments.
Comparing the data of the 24+ test, i.e., the standard condition (placement in brine after 24 h of incubation), with earlier releases into brine, assessed on the second day of the experiment, is a more illustrative indicator not only of the possibility of shortening the incubation period but also of its necessity. Experiment 1 showed that at a brine salinity of 100‰ the best results were obtained in the 16+, 18+, 20+, and 22+ tests, while at salinities of 125‰ and 225‰ the best results were obtained in the 20+ and 22+ tests. In experiment 2, at a brine salinity of 100‰ the best results were observed in the 10+, 12+, and 16+ tests; at 125‰, in all tests from 12+ onward (except 22+); and at 225‰, in the 16+ test. Comparison of the data on nauplius hatching at 24 h and 48 h shows (see Fig. 8) that nauplius hatching continues in lake brine, i.e., embryos at the umbrella stage develop into fully formed nauplii in the lake brine. This process is particularly intensive in experiment 1 in brine with a salinity of 100‰, and in experiment 2 in brine with salinities of 100 and 125‰.
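A compact way to organize such "t+" comparisons is sketched below: for each transfer time t (hours of incubation before release into brine), store the hatched fractions observed at 24 h and 48 h and compare them against the standard 24+ baseline; all numbers here are placeholders, not the data in Fig. 8.

```python
# transfer_time_h -> (hatched fraction at 24 h, hatched fraction at 48 h)
tests = {
    16: (0.42, 0.61),
    18: (0.48, 0.63),
    20: (0.55, 0.64),
    22: (0.58, 0.62),
    24: (0.60, 0.60),   # standard "24+" condition
}

baseline = tests[24][1]
for t in sorted(tests):
    h24, h48 = tests[t]
    flag = "comparable to 24+" if h48 >= baseline else "below 24+"
    print(f"{t:>2}+ : 24 h = {h24:.2f}, 48 h = {h48:.2f} ({flag})")
```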
Experiments on shortening the incubation period were also conducted under field conditions in 2019-2021 on lakes Okunevo and Karasye. The results of these experiments can be judged clearly from the hydrobiological samples taken on the eve of the experiments and afterwards (Fig. 9).
Discussion
The dynamics of Artemia population density revealed in our study of shallow lakes in the south of Western Siberia (a maximum during the first generation and low density of subsequent generations, due to low live birth rates) is not specific to shallow water bodies only. For example, similar dynamics have been observed in some years in relatively deep water bodies of Russia, as well as in the largest commercial water body, Great Salt Lake [13]. Therefore, the geography of use of the proposed method of increasing cyst production can be expanded.
Among the vast number of publications on Artemia, the main topic concerns the use of nauplii as food for fish and crustacean larvae [14]. The need to incubate dry cysts to obtain nauplii is determined by their dehydrated state: it is known [12] that only fully hydrated cysts can initiate metabolism. The use of wet cysts instead of dry cysts (to shorten the incubation time) causes difficulties in storage and, at high temperature, rapid spoilage. Therefore, the incubation of dry cysts in a medium of lower salinity is a necessary element in the technology of Artemia inoculation into saline water bodies. During incubation, cysts pass through the hydration stage, embryo stages 1 and 2, and the nauplius stage. Optimizing this process is very important for good results and cost savings.
Inoculation of cysts into salt ponds lacking Artemia was first performed in 1977 in Macau (Brazil) with 250 g of Artemia franciscana cysts. A few months after this inoculation, the first ton of cysts was collected [4,7,15]. Since then, introduced Artemia has been found continuously in Brazilian salt water bodies. Such examples of high productivity indicate the considerable potential of Artemia under favorable conditions. The relatively low annual production of Artemia in Brazil, 2-4 tons of cysts and 25-30 tons of biomass, is attributed to the limited adoption of effective management practices [7].
The technology of producing Artemia cysts and biomass by inoculating Artemia nauplii into saline evaporation ponds is well developed in India, Iran, Thailand, Vietnam, and other countries with tropical and subtropical climates in the Eastern Hemisphere. According to literature data [1], the most stable cyst yields have been achieved in Vietnam, where about 40 tons of cysts are obtained annually. This technology involves the use of organic and inorganic fertilizers and supplemental feed, which makes it possible to obtain 17-25 kg/ha of cysts from a pond area of 1200 ha [16].
Significant Artemia harvests have been observed in coastal, highly eutrophic ponds in China, located near Bohai Bay. As early as the 2000s, these ponds with small areas (less than 100 ha) had cyst yields of 3-60 kg/ha raw weight (~1.5-30 kg/ha dry cysts). In contrast to Vietnam and Thailand, Artemia cultivation there was conducted in an extensive way, i.e., without fertilizer and feeding [4]. This method of pond cultivation is relatively similar to the cultivation in our studies. One difference is that the saline lakes have a significant supply of cysts in spring, providing a strong first generation of Artemia; beginning in July, owing to low live-birth rates, Russian salt lakes are generally sparsely populated with Artemia, which brings them closer to the ponds of Bohai Bay. Another difference is that the Bohai ponds use not only local Artemia nauplii but also A. franciscana nauplii for inoculation, whereas in Russia only local Artemia populations are used for this purpose. Artemia nauplius hatcheries have recently been built in China, which, unlike the hatcheries we used, have a smaller volume and are located indoors at a considerable distance from the nauplius release sites.
The introduction of Artemia cysts and nauplii into natural saline lakes is poorly covered in the literature. As a rule, a water body with salinity suitable for Artemia development is naturally colonized by this crustacean after some time (via waterfowl or wind). Artemia-free lakes with conditions suitable for Artemia development have been found in high-mountain Tibet. The introduction of Artemia sinica into the Tibetan lake Dangxiong Co (China), located at an altitude of 4475 m above sea level, with an area of about 56 km2, an average depth of 12 m and a salinity of 120-180 g/l, was studied in 2004. According to the literature [17], the introduction of nauplii obtained from 850 g of cysts was sufficient to increase the abundance of the crustaceans to 20 ind./m3 in 2006 and to 1950 ind./m3 in 2013. Based on the results of this introduction, preliminary calculations showed that up to 150 tons of cysts could be collected from the lake per year, as well as 3.2 thousand tons of frozen or 350 tons of dried biomass of adult Artemia.
We found no data in the literature on the early release of embryos into lake brine. In our study, the intentional shortening of cyst incubation and release of unhatched embryos into lake brine, undertaken in order to speed up and reduce the cost of lake inoculation, showed that this technology is promising.
Thus, the use of the presented Artemia aquaculture technology in other temperate countries is possible, provided that the live-birth rate of local Artemia populations is low and the abundance and biomass of the crustaceans decrease sharply after the first generation. Introducing nauplii at optimal concentrations can form a generation as strong as the first one and thus double the productivity of such water bodies.
The results of the studies on reducing cyst incubation time and early release into lake brine make it possible to shorten the release time and reduce the cost of the incubation and inoculation procedure. This part of the technology may well be used in countries with subtropical and tropical climates.
Fig. 2. Scheme of the incubation plant (A), photo of the plant (B), two plants at night (C), hatching experiment in lake brine (D), release of nauplii into the lake (E).
Fig. 4. Seasonal dynamics of the total number of brine shrimp in the model lakes.
Fig. 6. Survival of nauplii after their inoculation into brine of different salinities. The resistance of Artemia nauplii to hypersaline shock was thus clearly demonstrated.
Table 1. Characteristics of the saline lakes during the study period.
"year": 2023,
"sha1": "4d6409e177f156919855cea3fe9cc1b20b3626ca",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2023/68/e3sconf_itse2023_01047.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "683e5e510b637f35ad222c9e54cb0f06f6e146a2",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
Generation of ultra-sound during tape peeling
We investigate the generation of the screeching sound commonly heard during tape peeling using synchronised high-speed video and audio acquisition. We determine the peak frequencies in the audio spectrum and, in addition to a peak frequency at the upper end of the audible range (around 20 kHz), we find an unexpectedly strong sound at a high frequency far above the audible range, typically around 50 kHz. Using the corresponding video data, the origins of the key frequencies are confirmed as being due to the substructure "fracture" bands, which we herein observe both in high-speed continuous peeling motions and in the slip phases of stick-slip peeling motions.
Tape peeling is a simple yet beautiful example of the fracture process, a very complex phenomenon which can be viewed across many different scales 1 . Moreover, the peeling of an adhesive tape is now known to exhibit some truly unexpected physics, such as the emission of X-rays 2 . In general, two different types of peeling motion can be observed, namely "stick-slip" or "continuous" peeling. The former has been the subject of significant study 1,3-5 at typically low peeling speeds of O(10) cm/s due to the production of a characteristic sound audible to the human ear, whilst the latter has been less studied. For extremely low peeling speeds (O(10^-5) m/s), a sawtooth pattern was observed at the peeling front 6 .
Acoustic sampling of tape peeling 1 allowed extraction of the periodicity (i.e., frequency) of the stick-slip cycles for various velocities, whilst direct high-speed imaging (at 16 kfps) of the stick-slip motion was performed 5 to extract the period and duration of the slip and stick phases, allowing construction of spatiotemporal representations of the peeling point. Further to these studies, ultra-high-speed imaging at up to 1 Mfps was used 7 to reveal new insight into the stick-slip motion, with the observation of intermediate or "sub-structure" fracture bands which occur in the slip phase and move rapidly across the width of the tape at speeds of up to 500 m/s. We now extend these previous experimental studies by performing concurrent (synchronised) high-speed video imaging and audio acquisition of rapid tape peeling, where the typical peeling speeds are between 3 and 15 m/s. We examine both peeling regimes (stick-slip and continuous), with an emphasis on determining the origins of the peak acoustic frequencies. Surprisingly, we find the emergence of an unexpectedly strong sound in the ultrasonic domain with typical frequencies around 50 kHz.
Experimental details
Basic setup. We use an experimental setup, shown in Figure 1, similar to that used in 7 , whereby a strip of tape was initially stuck onto a thick, transparent glass plate which in turn was clamped into a solid steel mount approximately 10 cm above an optical table. This allowed placement of a 45° mirror underneath for optical access. Illumination from above was diffused by the tape itself, rendering good contrast in the video sequences. The tape was then peeled rapidly by pulling upward on the free end, whilst a bar placed 3-5 cm above and away from the viewing region on the glass plate ensured a reproducible angle (30-45°) between the glass plate and the detached tape for a given number of repeat trials. Four different tapes were used herein, namely 3M Scotch Invisible tape, 3M Scotch transparent, Sellotape Invisible and HomeLife Invisible. All tapes have similar backing and adhesive layer thicknesses (~20 µm).
Video. We capture the peeling events using a high-speed video camera (Phantom V1610) operating at 48,000 fps. This frame rate was chosen due to the requirement of an exact number of audio samples per video frame, whereby the audio sampling rate was 192 kHz. Both the video and audio recordings were started by a common trigger.
Audio. The sound generated by the peeling process was captured using a class I measurement microphone and an analog-to-digital converter with a sample rate of 192 kHz, allowing for the capture of frequencies up to 96 kHz. The microphone itself was placed 15 cm away from the location of the tape peeling in the view of the camera and the signal was captured using a Pro Tools digital audio workstation. The raw waveform signal was then analysed using Pro Tools and a custom-built spectrograph application. The frequency spectra and spectrograms were generated by performing a fast Fourier transform (FFT) and a wavelet transform, respectively, in order to identify the dominant frequencies for either the entire sequence (in the case of continuous peeling) or for individual slip events (in the case of stick-slip peeling). The audio samples were offset from the actual trigger point to account for the speed of sound in air due to the location of the microphone, and we verified that this offset was constant for a number of different realisations.
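As an illustration of this processing chain, the sketch below shows how the spectrum, spectrogram and speed-of-sound offset described above can be reproduced for a 192 kHz recording. It is not the ProTools/custom application actually used; the file name, the mono input and the use of an STFT spectrogram in place of the wavelet transform are our assumptions.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

MIC_DISTANCE = 0.15      # microphone 15 cm from the peeling point (m)
SPEED_OF_SOUND = 343.0   # approximate speed of sound in air (m/s)

fs, x = wavfile.read("peel_trial.wav")   # hypothetical mono recording
x = x.astype(float)

# Shift the samples to compensate for the acoustic travel time to the
# microphone, mirroring the constant offset applied to align audio and video.
delay = int(round(MIC_DISTANCE / SPEED_OF_SOUND * fs))  # ~84 samples at 192 kHz
x = x[delay:]

# Frequency spectrum of the whole sequence (as used for continuous peeling).
window = np.hanning(len(x))
spectrum = np.abs(np.fft.rfft(x * window))
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
mask = freqs > 1_000.0                    # ignore the DC/low-frequency end
peak = freqs[mask][np.argmax(spectrum[mask])]
print(f"peak frequency: {peak / 1e3:.1f} kHz")

# Time-frequency representation to localise individual slip events; an STFT
# spectrogram stands in here for the wavelet transform.
f, t, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=192)
```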
Observations from video data
Select frames from a typical video sequence are shown in Figure 2.
Here, the direction of the detachment front is from top to bottom (see also the supplemental movie for this Figure); the time between successive frames is 2 ms and each frame corresponds to the beginning of a slip event. Each of these frames shows the initial motion of detachment at the beginning of discrete "stick-slip" regions, where we observe the substructure or "fracture" bands, first reported in 7 . Figure 3 shows a close-up view of this phenomenon from a frame taken 40 ms prior to the sequence shown in Figure 2. The fracture bands are clearly visible and stream from right to left in this realisation. It is precisely these bands that result in waves in the detached tape, which have been indicated by the arrows on the right. Whilst we cannot observe the motion of the tip of the fracture bands frame-by-frame, due to insufficient temporal resolution, we estimate their speed by noting that each band travels the entire width of the video frame within one frame, thus giving speeds of up to 500 m/s, in agreement with previous calculations 7 . In all experimental trials, whether the peeling regime is stick-slip or continuous, we find that the fracture bands are equally spaced apart by approximately L_fb = 120-200 µm, depending on the brand of tape being used, but that the spacing is constant for each brand. The exact cause of this particular spacing is currently unknown; however, it is expected to be intimately linked to the adhesive and rheological properties of the tape since it appears to be more or less independent of the peeling speed and thus the applied load.
In general, there is a monotonic increase in the length of the slip phase with peeling speed or, equivalently, with the speed of the slip phase, as shown in Figures 4(a) and (b). This is to be expected, as one would see longer slip phases with increasing speed as the peeling motion transitions towards the continuous peeling regime. Within each of these slip phases, there is an exact number of fracture bands that occur at a consistent separation distance, L_fb, and it is the speed of each slip phase and this separation distance which determine the frequency of the sound, as we show herein.
Audio data
A plot of the raw time-domain acoustic signal from the same realisation as in Figure 2 is shown in Figure 5(a), and the corresponding frequency spectrum and frequency-time spectrogram in Figures 5(b) and 5(c), respectively. This data (1388 audio samples) encompasses the entire length of the captured video sequence (7.2 ms) and clearly depicts a series of discrete events, which are precisely the "slip" phases observed in the video. This is verified by checking the exact start time and duration of each of the slip events in the video and audio data, which coincided exactly after taking into account the speed-of-sound offset due to the location of the microphone. The frequency spectrum clearly shows a first peak frequency at 22 kHz, which is also observed as the dark regions in the 3rd, 4th, 5th and 6th slip phases in Figure 5(c).
In order to examine the origins of this frequency in more detail, Figure 6 shows the raw audio signal from the 4th individual slip cycle, where we can clearly make out 12 discrete peaks, which corresponds precisely to the number of fracture bands that occur during this slip cycle. The images below show the progression of the peeling front with the number of completed fracture bands indicated in red. For this particular slip cycle we thus have 12 bands occurring in ~540 µs, leading to a frequency of 22 kHz, which is precisely the peak seen in Figure 5(b) and the dense region in Figure 5(c). Note that this frequency can also be calculated from the wavespeed and wavelengths in the detached tape, but since these are not always in focus in the video sequences, and since the waves are caused by the fracture bands anyway, we use the observations of the fracture bands as a more reliable method of predicting the audio frequency.
Interestingly, though, we also note another dominant high frequency around 50 kHz, which occurs at approximately the same magnitude as the low-frequency (≤20 kHz) sound in the audible range. The spectrogram in Figure 5(c) further shows that this high-frequency sound, around 50 kHz, is most prominent in the 2nd, 6th and 7th discrete slip events for this particular trial. Inspection of the video sequences for these particular individual slip events indicates that the fracture bands occur at rates of 48-53 kHz, which is confirmed by inspection of the audio signal of these slip cycles, shown for example in Figure 7 for the 7th slip cycle. Here, the markers on the peaks in the middle of the cycle clearly correspond to the peak around 52 kHz in Figure 5(b).
In contrast to this stick-slip motion, the continuous peeling regime exhibits a qualitatively different frequency spectrum, as shown by Figure 8, which shows the equivalent raw signal, frequency spectrum and spectrogram for a continuous peeling trial. Here, we can see a very clear peak in the frequency spectrum and spectrogram centred around 49 kHz. In this particular realisation, the peeling speed U_peel = 7.2 m/s and the spacing between the individual fracture bands L_fb ≈ 150 µm, thus giving a rate of generation of the bands of U_peel/L_fb = 48 kHz, again in excellent agreement with the captured peak audio frequency. Figures 5 and 8 clearly show a dramatic difference between the two peeling regimes, which is a consistent feature across all brands of tape and is highlighted further in Figures 9(a) and (b), showing both raw acoustic signals and corresponding frequency spectra for four realisations each of stick-slip (Figure 9(a)) and continuous peeling (Figure 9(b)) motions. The raw signals clearly represent the difference between the two types of motion, whereby the periodic structure seen in stick-slip peeling is absent for the continuous peeling examples. Moreover, the frequency spectra for the continuous peeling examples exhibit much more pronounced peaks in the high-frequency regime, between 40 and 60 kHz, compared to stick-slip, and it is noted that the strength of this sound is actually higher than that of the audible-range frequencies.
As with Figure 8, we can rationalize this observation by using the video data to assess the average speed of the peeling front in the continuous peeling regime, whereby U_peel ≈ 10 m/s. Recalling that the spacing of the individual fracture bands is L_fb ≈ 200 µm, we see that the generation of the fracture bands occurs at a rate of approximately 50 kHz, thus corresponding very closely to the peak frequency of the captured acoustic signal for these trials. As such, we confirm the presence of both audible and ultrasonic sound, where the peak frequencies can both be attributed to the fracture bands, which occur both in continuous peeling motions and in the individual slip events of stick-slip peeling. This is shown graphically in Figure 10, where we have plotted the peak captured frequency for multiple tapes against the predicted frequency based upon the generation rate of the fracture bands, U_peel/L_fb.
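The frequency prediction used in both continuous-peeling examples above is simply the band generation rate f = U_peel/L_fb; a minimal sketch with the two sets of measured values quoted in the text:

```python
def fracture_band_frequency(u_peel_m_s: float, l_fb_m: float) -> float:
    """Predicted peak acoustic frequency (Hz): the fracture-band generation rate."""
    return u_peel_m_s / l_fb_m

# Values quoted in the text for the two continuous-peeling examples.
print(fracture_band_frequency(7.2, 150e-6))    # 48000.0 Hz = 48 kHz
print(fracture_band_frequency(10.0, 200e-6))   # 50000.0 Hz = 50 kHz
```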
Finally, by using a cross-over filter (a pair of symmetrical low-pass and high-pass 3rd-order filters) at 20 kHz, we estimate the total percentage of the sound energy in the ultrasonic domain to be between 13% and 40% for stick-slip trials, and between 46% and 79% in most cases for continuous peeling, showing that, irrespective of the peeling motion, there is a significant portion of sound energy above the human audible range.
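A scipy-based stand-in for this energy estimate might look like the sketch below; the exact filter realisation used in the original analysis is not specified beyond a symmetrical 3rd-order low-/high-pass pair at 20 kHz, so the choice of Butterworth filters is an assumption.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def ultrasonic_energy_fraction(x: np.ndarray, fs: float = 192_000.0,
                               crossover_hz: float = 20_000.0) -> float:
    """Fraction of the signal energy above the cross-over frequency."""
    sos_lo = butter(3, crossover_hz, btype="lowpass", fs=fs, output="sos")
    sos_hi = butter(3, crossover_hz, btype="highpass", fs=fs, output="sos")
    e_lo = float(np.sum(sosfilt(sos_lo, x) ** 2))   # audible-band energy
    e_hi = float(np.sum(sosfilt(sos_hi, x) ** 2))   # ultrasonic-band energy
    return e_hi / (e_lo + e_hi)
```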
Conclusions
In summary, we have performed experiments to extend previous revelations about the peeling of an adhesive tape. Both continuous peeling and stick-slip regimes were observed and we have confirmed the presence of the fracture bands first observed by Thoroddsen et al. 7 in both. Furthermore, audio acquisition concurrent with imaging allowed us to assess the sound generated in this process and, in line with the conjecture in 7 , there is sound in the ultrasonic domain. Moreover, and quite surprisingly, this sound is as strong as and, in many cases, stronger than that in the audible domain. By cross-examining the audio signals with the video data, we find that the substructure fracture bands are responsible for the dominant frequencies of the sound generated in this process. This finding adds to the unexpected and fascinating physics of this seemingly simple and ubiquitous phenomenon. Correlating the peeling speed with the peak audio frequencies for a range of substrates and applied loads is the subject of ongoing work and will be addressed in the future.
"year": 2014,
"sha1": "d5d90615feed69316313ec883b977708718815f7",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/srep04326.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d5d90615feed69316313ec883b977708718815f7",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
Impact of Climate Change and Variability on Wheat and Corn Production in Buenos Aires, Argentina
From the Global Historical Climate Network (GHCN-V3), monthly mean summer (DJF) temperature (1856-2012) and total precipitation (1861-2012) are analyzed in correlation with four climate modes and sunspot number to better understand the role of teleconnections in the climate of Buenos Aires (Argentina). A general increase in temperature and precipitation was observed: temperature has increased by about 1.8˚C and precipitation by about 300 mm in the past century and a half. Indices of the Arctic Oscillation (AO), Pacific North American pattern (PNA), Antarctic Oscillation (AAO), and El Niño-Southern Oscillation (ENSO) are evaluated to study their effects on wheat and corn production and export. AO and PNA show strong relationships with the temperature and precipitation received. AAO and ENSO show strong negative correlations with precipitation patterns and weak correlations with temperature. Sunspot number shows a positive correlation with temperature. ENSO phases are strongly linked with wheat and corn production and export: during El Niño, Buenos Aires tends to experience extremely wet summer weather, causing soggy fields, and during La Niña extremely dry summer weather, causing drought. Both of these conditions reduce wheat and corn production and export.
Introduction
Buenos Aires, Argentina is located at 34.6˚ south latitude and 58.5˚ west longitude. The station is 25 m above sea level and located at Buenos Aires' international airport, 30 km from the Atlantic Ocean and 9 km from the urban center. Because of its location in the southern hemisphere, Buenos Aires' seasons are opposite to those of the northern hemisphere: summer is in December, January and February (DJF), winter in June, July and August (JJA), spring in September, October and November (SON) and autumn in March, April and May (MAM). Monthly average temperature and total precipitation data for Buenos Aires are obtained from the Global Historical Climate Network (GHCN-V3).
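For readers wishing to reproduce the seasonal aggregation, a minimal sketch is given below; it assumes the GHCN monthly series has been exported to a simple CSV (the file name and column names are illustrative, not the native GHCN-V3 format), and assigns each December to the following summer so that a DJF season groups consecutive months.

```python
import pandas as pd

# Hypothetical CSV export of the GHCN-V3 monthly station record.
df = pd.read_csv("buenos_aires_ghcn_monthly.csv")

# A December belongs to the DJF season labelled by the January that follows it.
df["season_year"] = df["year"] + (df["month"] == 12).astype(int)
djf = df[df["month"].isin([12, 1, 2])]

summer = djf.groupby("season_year").agg(
    djf_temp_c=("tavg_c", "mean"),       # DJF mean temperature
    djf_precip_mm=("precip_mm", "sum"),  # DJF total precipitation
)
print(summer.head())
```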
Buenos Aires' average summer temperature is 23˚C for the period 1856-2012. Due to its proximity to the South Atlantic Ocean, the average annual temperature fluctuates very little. The summer temperature ranges approximately from a low of 21˚C to a high of 25.3˚C, and the winter temperature from a low of 6.3˚C to a high of 13.3˚C. Average summer precipitation is about 283 mm. The distribution shows seasonal periods of low to intensely high precipitation amounts: the period of intense precipitation is during the autumn months, with a total average of 299 mm, while low amounts of precipitation fall in the winter months, with a total average of 184 mm. Precipitation in the summer months supports farming production, and failure of precipitation leads to devastating economic losses [1] [2]. Wheat and corn are two of the major crops that Buenos Aires produces and exports. The climate of Argentina is very sensitive to the ENSO phases, especially in the amount of precipitation received, contributing largely to summer drought in a La Niña year and summer flooding during El Niño [2].
This paper examines the effect of four teleconnections and sunspot number on the summer temperature and precipitation of Buenos Aires. The teleconnections include the Arctic Oscillation (AO), Pacific North American pattern (PNA), Antarctic Oscillation (AAO), and El Niño-Southern Oscillation (ENSO). AO is distinguished by pressure changes between the Arctic polar regions and areas south of the North Pole, causing changes in upper-level zonal winds [3]. In the positive phase of AO, the strong low pressure system in the Arctic keeps the cold air north, and in the negative phase of AO, the weak low pressure system in the Arctic lets the cold air spread south.
The PNA relates to the atmospheric circulation pattern over the North Pacific Ocean and the North American continent, mostly affecting North American climate. The PNA index is based on the average of the standardized monthly 700 mb height anomalies for Hawaii, the Aleutian Islands, the southeastern United States and the Intermountain region of North America.
The AAO is measured by the Antarctic Oscillation Index, which is the difference in pressure between 40˚S and 65˚S. AAO affects the areas between 20˚S and 90˚S and has a strong effect on the location of the midlatitude jet stream, affecting areas in the southern hemisphere [4]. Just as in the northern hemisphere, there is an opposition between the Azores high and the high pressure belt over Argentina, although a lack of southern data has hindered research into further oscillations [4] [5]. In the positive phase of AAO, the southern hemisphere subtropical jet stream moves south, causing warm and dry conditions at the lower latitudes closer to the equator and cold and wet conditions at the higher latitudes closer to the South Pole. The negative phase of AAO is the opposite.
ENSO is a large-scale interaction between the ocean and the atmosphere. It is measured using the Southern Oscillation Index (SOI), where large negative SOI values represent El Niño (ocean warming) and large positive SOI values represent La Niña (ocean cooling) [6] [7]. SOI is calculated from the pressure difference between Tahiti and Darwin. ENSO affects South America by transferring the Peruvian conditions across the continent to Buenos Aires, from the Pacific to the Atlantic coast [8].
Sunspots are large magnetic storms that appear as cool, dark spots on the sun. They are surrounded by plages, or faculae, known as bright spots. Sunspots lead to an increase in the receipt of solar radiation, and there is evidence that sunspots can cause a potential increase in temperature [9].
This work is important because it explains how teleconnections that occur around the world affect Buenos Aires' climate. It also explains why some teleconnections affect only temperature and not the precipitation patterns, and vice versa. There is some work on regions around Buenos Aires, for example Chile, Las Pampas, southern Brazil, Peru and the Andes Mountains, regarding temperature and precipitation patterns, but little work is dedicated to the effect of teleconnections on Buenos Aires, Argentina, and how the ENSO phases influence the production and export of wheat and corn. Section 2 describes the temperature and precipitation results and the impact of ENSO on wheat and corn production and export, and Section 3 presents the summary and conclusions.
Temperature
Buenos Aires has experienced a warming trend over the past century and a half. This increasing temperature trend is linked with more frequent occurrences of warm days and nights and a decrease in cold days and nights [10] [11]. Figure 1 shows that the average annual temperature of Buenos Aires has increased by about 1.8˚C over the past 156 years. There is year-to-year variation in the temperature of Buenos Aires, ranging from a low of approximately 14.8˚C to a high of approximately 18.5˚C, superimposed on the strong warming trend. The lowest average summer temperature is 21˚C and the highest is 25.4˚C. Despite the increasing trend, there is a dramatic drop in temperature in 2008, which was a strong La Niña year. During a La Niña year, the western coast of South America experiences higher than average air pressure, causing colder temperatures. This colder air travels up and over the Andes Mountains and across the South American continent into Buenos Aires, creating a higher than average pressure system there and causing the average temperature to decrease to approximately 14.8˚C in 2008.
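The warming figure quoted above can in principle be reproduced with an ordinary least-squares trend fit of annual mean temperature against year; a minimal sketch, assuming the annual series has been extracted to a plain two-column text file (the file name is illustrative):

```python
import numpy as np

# Hypothetical two-column file: year, annual mean temperature (deg C).
years, temps = np.loadtxt("ba_annual_temp.txt", unpack=True)

slope, intercept = np.polyfit(years, temps, 1)   # deg C per year
total = slope * (years[-1] - years[0])
print(f"trend: {slope * 100:.2f} deg C per century, "
      f"{total:.2f} deg C over the record")
```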
Figure 2 shows the trend of the seasonal and annual 30-year-average temperature of Buenos Aires. The graph shows a strong increase in the average temperature from 1893 to 1982, which could be associated with the exchange of tropical and polar air masses that produces an increase and variability in southern hemisphere temperature, especially in the period 1953 to 1982 [12]. During the period 1983-2012, there was a decline in the average annual temperature in all seasons. The drop in temperature can be linked to the cold La Niña events that occurred in 1989, 1999, 2008, 2011 and 2012 (Figure 1).
The AO, PNA and sunspot number show the strongest correlations with the summer temperature of Buenos Aires. AO shows a strong positive correlation coefficient of 0.24 (Figure 3). The positive phase of AO is associated with high temperatures of about 25.4˚C, while during the negative phase Buenos Aires experiences lower temperatures of about 21.5˚C. During the positive phase of AO, the cold Arctic air and strong westerly winds aloft stay further north, creating warmer temperatures in the southern hemisphere. During the negative phase of AO, Buenos Aires experiences colder temperatures, which could be due to large clouds over the Atlantic Ocean [5] and colder air descending from the Arctic due to strong zonal winds [3].
PNA also shows a positive relationship with the summer temperature of Buenos Aires, with a correlation coefficient of 0.10 (Figure 4), meaning that positive events produce warm temperatures and negative events produce colder temperatures. PNA's slight warming effect could be influenced by El Niño, where warm episodes in the eastern Pacific could enhance the temperature of Buenos Aires.
AAO and ENSO showed weak to no correlations with the summer temperature of Buenos Aires. The correlation between AAO and temperature is 0.06 (Figure 5). This means that during the positive phase of AAO, Buenos Aires experiences warm temperatures as the subtropical jet moves south (55˚S), and during the negative phase of AAO, cooler temperatures are experienced because the subtropical jet moves north (40˚S), closer to the Buenos Aires area. The correlation between ENSO and temperature is −0.02 (Figure 6). The negative correlation means El Niño events produce warm temperatures while La Niña events produce cold temperatures. ENSO's weak correlation occurs because ENSO circulates in and affects the central Pacific Ocean, having very little effect on the Atlantic regions.
There is also a strong positive relationship between sunspot number and Buenos Aires' summer temperature; the two variables have a correlation coefficient of 0.21 (Figure 7). When there are fewer sunspots, approximately between 0 and 150, the temperature ranges from about 19.8˚C to 24.8˚C, and when there are more sunspots (between 150 and 250), the temperature tends to be higher, ranging from about 22.4˚C to 25.4˚C. This suggests that when there are more sunspots, there is an increase in solar radiation received at the surface (Figure 7).
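The correlation coefficients reported in this section are standard Pearson correlations between a seasonal index series and the DJF temperature (or, in the next section, precipitation) series; a minimal sketch, assuming the two series have already been aligned by summer season (the file names are illustrative):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical pre-aligned DJF series (one value per summer season).
ao_djf = np.loadtxt("ao_djf_index.txt")
temp_djf = np.loadtxt("ba_djf_temp.txt")

r, p = pearsonr(ao_djf, temp_djf)
print(f"r = {r:.2f} (p = {p:.3f})")   # the text reports r = 0.24 for AO
```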
Precipitation
Buenos Aires' precipitation has increased by about 300 mm over the past 151 years (Figure 8). The annual and summer average precipitation are 1020.11 mm and 283 mm, respectively. Figure 8 also shows a dramatic decrease in the total annual precipitation in 2008 (a La Niña year): the anomalously high pressure on the western coast of South America travels across the South American continent, creating a high pressure system that brings dry conditions to Buenos Aires.
Figure 9 depicts the frequency distribution of the total annual precipitation of Buenos Aires and shows that 81 of the 151 years of precipitation fall between 900 mm and 1299 mm. Penalba and Robledo [2] explain that these years could have been particularly wetter than the rest because of a probable daily precipitation rate greater than 0.1 mm/day.
The correlation between AO and precipitation shows a strong negative correlation, r = −0.20 (Figure 3). The positive phase of AO accounts for less precipitation, with a minimum amount of about 90 mm, while the negative phase correlates with more precipitation, up to about 800 mm. The low pressure system that forms storms in the Mediterranean region adds moisture to the warm summer air above the Atlantic Ocean and creates more precipitation during the negative phase of AO.
PNA has a strong positive relationship with Buenos Aires' summer precipitation, with a correlation coefficient of 0.20 (Figure 4). The positive phase of PNA causes more precipitation and the negative phase causes less precipitation.
AAO also shows a strong negative correlation with the summer precipitation of Buenos Aires, with a correlation coefficient of −0.31 (Figure 5). The positive phase of AAO results in less precipitation and the negative phase in more precipitation. Increased precipitation during the negative phase of AAO is due to the abundant moisture brought by the northward movement of the subtropical jet stream, centered at about 40˚S. The positive phase of AAO exhibits the opposing moisture and precipitation effects due to the southward movement of the subtropical jet stream, centered at about 55˚S (IPCC, 2007).
ENSO shows a strong negative correlation with the summer precipitation of Buenos Aires; the correlation coefficient is −0.20 (Figure 6). Strong negative SOI values represent El Niño, with a large low pressure system over the western coast of South America, while strong positive SOI values (La Niña) correspond to a large high pressure system over the western coast of South America.
The Impact of ENSO on Wheat and Corn Production
The conditions that affect the western coast of South America during the ENSO phases are the same conditions that affect Buenos Aires. The low (high) pressure system transfers from the western coast of South America across the continent, mimicking the Peruvian conditions on the eastern slope toward Buenos Aires' South Atlantic coast, so that Peru and Buenos Aires experience the same conditions during ENSO phases [8] [13]. When the Peruvian region experiences an El Niño event (low pressure), an abundant amount of precipitation is received, as it is in Buenos Aires. The summer is extremely dry, causing drought, in a typical La Niña year, but extremely wet, causing soggy fields and floods, in a strong El Niño year [2], resulting in reduced wheat and corn production and export [14] [15]. In the 1997-1998 El Niño year, the total annual precipitation exceeded 1700 mm, and in the 2007-2008 La Niña year the precipitation decreased to 550 mm (Figure 8).
ENSO, Wheat Production and Exports
Buenos Aires, a province that accounts for 60% - 65% of the national wheat output, is especially prone to a severe drought season during La Niña [14]. Wheat is planted between May and June, harvested in December and usually exported between December and January. During 1997-1998, a soggy El Niño year, wheat production decreased from 15,740 thousand metric tons (MT) in 1996-1997 to 13,300 MT, and the wheat growth rate dropped 15.5% due to flooded fields in December, the key harvesting month for wheat [16]. In 1998, of the 13,300 MT produced, 10,000 MT were exported [14]. During 2007-2008, a dry La Niña year, wheat production dropped from 18,600 MT in 2006-2007 to 11,000 MT, and only about 6500 MT of wheat was exported [14]. The wheat growth rate dropped 40.86% due to the dry conditions, which damaged the wheat so that it could not be sold as grain; therefore, a loss of production and export was experienced during 2008-2009 [16].
ENSO, Corn Production and Exports
The United States accounts for nearly 60% of global corn exports, and Argentina is second with a 15% share of world trade [15]. Corn is planted between October and November, harvested between March and April and usually exported at the end of April and into May (G. Alonso-Murray, private communication, 2013). During 1997-1998, corn production decreased from 19,361 MT in 1996-1997 to 13,504 MT, and the corn growth rate dropped 30.25% [15]. Of the 13,504 MT of corn produced, only about 8000 MT was exported. Corn production dropped due to wetter than normal fields through the months of December to February. During 2007-2008, a dry La Niña year, the low production of 15,500 MT resulted in a drop of approximately 11,000 MT in corn exports, and the corn growth rate decreased by 30% [15]. During the 2007-2008 drought, nearly 40% of the harvested corn was damaged and lost as the drought grew more intense in the summer from December to February; the damaged corn was cut for silage and was not sold [15]. The Argentine national corn yield fell 14% below trend in 2007-2008. More recently, during 2009-2010, a moderate El Niño year, Argentina's total precipitation increased to 1000 mm (Figure 8) and corn production was 23,600 MT [15]. In 2011-2012, a weak La Niña year, the total annual precipitation dropped to about 750 mm (Figure 8), causing corn production to fall to 21,000 MT [15].
Summary and Conclusion
Buenos Aires has experienced a warming of about 1.8˚C in its average annual temperature during the period 1856-2012. This warming can be associated with more frequent occurrences of warm days and nights and a decrease in cold days and nights. Another explanation for this warming trend could be the cold Arctic air staying in the north, owing to strong pressure differences between the Arctic polar regions and the central Atlantic as well as strong westerly winds aloft, creating warmer temperatures. The unusually cool period in 2007-2008 occurred due to the strong La Niña year, causing major wheat and corn economic losses.
The AO, PNA, AAO and sunspot number were shown to have major effects on Buenos Aires' summer temperature. The AO, PNA and AAO all showed high temperatures in the positive phase and low temperatures in the negative phase, owing to their positive correlation coefficients with temperature. Sunspot number showed that with fewer sunspots the temperature is lower, ranging from about 19.8˚C to 24.8˚C, and with more sunspots the temperature is higher, ranging from about 22.4˚C to 25.4˚C.
Precipitation in Buenos Aires has increased by about 150 mm to 300 mm during 1861-2012. The AO, PNA, AAO and ENSO were also shown to have major effects on the summer precipitation of Buenos Aires. The AO accounts for drier summer conditions in its positive phase and wetter summer conditions in its negative phase. The PNA showed an increase in precipitation during the positive phase, causing wet summer conditions, and a decrease during the negative phase, causing dry summers. Unlike PNA, AAO showed that its positive phase resulted in less precipitation, causing dry summer weather, and its negative phase in more precipitation, causing moist summer weather. Finally, the ENSO phases cause Buenos Aires to experience extremely wet summer weather during El Niño (negative SOI values) and extremely dry summer weather during La Niña (positive SOI values). Both of these conditions have a major impact on the wheat and corn production and export of Buenos Aires.
The teleconnections discussed show how the climate changes from warmer to colder temperatures and from wetter to drier conditions. Buenos Aires' climate change is based on the various phases of the climate modes affecting temperature and precipitation. Teleconnections have large effects on Buenos Aires' summer temperature and precipitation, which showed an increase in average temperature and total precipitation over the past century and a half. A change in the Earth's climate as a whole over the next 50 - 100 years will affect the return periods and magnitudes of the teleconnections discussed and will therefore change the climate and the ability of Buenos Aires' crops to maintain successful wheat and corn production and exports to nurture the world. Further research is needed to understand the relationship between the teleconnections and climate change, in order to predict the future of the wheat and corn production and exports of Buenos Aires and of other parts of the world.
"year": 2014,
"sha1": "d1d604049337a36af87d9c213d28516c8e03bd36",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=47076",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "d1d604049337a36af87d9c213d28516c8e03bd36",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
Pathophysiological Correlation between Cigarette Smoking and Amyotrophic Lateral Sclerosis
Cigarette smoke (CS) has been consistently demonstrated to be an environmental risk factor for amyotrophic lateral sclerosis (ALS), although the molecular pathogenic mechanisms involved are yet to be elucidated. Here, we propose different mechanisms by which CS exposure can cause sporadic ALS pathogenesis. Oxidative stress and neuroinflammation are widely implicated in ALS pathogenesis, with blood-spinal cord barrier disruption also recognised to be involved in the disease process. In addition, immunometabolic, epigenetic and microbiome alterations have been implicated in ALS recently. Identification of the underlying pathophysiological mechanisms that underpin CS-associated ALS will drive future research into new targets for treatment.
Background
Amyotrophic lateral sclerosis (ALS) is a fatal neurodegenerative disease affecting motor neurons in both the cerebral motor cortex and spinal cord, causing progressive paralysis and premature death typically within 3-5 years from the time of diagnosis. Clinical presentation varies, more commonly involving spinal-onset disease (~70%) characterised by limb muscle weakness, or more rarely bulbar-onset disease (~30%) involving disorders of speech [1,2]. The global prevalence of ALS has been estimated to be >220,000, with future predictions reaching 375,000 by 2040 owing to the aging population. These projections are likely underestimated due to improvements in healthcare and economic conditions, particularly among developing nations [3]. Around 90% of all diagnoses are sporadic ALS (sALS), while genetically linked familial ALS (fALS) is responsible for 5-10% of cases [4]. Much of our knowledge of ALS pathophysiology has derived from different cell and animal models carrying gene mutations linked to fALS, such as superoxide dismutase 1 (SOD1), chromosome 9 open reading frame 72 (C9orf72), Fused in Sarcoma (FUS), and TAR DNA-binding protein 43 (TDP-43) [5]. Despite numerous hypotheses being proposed regarding sALS pathogenesis [6], the exact cause remains uncertain. Available treatments for ALS have limited efficacy and therapeutic methods are largely restricted to symptomatic and palliative care [7]. Riluzole, an anti-glutamatergic agent, is a Therapeutic Goods Administration (TGA) approved drug for ALS in Australia that extends median survival by 2-3 months due to delaying respiratory dysfunction [8,9]. Recently, the antioxidant edaravone was approved for the treatment of ALS in the United States (US), although phase III studies revealed limited efficacy in the wider ALS population [10]. It is now well-recognized that ALS is a multifactorial disease, involving a complex interplay between genetic and environmental factors. Indeed, the actual disease process starts long before clinical symptoms manifest.
Cigarette Smoke and ALS
Current evidence on genetic and environmental risk factors associated with the onset of sALS is limited to observational and population studies. Among the many proposed risk factors for ALS, it has been suggested that cigarette smoking may be considered an established risk factor [11], despite conflicting epidemiological evidence. While several population-based and cohort studies report smokers having an increased risk of ALS, greater ALS-related mortality and greater disease severity [12][13][14][15], others found weak or no association between smoking and ALS risk (Table 1) [16,17]. These inconsistent results may be explained in part by general limitations of observational epidemiological studies [11] as well as by small sample sizes and survival, recall and selection bias. In particular, recall bias becomes significantly problematic when smoking history is quantified using "pack years", and further inconsistencies arise when considering that different cigarettes contain varying concentrations of toxicants and that their delivery is not consistent across all products [18][19][20].
Recently, a large case-control study as part of the Euro-MOTOR project concluded that smokers' pack year history was positively associated with increased sALS risk [21]. An investigation of monozygotic twins with the C9orf72 mutation, a gene strongly associated with both familial and sALS [22], revealed that despite having similar environmental exposures, the ALS-affected twin had a 10-year smoking history and a prior instance of head trauma, while the other twin remained asymptomatic [23].

Table 1. Epidemiological studies of cigarette smoking and ALS risk.
- Female ever smokers (a) were associated with an increased risk of ALS, but men were not; smoking was a predictor of ALS-related mortality in women but not in men.
- Wang et al., 2011 [13] (pooled analysis of 5 prospective cohorts; n = 832): Ever smokers had an increased risk of developing ALS compared to never smokers; the mean number of daily cigarettes smoked and smoking duration were positively associated with ALS; a younger age of smoking initiation was associated with a higher risk of ALS.
- Smoking pack years were positively associated with ALS risk compared to never smokers; the increased risk was associated more strongly with smoking duration than with smoking intensity; an inverse relationship was observed between ALS risk and time since quitting smoking.
- Opie-Martin et al., 2020 [28] (retrospective case-control study; n = 202): Weak association between current smoking and risk of ALS.
- Opie-Martin et al., 2020 [29] (Mendelian randomisation population-based study; n = 20,806): No strong evidence of an association between ever smoking and ALS risk.

(a) Ever smokers defined as both former and current cigarette smokers.
The negative influence of cigarette smoke (CS) on health is well-established [30]. Several studies have revealed the association of CS and fine particulate air pollution with neurodegenerative disorders including Alzheimer's disease, dementia, Parkinson's and multiple sclerosis [31][32][33]. Indeed, this is supported by a recent study demonstrating CS to cause greater biomarker expression of oxidation, neurodegeneration and neuroinflammation in cerebrospinal fluid (CSF) [34]. CS contains >100,000 toxic compounds that may be inhaled directly (active smoking) or indirectly (passive smoking) [35]. Environmental case-control studies reveal that exposure to hazardous air pollutants, many of which are present in CS [36], increases the risk of sALS [37,38]. Analysis of CS contents demonstrates significant concentrations of heavy metals [39], which are associated with greater sALS risk [40]. Indeed, formaldehyde, another hazardous compound found in CS, has also been proposed to be associated with sALS in a dose-response relationship [12]. While the exact nature of the relationship between CS and ALS is unknown, many factors, including oxidative stress and neuroinflammation, have been suggested as putative risk factors [41]. Recent reports also suggest several new mechanisms that might be in play. Here, we describe different possible mechanisms (direct and indirect pathways) by which CS can affect ALS onset and progression.
Oxidative Stress
Toxic compounds in CS have been shown to cause tissue damage through hypoxia and oxidative stress in the lungs [42] and other distal tissues [43,44]. It is likely that continuous CS exposure has similar effects on motor neurons and surrounding cells in the spinal cord, contributing to ALS pathogenesis. CS-mediated oxidative stress can also aggravate the onset and progression of symptoms in sALS by augmenting neurodegeneration. Reactive oxygen species (ROS) generated in response to CS have been shown to induce free radical damage in the form of lipid peroxidation of membrane unsaturated lipids, DNA strand breakage, protein oxidation, mitochondrial depolarisation, RNA oxidation and cell apoptosis, which are also some of the well-established markers of ALS [45][46][47].
Currently, oxidative stress is one of the most widely accepted hypotheses for ALS pathogenesis. Oxidative stress occurs when there is an imbalance between ROS production and elimination via protective anti-oxidant factors [48]. The proposed role of oxidative stress in ALS pathogenesis involves direct damage to motor neurons as well as the surrounding microenvironment [49]. Additional mechanisms of oxidative stress-induced neurodegeneration include the promotion of mitochondrial dysfunction, protein aggregation, and endoplasmic reticulum stress responses [50]. Indeed, as oxidative stress contributes to abnormal RNA processing, transcriptional dysregulation, impaired axonal transport and protein misfolding, it has been proposed as a central mechanism in the proteostasis disruption observed in ALS [51]. Growing evidence suggests the causes of proteome dyshomeostasis involving imbalances in protein production and degradation may be the unifying mechanism in ALS pathogenesis [52]. Notably, Ash and colleagues [53] demonstrate that aryl hydrocarbon receptor transcription factor agonists, including several toxicant by-products of CS, increase levels of TAR DNA Binding Protein-43 (TDP-43). TDP-43 is a consistent feature in sALS cases and plays a key role in the pathological progression of ALS, including driving neurodegeneration [54].
Analyses conducted on sALS patients repeatedly demonstrate oxidative stress biomarkers present in CSF, urine and plasma [55]. Durazzo and colleagues [32] reveal greater biomarker expression of free radical-induced oxidative stress in smokers' CSF samples compared to non-smokers. Additional studies on ALS also demonstrate alterations in markers of oxidative stress, such as glutathione and nuclear factor E2-related factor-2 [56][57][58]. Furthermore, post-mortem results in both sALS and fALS patients reveal DNA, lipid and protein damage secondary to oxidative stress [59][60][61][62][63]. Together, this evidence reinforces the likelihood that CS-mediated oxidative stress is a key driver of ALS pathogenesis.
Neuroinflammation
CS-induced oxidative stress can lead to inflammation in neurons and surrounding cells, which is another mediator of ALS pathogenesis. Neuroinflammation generates additional ROS which can further promote oxidative stress. Thus, smokers experience an increased and continual free radical load from both CS and subsequent inflammatory responses. Prolonged exposure to CS causes a combination of proinflammatory and immunosuppressive processes which impair humoral and cell-mediated immune responses [64]. It is likely that repeated CS exposure promotes a neuroinflammatory environment, facilitating ALS neurodegeneration. While this mechanism may involve the direct invasion of proinflammatory cells into the spinal cord, other intrinsic neuroinflammatory inducers such as activated glial cells may be contributing.
Neuroinflammation is characterised by microglial and astrocyte activation along with inflammatory cytokine overproduction [65,66]. Evidence of activated microglia and lymphocytic infiltrates within the spinal cords of ALS patients suggests neuroinflammation occurs alongside motor neuron degeneration [67]. Analysis of CSF from ALS patients supports these results, revealing greater levels of inflammatory mediators such as interleukin (IL)-8 and eotaxin, responsible for inducing neutrophil and eosinophil chemotaxis and activation, respectively [68]. Analysis of astrocytes reveals that motor neuron-damaging inflammatory mediators are secreted in both familial and sporadic ALS patients [69,70]. Further studies on ALS patients demonstrate a systemic pro-inflammatory state and an impaired antioxidant system, with increased levels of proinflammatory cytokines such as IL-6 and IL-8 [71]. Similarly, multiple human, animal and in vitro studies have concluded that CS exposure increases systemic markers of inflammatory cytokines, such as IL-6, IL-8 and IL-1β, along with tumour necrosis factor-α (TNF-α) [72,73]. Together, these results strongly implicate CS-induced neuroinflammation in the pathogenesis of ALS.
Immunometabolism
Microglia are resident macrophage-like cells in the central nervous system (CNS) that act as immune sentinels and maintain CNS homeostasis [74]. They are very sensitive to physiological changes in the local environment and can become "activated" in response to an insult, during which their number, morphology and activity change, with the release of more inflammatory cytokines and chemokines. Evidence of activated microglia has been reported in the motor cortex and spinal cord of sALS patients [75][76][77]. Studies in sALS animal models have provided valuable insights into the involvement of microglia in ALS; however, their dual role in ALS has emerged. Although reactive microglia are thought to have neurotoxic effects, a recent report indicates that they might play a protective role in ALS [78]. It is possible that microglia play a protective role early in the disease and transition to a toxic state upon repeated insults from environmental factors, such as CS exposure. Indeed, carcinogens from CS have been shown to induce microglia activation and neuronal damage [79].
Microglia activation is associated with metabolic reprogramming, where the microglia switch from mitochondrial oxidative phosphorylation to glycolysis when activated [80]. Several pro-inflammatory factors and oxidative stress have been implicated in this phenomenon. CS exposure likely induces such immunometabolic changes in microglia during activation, via direct oxidative stress in these cells as well as by promoting a neuroinflammatory environment involving surrounding cells. At a systemic level, CS exposure has been linked to various metabolic syndromes, such as insulin resistance and abnormal lipoprotein metabolism [81]. At the cellular level, CS is known to induce the dysregulation of several metabolic pathways [82]. Thus, CS exposure can not only cause microglia activation but also likely keep them locked in an activated state, which can further enhance ALS disease progression.
Epigenetic Alterations
Recent reports indicate aberrant epigenetic modifications as another possible cause of ALS [83]. Epigenetics refers to heritable changes in gene expression without modification of the genome sequence. The three main epigenetic mechanisms are DNA methylation, microRNAs (miRNAs) and histone post-translational modifications [84]. In post-mortem sALS spinal cords, global alterations in DNA methylation and hydroxymethylation were observed [85]. Here, key genes that were either upregulated or downregulated were highly associated with immune and inflammatory responses. Similarly, dysregulation of miRNAs was observed in ALS patients, with concomitant dysregulation of mRNA targets that were previously implicated in ALS pathogenesis [86]. Furthermore, several histone modifications are associated with ALS [87]. Collectively, these studies indicate various possible roles of epigenetic changes in ALS pathogenesis.
CS accelerates epigenetic age and decreases life expectancy via DNA methylation changes that influence gene regulation and genome stability [88,89]. It is notable that, like many neurodegenerative diseases, aging is a major risk factor for ALS [90]. CS is also associated with unique miRNA signatures that cause systemic inflammation and immune cell activation [91]. Such inflammation is mediated by CS through histone modifications as well [92]. Given the strong links between CS and epigenetic modifications, it is likely that CS exposure causes ALS via the induction of pathologic epigenetic alterations in neuronal and surrounding cells.
Blood-Spinal Cord Barrier Dysfunction
The blood-spinal cord barrier (BSCB) maintains CNS homeostasis, providing a regulated microenvironment for cellular components of the spinal cord, and is often considered as a functional equivalent to the blood-brain barrier (BBB). Both the BSCB and BBB consist of neurovascular functional units, involving microvascular non-fenestrated endothelial cells (ECs) joined by cell-to-cell tight junction (TJ) proteins [93]. The BSCB is more permeable than the BBB, likely due to reduced expression of TJ proteins, exposing the spinal cord to greater amounts of potentially neurotoxic stimuli compared to the brain [94,95].
Over the last decade, there has been an accumulation of evidence demonstrating the BSCB to be involved in ALS aetiology and progression. Spinal cords of Sod1 transgenic mice reveal greater BSCB permeability to harmful blood-borne substances, microhaemorrhages, and reduced expression of TJ proteins and basement membrane components, along with decreased spinal cord blood flow and capillary length [96][97][98][99][100]. Importantly, many of these observed changes precede motor neuron degeneration and neuroinflammation, implicating BSCB impairment as the primary cause of subsequent neuronal damage. Additional analyses conducted on human post-mortem sALS and fALS spinal cord tissues support these results and also reveal severe intra/extra-cellular oedema, degenerated pericytes and astrocytic end-feet detached from ECs, along with reduced TJ protein mRNA expression [101][102][103]. Notably, spontaneous and/or accelerated BSCB breakdown in Sod1 mice directly contributes to early motor neuron injury, while removing BSCB-derived sources of neuronal injury and/or restoring BSCB integrity delays motor neuron degeneration onset [99].
CS has been demonstrated to impair BBB viability and function over time through inflammatory and oxidative processes [104][105][106]. Hence, given the physiological similarities between the BSCB and BBB, it can be postulated that the oxidative and inflammatory processes involved in barrier disruption may not be BBB-specific, and a similar pathological process of degeneration may be taking place in the BSCB. Even though little is currently known regarding the cause of BSCB dysfunction, oxidative stress has been suggested as a likely mechanism contributing to the pathophysiological cascade of BSCB impairment [107]. Thus, it is likely that CS induces oxidative stress and inflammatory processes that damage BSCB components, initiating primary BSCB disruption. This mechanism may involve the promotion of EC inflammation, modulation of EC activity and their interactions, reductions in TJ protein expression, basal lamina disruption and impaired astrocyte/pericyte-EC communication (Figure 1).
Cell Influx
BSCB disruption after CS exposure likely increases spinal cord permeability to circulating cells such as erythrocytes and pro-inflammatory innate/adaptive immune cells, including mast cells, macrophages, neutrophils and T lymphocytes, producing a neurotoxic CNS environment. These cells are further activated by CS, increasing their pro-inflammatory behaviour [108,109]. When progressive BSCB disruption begins, erythrocytes, given their small cell diameter of 8 µm [110], may begin the process of cellular infiltration into the spinal cord. Following extravasation and lysis, haemoglobin-derived heme degradation results in local increases in bilirubin, carbon monoxide and iron [111]. Haemoglobin-derived iron has a direct toxic effect on motor neurons via iron-dependent oxidative pathways [112] and has been continually implicated in the ALS disease process [97,99,111,113]. In states of neuroinflammation, immune cells such as iron-containing macrophages also migrate into the CNS and release iron, contributing to this free radical-mediated oxidative damage pathway, and may further facilitate neurotoxicity [113].
Additionally, leukocyte-mediated release of pro-inflammatory cytokines at the BSCB also stimulates astrocyte production of chemokines, which promote extravasation of leukocytes into the CNS [114]. Pro-inflammatory cells generate neuroinflammatory cytokines and subsequent ROS that increase motor neuron oxidative stress, exacerbating neurodegenerative mechanisms in ALS. Here, a degenerative cascade of oxidative damage probably ensues, whereby greater ROS production within the neuron then crosses the cell membrane to activate surrounding microglia, which, in turn, release more cytokines and ROS [50]. These greater levels of inflammatory cytokines produced by microglia and pro-inflammatory cells further enhance the neuroinflammatory processes of motor neuron degeneration. Given the susceptibility of the CNS to oxidative stress, its high rate of oxygen consumption and its relatively low concentration of antioxidants [55], we posit that homeostatic pro-oxidative and anti-oxidative mechanisms are shifted to promote a pro-oxidative neuroenvironment. Furthermore, these oxidative and neuroinflammatory processes associated with the neurotoxic environment may extend back to the BSCB, leading to secondary BSCB damage. This facilitates further permeability, leading to greater immune-cell migration and aiding the development and maintenance of a neurotoxic environment (Figure 2).

CS exposure increases both the recruitment and degranulation of mast cells in different tissues, including lungs and skin [108,115]. Similarly, CS can activate resident mast cells near the BSCB, which, along with activated mast cells from other tissues, can migrate into the spinal cord following BSCB disruption and cause neurodegeneration. In fact, mast cells have been implicated in both neuroinflammation and neurodegeneration through their ability to release inflammatory cytokines (TNF-α, IL-6), histamine and proteases [66]. These cytokines, along with the release of chemoattractants, result in the recruitment of other leukocytes such as neutrophils, further exacerbating the inflammatory environment. The release of cytokines and proteases from mast cells can degrade TJ proteins and extracellular matrix components to facilitate greater BSCB permeability. This is supported by the analysis of ALS patients, which revealed the presence of mast cells expressing IL-17 in spinal cord tissue [116] along with elevated serum and CSF levels of the pro-inflammatory cytokine IL-15, an NK cell chemoattractant [117]. Finally, CS can cause metabolic reprogramming in these cells, as described before in regard to microglia, and render them into a more pro-inflammatory state.
Figure 2. Blood-spinal cord barrier (BSCB) leak establishes a neurotoxic environment for motor neurons. Primary damage to BSCB components by cigarette smoke (CS) exposure increases the permeability to pro-inflammatory immune cells such as mast cells, macrophages and neutrophils along with red blood cells that give rise to iron-containing haemoglobin. This leads to the generation of ROS and neuroinflammatory cytokines that increase motor neuron oxidative stress and exacerbate other neurodegenerative mechanisms in ALS. These include protein aggregation, alterations in axonal transport, endoplasmic reticulum stress responses and mitochondrial dysfunction. These pro-inflammatory cytokines and ROS also activate surrounding microglia which, in turn, release cytokines and ROS, further contributing to neurodegeneration. The inflammatory cytokines produced by microglia and pro-inflammatory cells further increase neuroinflammatory processes of motor neuron degeneration. Additionally, secondary BSCB dysfunction may occur concurrently with primary BSCB dysfunction. Here, oxidative and neuroinflammatory processes associated with the neurotoxic environment may extend back to the BSCB, facilitating secondary damage. This augments permeability and increases immune-cell migration, aiding the development and maintenance of a neurotoxic environment.
Microbiome
The gut microbiota represents a key interface between the environment and the immune system, being a major site of toxin and intrinsic antigen production. Given its ability to influence myelination, the BBB, synaptic plasticity and neuronal transmission, gut microbiome imbalance (dysbiosis) clearly exerts a pathogenic influence on the CNS [118][119][120]. Recently, the gut microbiome has been implicated in the pathogenesis of multiple neurological disorders, including ALS [121][122][123][124]. Analysis of Sod1 transgenic mice reveals altered gut microbiota composition, along with exacerbations in motor abnormalities and significant reductions in motor neuron cell counts following microbiome depletion using broad-spectrum antibiotics [121]. Studies of ALS patients demonstrate gut microbiome variations, especially during the disease course, as well as an imbalance in potentially protective and neurotoxic/proinflammatory microbial groups [122,125]. Furthermore, a population-based case-control study of 2484 ALS patients revealed an association between antibiotic use, particularly repeated use, and increased ALS risk [126].
Active and passive smoking are known to impair immune system functioning through a range of immunosuppressive mechanisms and have been continually demonstrated to influence the microbiome. These oral, lung and gut microbiome modifications are associated with various diseases such as chronic obstructive pulmonary disease (COPD), asthma, ulcerative colitis and even cancers [127][128][129][130]. In particular, even after smoking cessation, many changes initiating gut dysbiosis persist for prolonged periods of time [129]. While multiple theories exist regarding smoking's impact on the microbiome, including immunosuppression and oxygen deprivation, the exact mechanisms involved are yet to be established. Importantly, the gut microbiome has been reported to play an important role in BBB permeability, serving as a mechanism that facilitates the direct and indirect transmission of microbial signals from the gut to the CNS [119,131]. Thus, it is possible that CS-induced microbiome modifications may play a key role in shifting the balance between protective and pathogenic immune responses that are responsible for BSCB disruption. This mechanism may involve gut microbiota modifications promoting pro-inflammatory immune cell-BSCB interactions, leading to the development of a neurotoxic environment associated with ALS pathogenesis.
Conclusions
CS is one of the major risk factors associated with sALS; however, the exact aetiology remains elusive. While CS may not be the only cause of ALS pathogenesis, it may aggravate or effectuate other predisposing factors such as genetic susceptibility or unknown environmental factors. Here, we propose several key mechanisms by which CS may contribute to ALS pathogenesis and pathophysiological progression. Neurodegeneration in ALS can be facilitated by CS via direct or indirect pathways. These pathways are likely not mutually exclusive and may indeed involve a complex interplay between one another. Direct mechanisms involve oxidative stress, neuroinflammation, immunometabolic changes and epigenetic alterations caused by toxic by-products in CS. Indirect mechanisms involve primary/secondary disruption of the BSCB due to CS and altered microbiomes, which may, in turn, promote immune cell influx and their activation. Both these proposed pathways create a pro-oxidative and pro-inflammatory CNS environment that enhances the pathological progression of ALS. Indeed, exposures to air pollution, e-cigarettes and bushfire smoke very likely also contribute to ALS pathogenesis via similar pathways [38,132].
The pathophysiology of ALS has been extensively studied in animals that carry genetic mutations linked to fALS [133]. Since the aetiology of sALS is not yet defined, much of the human analyses have been performed in post-mortem ALS patients. Given the established link between CS and ALS, we propose using animal models of CS exposure to investigate whether ALS symptoms appear in these animals. Several mouse models of CS exposure have recently been reported [108,134]. Although these models have been primarily used to assess lung function, it will be valuable to analyse their spinal cord as well. We should also take into account the age of the animals as it plays an important role in ALS onset. Thus, comparisons regarding the effects of CS in mice at different ages should be performed.
Finally, it will be interesting to test if CS exposure in fALS mouse models (SOD1-G93A, TDP43, FUS, C9orf72) accelerates disease onset and progression and aggravates symptoms. Establishing these new models of environmental risk factor-mediated ALS will certainly augment and improve our knowledge of the disease process.
From a therapeutic perspective, antioxidant and anti-inflammatory treatments are continually being explored to mitigate the oxidative and neuroinflammatory degenerative effects present in ALS. However, there is insufficient evidence of the efficacy of these treatments in ALS patients [135]. Therefore, new targets and strategies need to be explored to develop more effective ALS therapies. One such evolving target is immune cells. Notably, Masitinib, a tyrosine kinase inhibitor modulating neuroinflammatory processes of multiple immune cells, slows the progression of ALS in combination with riluzole [136]. Considering our proposed hypotheses, new therapeutic strategies may arise in relation to (a) targeting metabolic pathways in immune cells to inhibit activation/inflammation, (b) epigenetic therapy, (c) maintaining BSCB integrity such as limiting mast cell-mediated disruption of the BSCB, and (d) altering the microbiome using antibiotics, probiotics or faecal matter transplantation. We suggest that future studies for sALS treatment should be conducted using CS-exposed animal models. Future studies should also examine the relationship between bushfire exposure, air pollution and vaping with ALS risk and uncover underlying pathophysiological relationships. | 2021-05-21T16:56:30.134Z | 2021-04-20T00:00:00.000 | {
"year": 2021,
"sha1": "e1a6e33ee7025de119bcc66f9207b668c842d453",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2673-4087/2/2/8/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "b73046ccdc951126f67c0675da21f369bf7e3308",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
59408498 | pes2o/s2orc | v3-fos-license | Phytoremediation of Arsenic Contaminated Water
The present investigation deals with the detoxification of arsenic contaminated water using a phytoremediation technique. Three macrophytes, Azolla pinnata, Lemna minor, and Hydrilla verticillata, were exposed to 1.0 ppm of an arsenic salt (sodium arsenite) separately as well as in combination (ALH) for 10 days. The concentration of arsenic in control (wild) macrophytes was below the detectable limit. Following exposure, the concentration of arsenic increased steadily in all the plants, and after 10 days, the efficacy of arsenic depletion in the phytoremediated media was in the order: A. pinnata (88.06%) > L. minor (82.56%) > H. verticillata (77.53%), with 85.50% achieved when the plants were applied in combination (ALH). It was found that A. pinnata can detoxify arsenic contaminated water most efficiently.
Introduction
In recent years, the areas with arsenic contamination in ground as well as surface waters have been enlarging rapidly in India and its neighboring countries [1][2][3]. Arsenic is distributed in nature largely in the form of the metalloid or its chemical compounds, which cause a variety of pathogenic conditions as well as cutaneous and visceral malignancies [4]. Arsenic shows toxicity even at low exposures [5] and causes black foot disease [6]. It poses a great challenge to environmental biologists as well as toxicologists. Cost-effective technologies are needed to eliminate it from contaminated water. Phytoremediation is a novel, cost-effective and eco-friendly bioremediation technology for environmental cleanup. Bioremediation using macrophytes has been a successful tool for detoxifying metal contamination in a variety of polluted effluents [7,8]. In the present evaluation, the arsenic removal competencies of three widely distributed aquatic macrophytes (Azolla pinnata, Lemna minor, and Hydrilla verticillata) were assessed in arsenic contaminated water.
Materials and methods
The experimental aquatic plants (A. pinnata, L. minor, and H. verticillata) were collected from the Agrofarm pond of the Banaras Hindu University, Varanasi, India. For experimentation, a monoculture of each plant was prepared at ambient laboratory temperature under natural photoperiods. Prior to phytoremediation, the plants were rinsed gently with tap water (dissolved O2 6.3 mg L−1, pH 7.2, water hardness 23.2 mg L−1, room temperature 28 ± 3°C, arsenic concentration below the detectable limit) to remove debris. They were subsequently acclimated in tap water for a total period of 2 weeks prior to their use in the decontamination experiments. The test solution was prepared by adding 1.0 mg of arsenic to 1.0 L of tap water. This concentration, 5% of the LC50 value for Clarias batrachus [9] (i.e., an LC50 of 20 mg L−1), was used for toxicity analysis of the arsenic contamination using the fish bioassay technique. Ten grams (fresh weight) of each macrophyte was transferred to 10 L of the media. For controls, the same quantities of each of the plants were put into separate aquaria containing plain tap water. The growth of each macrophyte was also evaluated. For each sampling period, separate experimental and control setups were established.
The percentage removal efficiency of arsenic by the aquatic macrophytes was calculated (Table 1) using the following formula:

Efficiency (%) = [(C1 − C2)/C1] × 100,

where C1 is the initial concentration of arsenic in the media, and C2 is the final concentration of arsenic in the media. The bioaccumulation coefficient is defined as the ratio of the concentration of arsenic in the plant to the concentration of residual arsenic in the medium where the plants are growing [10]. It was calculated as follows:

Bioaccumulation coefficient = (arsenic concentration in plant tissue)/(residual arsenic concentration in the medium).
The average bioaccumulation coefficients for the aquatic plants tested here are illustrated in Figure 1.
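As a minimal worked example of the two formulas above, the short Python sketch below computes the removal efficiency and the bioaccumulation coefficient using the 10-day values reported later in this chapter; the dictionary layout and variable names are ours, not part of the original study.

```python
# Removal efficiency and bioaccumulation coefficient, as defined above.
C1 = 1.0  # initial arsenic concentration of the test solution (mg/L)

# Residual arsenic in the medium after 10 days (mg/L, from the Results).
residual = {"A. pinnata": 0.1193, "L. minor": 0.1749,
            "H. verticillata": 0.2246, "ALH (combined)": 0.145}

# Arsenic accumulated in plant tissue after 10 days (as quoted from Table 4).
tissue = {"A. pinnata": 0.88, "L. minor": 0.82,
          "H. verticillata": 0.775, "ALH (combined)": 0.85}

for name, c2 in residual.items():
    efficiency = (C1 - c2) / C1 * 100   # percent arsenic removed
    bio_coeff = tissue[name] / c2       # plant conc. / residual medium conc.
    print(f"{name}: efficiency = {efficiency:.2f}%, "
          f"bioaccumulation coefficient = {bio_coeff:.2f}")
```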
Magnesium ions were analyzed with a flame photometer. For chlorophyll estimation, the total chlorophyll was extracted in 80% chilled acetone using Arnon's method [11] prior to measurement with a spectrophotometer. The protein content was estimated following the method of Lowry et al. [12]. Test water was bioremediated with the macrophytes for 10 days. About 1.0 g of each of the plants was collected from the experimental setups at different periods (0, 1st, 2nd, 3rd, 4th, 5th, 6th, 7th, and 10th days). For the arsenic analysis, the harvested plants were dried at 80°C. The dried plant materials were digested with an acid mixture of HNO3:HClO4 (5:1 v/v) on a hot plate until a clear solution was obtained. Double distilled water was added to make up the volume to 5 mL. A graphite furnace atomic absorption spectrophotometer (Perkin-Elmer 2380) was used to analyze the arsenic accumulation. All results for each plant setup were expressed as means ± standard error of the mean. Treatment effects were determined by analysis of variance using the general linear model procedure of the standard statistical analysis system, followed by Dunnett's t-test at a 5% probability level for post hoc comparisons to separate treatment differences.
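A minimal sketch of this statistical treatment is given below (Python/SciPy): group means ± SEM, a one-way ANOVA for an overall treatment effect, and Dunnett's test against the control. The replicate values are hypothetical placeholders, not data from this study, and the Dunnett call assumes SciPy >= 1.11.

```python
import numpy as np
from scipy import stats

# Hypothetical triplicate measurements (e.g., chlorophyll, mg/g fresh wt.).
control = np.array([2.10, 2.05, 2.15])   # day 0 (unexposed)
day_5 = np.array([1.80, 1.75, 1.85])
day_10 = np.array([1.20, 1.15, 1.25])

for name, group in [("control", control), ("day 5", day_5), ("day 10", day_10)]:
    print(f"{name}: mean = {group.mean():.2f}, SEM = {stats.sem(group):.3f}")

# One-way ANOVA for an overall treatment effect (significance at p < 0.05).
f_stat, p_value = stats.f_oneway(control, day_5, day_10)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Dunnett's test comparing each exposed group with the control
# (scipy.stats.dunnett is available from SciPy 1.11 onward).
dunnett_result = stats.dunnett(day_5, day_10, control=control)
print("Dunnett p-values vs. control:", dunnett_result.pvalue)
```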
Results and discussion
Following exposure to the arsenic solution, the macrophytes did not exhibit significant morphological alteration for up to 4 days. Thereafter, marked deterioration in the physical appearance of all three plants was noted in the four experimental setups. The deterioration of plant health became more extensive after 10 days. However, the condition of A. pinnata deteriorated early and became obvious after 7 days. The nature of the deterioration in these plants was more or less identical whether they were treated separately or in combination. The concentration of magnesium ions (Mg2+) did not show much alteration in any of the three plants except after 10 days, when the alteration was statistically significant. The chlorophyll content of all three plants showed a marked decline following phytoremediation applied singly or with the combination of the three macrophytes (Table 2; the criterion for significant differences was set at p < 0.05, with * indicating significant differences compared with 1-day-exposed controls and a indicating comparison of each exposed group with the immediately preceding exposed group). A reduction in chlorophyll content has been observed in variously phytoremediated plants [13]. Indeed, it has been confirmed that the absorption of heavy metals produces phytotoxic effects on plants, resulting in inhibition of chlorophyll synthesis and biomass production that often leads to death [14,15]. Other reasons for the depletion of chlorophyll content may be impaired uptake of essential elements, damaged photosynthetic components, or increased chlorophyllase activity causing chlorophyll degradation [16]. The protein concentration of these plants showed progressive depletion, which was statistically significant after 10 days in all the experimental setups.
Other toxic metals have also caused reduction in plant protein contents [17].
During phytoremediation, the concentration of arsenic in the media decreased progressively; after 10 days, the residual arsenic concentrations were 11.93, 17.49, and 22.46% of the initial value in the A. pinnata, L. minor, and H. verticillata setups, respectively. The residual arsenic concentration was 14.5% when the medium was phytoremediated jointly by all three macrophytes (Table 1).
This increase in tissue arsenic may be partially due to the progressive accumulation of arsenic by these plants. The increase became insignificant from 7 days of exposure onward. All of these plants continued to exhibit detectable arsenic concentrations (Table 4); however, after 14 days, they had decayed extensively. The bioaccumulation coefficients for the aquatic plants tested in this experiment are shown in Figure 1, which illustrates the differences in arsenic accumulation among the macrophyte species. The decrease in the amount of arsenic in the media was due to bioaccumulation of this metalloid by the macrophytes, as reflected by the presence of the metal in the plant tissues (A. pinnata accumulated 0.88 mg, L. minor 0.82 mg, and H. verticillata 0.775 mg per kg dry wt. of biomass, with 0.85 mg accumulated by the combination of all three macrophytes) (Table 4). The arsenic concentration was below the detectable limit in all three untreated (control) plants. The sum of the arsenic detected in the media after phytoremediation and the arsenic absorbed by the plant tissues was quite close to 1.0 mg L−1, the initial concentration of the test solution (Figures 2 and 3).
Bharti and Banerjee [13] observed a certain degree of difference between the sum of the amounts of metals left behind in phytoremediated coal mine effluent and the metals accumulated in the plant tissues after phytoremediation. This was due to sedimentation, adsorption to clay particles and organic matter, co-precipitation with secondary minerals, cation-anion exchange, and complexation [7,18]. In the present case, no such difference was noticed because, unlike coal mine effluent, the nature and concentration of contaminants in the arsenic solution were simple. A survey of Figure 2 clearly shows that the uptake of arsenic by all the macrophytes is time-dependent up to 10 days of treatment. Figure 3 illustrates the relationship between arsenic uptake by the plants and its depletion from the contaminated medium. All three macrophytes are useful for the decontamination of arsenic. This study suggests that A. pinnata is the most efficient plant and can be used singly for this purpose. Phytoremediation beyond 10 days with the same plants will not yield additional benefits.
Author details
Randhir Kumar* and Tarun Kumar Banerjee *Address all correspondence to: randhir18bhu@gmail.com Department of Zoology, Eco-physiology Unit, Banaras Hindu University, Varanasi, India | 2018-12-28T06:04:38.381Z | 2018-06-27T00:00:00.000 | {
"year": 2018,
"sha1": "ea9924e4ff5b39a2c82edb1a4dfec141a40da880",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/57972",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "8bcb1d4fbcb5c2d677af83fa708e6d99da901d8f",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
29586760 | pes2o/s2orc | v3-fos-license | Differential Interactions of Id Proteins with Basic-Helix-Loop-Helix Transcription Factors*
Dimerization of three Id proteins (Id1, Id2, and Id3) with the four class A E proteins (E12, E47, E2-2, and HEB) and two groups of class B proteins, the myogenic regulatory factors (MRFs: MyoD, myogenin, Myf-5 and MRF4/Myf-6), and the hematopoietic factors (Scl/Tal-1, Tal-2, and Lyl-1) were tested in a quantitative yeast 2-hybrid assay. All three Ids bound with high affinity to E proteins, but a much broader range of interactions was observed between Ids and the class B factors. Id1 and Id2 interacted strongly with MyoD and Myf-5 and weakly with myogenin and MRF4/Myf-6, whereas Id3 interacted weakly with all four MRFs. Similar specificities were observed in co-immunoprecipitation and mammalian 2-hybrid analyses. No interactions were found between the Ids and any of the hematopoietic factors. Each Id was able to disrupt the ability of E protein-MyoD complexes to transactivate from a muscle creatine kinase reporter construct in vivo. Finally, mutagenesis experiments showed that the differences between Id1 and Id3 binding map to three amino acids in the first helix and to a small cluster of upstream residues. The Id proteins thus display a signature range of interactions with all of their potential dimerization partners and may play a role in myogenesis which is distinct from that in hematopoiesis.
An increasingly important role is ascribed to protein-protein interactions in the regulation of cellular growth and differentiation pathways. Dimerization serves to convert inactive monomeric molecules into transcriptionally active dimeric complexes at specific times during cellular development. Deletional analysis has identified a number of evolutionarily conserved regions that mediate these interactions. One such region, commonly associated with transcription factors involved in a range of proliferative and differentiation pathways, is the basic-helix-loop-helix (bHLH)1 (1). This domain is conserved from yeast to mammals and is composed of a positively charged basic region followed by two amphipathic α-helices separated by a spacer loop. Dimers are stabilized by a series of hydrophobic and electrostatic interactions between the helices of compatible molecules (2)(3)(4)(5). The juxtaposition of two basic regions resulting from dimerization forms a DNA binding interface able to insert into the major groove in a sequence-specific manner (2,6). Although bHLH proteins have no discernible DNA binding activity as monomers, dimers recognize a consensus DNA sequence (CANNTG), termed the E box (4,(7)(8)(9).
bHLH transcription factors can be broadly placed into two categories (reviewed in Ref. 10). The class A factors, or E proteins, (E2-2, HEB, and the E2A gene products E12 and E47) are expressed in a virtually ubiquitous pattern and are able to dimerize efficiently with tissue-restricted class B factors to activate gene expression (1,(11)(12)(13). Because each factor contributes a specific DNA recognition half-site, class A and class B heterodimeric complexes and class A hetero-or homodimers theoretically provide distinct combinatorial E box binding specificities (7). Class B members thus far characterized include the myogenic regulatory factors (MRFs) involved in skeletal muscle development (MyoD, myogenin, Myf-5, and MRF4/Myf-6) and the hematopoietic factors (Scl/Tal-1, Tal-2, and Lyl-1) (14 -21). Mammalian homologues of the Drosophila bHLH achaete scute genes (mash1 and -2) have been implicated in neuronal development as has neurod (beta2) which also has a role in insulin regulation (22)(23)(24). A range of other class B bHLH proteins have been associated with early mesoderm formation and later muscle development (Twist), adipocyte development (Add1), and skeleton formation (Scl-1) (25)(26)(27). E box motifs have been identified in the enhancer elements of a number of bHLH factor-regulated genes such as myosin heavy chain, immunoglobulins, and chymotrypsin (10).
The formation of active class A-class B complexes is modulated by the Id (inhibitor of DNA binding) family members. The four Id proteins identified thus far have an HLH domain that lacks the amino-terminal associated basic region necessary for DNA binding (28 -33). Id proteins act to sequester class A factors, inhibiting the formation of active class A-class B heterodimers and are therefore considered to act as dominant negative regulators of differentiation pathways (28,34,35). Ids are expressed in a largely overlapping but distinct fashion during development, with the highest levels generally being achieved during embryogenesis (33). Significant Id levels persist in a range of actively proliferating tissues and in some tumor cell lines (28,33). Rapid Id down-regulation has been reported in myoblasts and hematopoietic cells during terminal differentiation, consistent with their negative regulatory role (28,34). Indeed, forced Id expression has been shown to inhibit the differentiation of each of these cell types (36 -40). Because considerable overlap exists in the expression patterns of Id proteins, redundancy in their function has been inferred (30,33).
Ids are known to bind avidly to class A factors such as E47, weakly to the myogenic factors, and poorly, if at all, to the hematopoietic factors (28,34,41). This has led to the assumption that transcriptional control is exerted primarily at the level of Id-class A interactions. However, these studies were performed with select members of the class A and class B families. Thus far, no exhaustive studies exist to compare the relative strengths of interactions among a broader range of family members.
The yeast 2-hybrid system has become an increasingly popular method for assessing protein-protein interactions and has been employed previously to study a subset of bHLH interactions (35,(42)(43)(44). We sought to develop a quantitative yeast 2-hybrid assay to investigate interactions of three Id proteins with a range of class A and class B factor targets. Our observations, confirmed and extended by co-immunoprecipitation (IP) of in vitro translated proteins and mammalian 2-hybrid analyses, indicate that discrete and reproducible differences exist in relative binding preferences among the Id proteins. As expected, Id proteins bound avidly to all the class A factors tested, although a range of affinities were apparent. A broader range of affinities for myogenic factors was observed, and no interactions with the hematopoietic factors were seen despite their expression in functional form. Transient transfection studies in C3H myoblasts employing a muscle-specific creatine kinase-chloramphenicol acetyltransferase (CAT) reporter vector provided independent confirmation of the hierarchical interactions of Id proteins with class A factors. Site-directed mutagenesis enabled us to map the regions responsible for establishing Id dimerization preferences to the first helix of the HLH domain and to residues immediately adjacent to this. Our findings have implications for the mechanism by which Id proteins influence class A and class B interactions and for the roles played by different Id proteins in tissue-specific gene regulation.
EXPERIMENTAL PROCEDURES
Plasmid Construction-Parental yeast vectors pGBT9 and pGAD were kindly supplied by Dr. S. Elledge (Baylor College of Medicine). Parental expression vectors for mammalian 2-hybrid analysis, pSG424 and pNLVP16, were obtained from Drs. C. Dang (Johns Hopkins University School of Medicine) and M. Green (University of Massachusetts) (45,46). Murine Id1 was obtained from Dr. H. Weintraub (Fred Hutchinson Cancer Research Center), and murine Id2 and Id3 cDNAs were obtained from Dr. D. Nathans (Johns Hopkins University) (28,30,34). Fragments encoding the HLH regions plus 15-20 flanking amino acid residues were generated by PCR primers incorporating EcoRI (forward primer) and BamHI (reverse primer) sites and cloned directionally and in-frame into pGAD, pGBT9, and pSG424. The fragments amplified encoded amino acids 73-138 of Id1, 72-140 of Id2, and 28 -91 of Id3. All products were sequenced to confirm the fidelity of the PCR amplification reaction. Full-length cDNAs were cloned into pRcCMV (Invitrogen, San Diego, CA) for use in in vitro transcription/translation reactions and transient transfection assays. Murine MyoD, murine myogenin, and human Myf-5 were supplied by Dr. D. Shapiro (St. Jude Children's Research Hospital) and rat MRF4/Myf-6 by Dr. S. Konieczny (Purdue University) (8,15,16,20). Fragments encoding the bHLH domains were again amplified by PCR, incorporating EcoRI and BamHI sites for MyoD or BamHI and PstI sites for myogenin, Myf-5, and MRF4/Myf-6 and cloned in-frame into pGBT9 and pGAD424. The regions amplified included codons 83-184 of MyoD, 55-155 of myogenin, 56 -143 of Myf-5, and 65-168 of MRF4/Myf-6. Due to the presence of an internal PstI site in the second helix of Myf-5 and MRF4/Myf-6, these were cloned as BamHI/blunt fragments. The bHLH domains from MyoD and Myf-5 were also subcloned in-frame into pNLVP16 as bluntended fragments. Full-length MyoD and MRF4/Myf-6 cDNAs were cloned into pRcCMV. The cDNAs encoding each of the class A factors were isolated from a yeast 2-hybrid screen of a murine embryo library in the pGAD10 vector (CLONTECH, Palo Alto, CA) using an Id2 bait. These contained residues 118 -264 of murine E47 (A1), residues 379 -666 of murine E2-2 (ME-2), and residues 574 -729 of murine HEB (Alf-1) (47-49). The E47 fragment was also cloned into both pNLVP16 and pGBT9. A carboxyl-terminal bHLH containing fragment of human E12 (residues 508 -654) in the pAS1 vector, a gift from Dr. E. Olsen (University of Texas), was excised and sub-cloned into pGAD424 (44).
None of the class A clones contained the putative leucine zipper regions that are associated with transcriptional activation domains (50). An almost full-length E12 clone in bluescript (E12R, a gift from Dr. H. Weintraub), including the putative leucine zipper domain, was used for in vitro translation (6). Full-length E12 and E47 in the mammalian expression vector pGK were a gift from Dr. G. Kato (Johns Hopkins University School of Medicine). The cDNAs for human hematopoietic factors Scl/Tal-1, Tal-2, and Lyl-1 were provided by Drs. I. Kirsch (National Cancer Institute), R. Baer (University of Texas), and M. Cleary (Stanford University), respectively (14,19,21). bHLH domains were again amplified by PCR with primers containing EcoRI and BamHI sites and were cloned in-frame into both pGAD424 and pGBT9. The amplified fragments encoded amino acid residues 66-138 of Scl/Tal-1, 2-69 of Tal-2, and 127-200 of Lyl-1. Full-length Scl/Tal-1 was subcloned into pBluescript SK+ (Stratagene, La Jolla, CA). The multimerized gal4:CAT reporter construct pGal5E472CAT was supplied by Dr. M. Green (51). The muscle creatine kinase (MCK) CAT construct was obtained from Dr. S. Hauschka (University of Washington).
Quantitative β-Galactosidase Assay-Yeast transformants were grown to stationary phase in complete EGG medium containing 2% ethanol, 2% galactose, and 3% glycerol and lacking tryptophan and leucine. 10^7 cells were pelleted and resuspended in 50 µl of Z buffer (60 mM Na2HPO4, 40 mM NaH2PO4, 10 mM MgCl2, 50 mM β-mercaptoethanol) containing 0.01% SDS. Two microliters of CHCl3 were added, followed by two cycles of freeze-thawing in liquid nitrogen. Lysates were transferred to 96-well plates, and 50 µl of a fluorogenic substrate (8 mM 3-carboxyumbelliferyl-β-D-galactopyranoside, Molecular Probes Inc., Eugene, OR) was added. After 30 min, 100 µl of stop buffer (300 mM glycine, 15 mM EDTA, pH 11.5) was added, and the reactions were allowed to stabilize for 1 h. Fluorescence was determined in a Perkin-Elmer microtiter plate reader (Foster City, CA; excitation 390 nm, emission 460 nm), and the amount of β-galactosidase synthesized was calculated relative to a dilution series of β-galactosidase enzyme standards (Sigma) assayed simultaneously with the yeast lysates. All assays were performed in triplicate with standard errors of less than 10% in each case. None of the individual vectors used in this series displayed a background higher than 10 pg of β-galactosidase enzyme/10^7 cells, which represents the lower limit of detection with this assay.
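The conversion of a lysate's fluorescence reading into an absolute amount of β-galactosidase via the parallel enzyme dilution series can be sketched as a simple standard-curve interpolation; the fluorescence values below are hypothetical placeholders.

```python
import numpy as np

# Standard curve from the beta-galactosidase dilution series assayed in
# parallel with the yeast lysates (hypothetical readings, arbitrary units).
std_amounts = np.array([0.0, 0.5, 1.0, 2.0, 4.0])   # ng enzyme per well
std_fluorescence = np.array([12, 410, 815, 1640, 3270])

# Inverse linear regression: amount as a function of fluorescence.
slope, intercept = np.polyfit(std_fluorescence, std_amounts, 1)

def ng_beta_gal(sample_fluorescence: float) -> float:
    """Convert a sample fluorescence reading into ng of beta-galactosidase."""
    return slope * sample_fluorescence + intercept

# Example: a lysate from 1e7 cells reading 1500 arbitrary units.
print(f"{ng_beta_gal(1500):.2f} ng beta-galactosidase per 1e7 cells")
```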
Western Blot Analysis-Yeast were grown in EGG medium until they reached stationary phase. They were then pelleted and subjected to two cycles of freeze-thawing followed by boiling in standard SDS-polyacrylamide gel electrophoresis (PAGE) lysis buffer for 10 min to ensure complete lysis (53). 100 µg of protein was resolved by 12% PAGE and transferred to Immobilon-P membranes (Sigma) by semi-dry electroblotting. Membranes were preincubated in wash solution (1× phosphate-buffered saline + 0.1% Tween 20). This was followed by a 1-h incubation in block solution (5% non-fat dry milk powder, phosphate-buffered saline + 0.1% Tween 20) containing either an anti-yeast gal4 DNA binding domain or activation domain antibody (at 1:500 and 1:2000 dilutions, respectively) (Upstate Biotechnology Inc., Lake Placid, NY). Membranes were washed four times before the addition of a horseradish peroxidase-linked goat anti-mouse antibody (1:1000 dilution; Santa Cruz Biotechnology, Inc., Santa Cruz, CA) in blocking solution and incubated for a further 1 h at room temperature. After a further four washes, proteins were visualized using an enhanced chemiluminescence detection kit according to the manufacturer's instructions (Amersham International, Buckinghamshire, UK).
In Vitro Transcription/Translation and Co-immunoprecipitation-Full-length E12, Id1, Id2, Id3, Scl/Tal-1, MyoD, and MRF4/Myf-6 cDNAs in either pBluescript SK+ or pRcCMV were transcribed and translated in vitro using a coupled reticulocyte lysate kit (TNT, Promega, Madison, WI) in the presence of [35S]methionine (1 mCi/mmol). Labeled proteins were mixed and incubated at 37°C for 20 min before the addition of 100 µl of IP buffer (250 mM NaCl, 0.25% Nonidet P-40, 20 mM Tris-HCl, pH 7.5, 1 mM EDTA, 1 mM dithiothreitol). Anti-Id, E12, or MyoD polyclonal antibodies were added and reactions incubated on ice for 30 min prior to the addition of 30 µl of a 1:1 mix of protein A-Sepharose (Bio-Rad) in IP buffer. After a further 60 min incubation, samples were washed four times in IP buffer before resolution by 12% SDS-PAGE. The intensity of each partner was quantified by PhosphorImage analysis (Molecular Dynamics) using ImageQuant software and normalized relative to the methionine content of each protein. The value was then expressed as a ratio of the amount of input proteins. All antibodies were tested for cross-reactivity and shown to be specific for their respective protein under the conditions described here.
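The quantification step described above, normalizing PhosphorImager band intensities to the methionine content of each 35S-labelled protein and expressing the result relative to the input, can be summarized in a short sketch; all numbers are hypothetical placeholders.

```python
def normalized_signal(intensity: float, n_methionines: int) -> float:
    """Correct a 35S band intensity for the methionine content of the
    protein, so that signals from proteins with different methionine
    counts are comparable."""
    return intensity / n_methionines

def fraction_of_input(coip_intensity: float, input_intensity: float,
                      n_methionines: int) -> float:
    """Express the methionine-normalized co-IP signal as a ratio of the
    normalized input signal for the same protein."""
    return (normalized_signal(coip_intensity, n_methionines)
            / normalized_signal(input_intensity, n_methionines))

# Example: an Id protein with a hypothetical count of 5 methionines.
print(f"fraction of input bound = {fraction_of_input(1200.0, 6000.0, 5):.2f}")
```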
Mammalian 2-Hybrid Analysis/Transient Transfections-5 µg of each vector DNA was introduced into HeLa cells by calcium phosphate precipitation, and the cells were maintained in Dulbecco's modified Eagle medium (DMEM) supplemented with 10% calf serum (Life Technologies, Inc.) (54). All transfections included a CMV β-galactosidase reporter (CLONTECH) to standardize transfection efficiencies. CAT assays were performed on cell lysates harvested after 48 h, and relative conversion was calculated by PhosphorImage analysis.
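Normalizing each CAT readout to the β-galactosidase activity of the same lysate, as described above, amounts to a simple ratio; the sketch below illustrates it with hypothetical values.

```python
def relative_cat_activity(percent_conversion: float,
                          beta_gal_activity: float) -> float:
    """CAT conversion corrected for transfection efficiency using the
    co-transfected CMV beta-galactosidase reporter."""
    return percent_conversion / beta_gal_activity

# Hypothetical wells: same reporter, two different Id fusions.
id1_well = relative_cat_activity(32.0, 1.10)
id3_well = relative_cat_activity(3.5, 1.05)
print(f"Id1 vs. Id3 normalized signal: {id1_well / id3_well:.1f}-fold")
```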
A determination of the ability of Ids to repress E protein-MyoD interactions was performed in C3H myoblasts. The amounts of MyoD and E protein vector DNAs used were determined empirically so as to generate the maximum amount of transcriptional activation in combination over either factor alone, prior to determining the influence of co-transfected Id. This ensured that CAT activity was a reflection of a specific MyoD-E2A interaction, minimizing the influence of additional endogenous factors. Two days post-transfection, DMEM + 10% calf serum was replaced with DMEM + 2% horse serum, and incubation was continued for a further 2 days. Under these conditions, endogenous Id levels are repressed, allowing for myogenic differentiation to proceed (55). All transfections were adjusted to give uniform DNA concentrations with the empty pRcCMV vector.
Id Protein Helix Swaps and Site-directed Mutagenesis-Helix swaps between Id1 and Id3 were generated by PCR amplification of individual helices followed by a "cut and paste" cloning strategy. Sixteen residues upstream of the amino terminus and 14 residues downstream of the carboxyl terminus of each HLH domain were maintained to promote correct folding. A fragment of Id1 encompassing helix 1 was amplified with a forward primer containing an EcoRI site. The reverse primer mapped to the helix 1/loop junction and contained a BamHI site. A SmaI site was introduced at the helix 1/loop boundary (residues 102 and 103 of Id1), where the codons CCC ACC were altered to CCC GGG. EcoRI/BamHI-digested fragments were cloned into pBluescript SK+ to create plasmid A. Helix 2 from Id3 was amplified with a 5′ primer derived from the loop/helix 2 region, also containing a SmaI site, and a 3′ primer containing a BamHI site. BamHI/SmaI-digested fragments were then ligated into BamHI/SmaI-digested plasmid A, thus generating an Id1/3 chimera (plasmid B). A complementary approach was taken to construct the Id3/1 chimera. Finally, EcoRI/BamHI fragments were excised from plasmid B and cloned into the yeast vector pGBT9 or into an M13 vector for further point mutagenesis. Oligonucleotide-mediated site-directed mutagenesis was performed with the Muta-gene kit (Bio-Rad) according to the manufacturer's recommendations using the chimeric Id1/3 or Id3/1 single-stranded phage DNAs. All clones were completely sequenced to ensure the presence of the desired mutation prior to subcloning into pGBT9. Each construct was tested for interaction with pGAD E12 and shown to interact with consistent affinity.
RESULTS
Yeast 2-Hybrid Analysis-A quantitative 2-hybrid approach was used to study interactions between Id proteins and a series of class A and class B targets expressed as gal4 binding domain and gal4 transactivation domain fusions, respectively (Fig. 1). All Id-class A interactions were determined to be quite strong by this assay (Fig. 1A). Interactions varied over a 5-fold range from the weakest (Id2-HEB) to the strongest (Id2-E47). The consistently strong Id-class A interactions observed stand in contrast to those seen between the Ids and the class B myogenic factors (Fig. 1B). These latter interactions were considerably weaker than the Id-class A interactions and showed a far greater degree of variability. Id2 displayed the strongest binding, which, in the case of MyoD and Myf-5, was 3-4-fold more avid than that of the corresponding Id1 interactions. In turn, Id1 bound MyoD and Myf-5 with a 5-10-fold greater affinity than Id3, which demonstrated barely detectable interaction with any of the myogenic factors. Of note, however, was that the strength of the Id2-Myf-5 and Id2-MyoD interactions rivaled the weakest Id-class A factor interactions. MyoD and Myf-5 showed the strongest interactions with all three Ids tested. Although Id3 bound weakly, if at all, to the myogenic factors, its binding to class A factors was comparable to that of the other Ids (Fig. 1A). Hence the differences in binding seen here cannot be attributed to differential expression. This was also confirmed by Western blot analysis of yeast lysates showing comparable levels of expression of all proteins under study (Fig. 2). When tested for their ability to bind the class A factors E47 or E12, all the myogenic factors (including MRF4/Myf-6) interacted strongly (Fig. 1B), again indicating that the differences in myogenic factor-Id binding were not due to relative differences in MRF protein expression. None of the hematopoietic factors displayed discernible interactions with the Id proteins, whereas all were found to interact with the class A target E2-2 (Fig. 1C). MRFs were also investigated for homodimerization ability. MyoD, myogenin, and Myf-5 displayed weak homodimerization (less than 50 pg of β-galactosidase detected), whereas MRF4/Myf-6 homodimerization was not observed. Neither the Ids nor the hematopoietic factors displayed discernible homodimerization ability in this assay (data not shown).
Co-immunoprecipitation Analysis-Co-IP of selected full-length in vitro translated proteins was employed to confirm the differences in dimerization properties identified by yeast 2-hybrid analysis (Fig. 3). Increasing amounts of Id1, Id2, or Id3 proteins were incubated with E12, and the resultant heterodimers were precipitated with an anti-E12 polyclonal antibody (Fig. 3A). Consistent with the yeast 2-hybrid data (Fig. 1), all three Ids interacted strongly with E12, with any differences being on the order of less than 2-fold. In contrast, interactions between Id proteins and MyoD were considerably weaker and showed more pronounced differences than those observed with Id-E protein interactions. For example, Id1 and Id2 showed comparable interactions with MyoD, whereas Id3-MyoD interactions were significantly weaker. These results were in good agreement with the results obtained by the 2-hybrid system. All Ids also bound very weakly to MRF4/Myf-6 (Fig. 3C) and not at all to Scl/Tal-1 (Fig. 3D). From these experiments, we conclude that the strengths of Id-E protein and Id-MRF interactions qualitatively and quantitatively reflected those observed in yeast. Furthermore, our results indicate that the dimerization properties of full-length proteins are not significantly different from those of the isolated HLH domains expressed in yeast.
Mammalian 2-Hybrid Analysis-The observation that the Ids displayed the widest range of interactions with MyoD and Myf-5 led us to investigate whether such differences were maintained in mammalian cells (Fig. 4). Id1, Id2, and Id3 were expressed as gal4 DNA binding domain fusions, whereas MyoD and Myf-5 were expressed as VP16 transactivation domain fusions (56). Each was tested for its ability to activate a reporter vector in HeLa cells (Fig. 4A). E47, which displays strong interactions with each Id in yeast (Fig. 1A), was also expressed as a VP16 fusion as a positive control for Id dimerization activities. Appreciable CAT conversion was seen when either Id1 or Id2 was co-expressed with either MyoD or Myf-5 ( Fig. 4B), whereas Id3 interaction with either of the myogenic factors was only marginally above background. These observations were generally consistent with both the yeast 2-hybrid data and with the co-IP experiments (Figs. 1 and 3). As the Ids differed in their ability to dimerize with the MRFs, we investigated their ability to bind E47 (Fig. 4B). Each was found to generate comparable high levels of CAT activity as might have been predicted from our yeast 2-hybrid results (Fig. 1A). Thus the reduced ability of Id3 to interact with MyoD or Myf-5 cannot simply be explained by its differential expression. Our results in both mammalian cells and in yeast appear to reflect the true dimerization preferences of the Id proteins under study.
Id Repression of E Protein-MyoD Heterodimer Activity-Having established that the Ids interacted similarly with each E protein (Fig. 1A), we sought to determine the influence of each Id on the transcriptional activity of full-length E2A·MyoD heterodimers (Fig. 5). We had previously established that this combination of factors gave the highest levels of transactivation from the MCK-CAT reporter.2 The relative consistency of Id-E2A interactions in yeast suggested that each Id should disrupt E2A·MyoD complexes with comparable ability. Low levels of transactivation were observed when either 5 µg of MyoD or 1 µg of E47 or E12 was transfected alone. Co-transfection of these factors, however, generated significant CAT conversion, indicating that transactivation was mediated by an E2A·MyoD heterodimer and not by endogenous factors. The enhanced ability of MyoD to form heterodimers with E47 compared with E12 seen in yeast (Fig. 1B) was also reflected here. Co-transfection of increasing amounts of each Id suppressed this transactivation, with a maximal 2-3-fold reduction in transactivation observed at the highest input Id concentration. In metabolic labeling experiments, the expression of each Id was determined to be comparable (data not shown).
Id Protein Helix Swaps and Site-directed Mutagenesis-To map the amino acid residues responsible for the differential binding capabilities of Id proteins with respect to the MRFs, helix swaps were generated, and site-directed mutagenesis was performed (Fig. 6). All mutants were tested in yeast for their ability to bind E12, and all were found to bind with similar avidity, varying over an approximate 2-fold range (Fig. 6, last column). These results were anticipated based on our observations that wild-type Id1 and Id3 interacted to a comparable extent with E12 (Fig. 1A). Initially, chimeras containing the first helix of Id1 and the loop-helix 2 region of Id3, or the first helix of Id3 and the loop-helix 2 region of Id1, were constructed and tested for interaction with all four MRFs (Fig. 6, lines 3 and 4). These experiments clearly demonstrated that the region determining the specificity of interactions resided in the first helix and/or in the amino-terminal region immediately adjacent to it. Based on this observation, the three amino acid residues that distinguish the first helices of Id1 and Id3 were altered sequentially to convert Id1 to Id3 and vice versa. Altering the individual tyrosine, glycine, and lysine residues of Id1 to the corresponding aspartic acid, histidine, and arginine residues of Id3, respectively, reduced binding in each case (Fig. 6, lines 5, 7, and 9). However, the relative importance of each residue for dimerization was dependent upon the MRF under study. For example, Id1/3 (G92H) showed a 3-fold reduction in the ability to bind MyoD but a 14-fold reduction in the ability to bind Myf-5 (compare Fig. 6, lines 3 and 7). In contrast, Id1/3 (K98R) showed a 12-fold reduction in MyoD binding but only a 4-fold reduction in myogenin binding (compare Fig. 6, lines 3 and 9). Similarly, no single Id3 to Id1 mutation restored maximal binding (Fig. 6, lines 6, 8, and 10). This demonstrates that each residue, even the conserved lysine of Id1 and arginine of Id3, contributes to binding specificity, although the relative importance of each residue is dependent upon the MRF target. To investigate possible additive effects, two further sets of Id3 mutations were investigated for their ability to restore Id1-like binding. A double mutant (Fig. 6, line 11), created by altering the aspartic acid and histidine residues of Id3/1 to the complementary tyrosine and glycine residues of Id1, resulted in almost full binding to MyoD, increased binding to Myf-5, and weak binding to MRF4/Myf-6. This double mutant did not, however, restore myogenin binding. Even when all three residues were altered to those of Id1 (Fig. 6, line 12), no increase was seen in Myf-5 binding when compared with the double mutant. Some binding was seen to myogenin, whereas MRF4/Myf-6 binding was undetectable. These data suggested that helix 1 residues are sufficient for MyoD recognition but that additional residues upstream of helix 1 appear to be required for full dimerization with the other three myogenic factors. To investigate the upstream requirement further, a series of deletions were made in the Id1/3 swap background. The initial deletion of six amino acids at the extreme NH2 terminus of the Id1/3 sequence (Fig. 6, line 13) had a minimal effect on binding (<2-fold), with the exception of Myf-5, whose binding was reduced approximately 3-fold (Fig. 6, lines 3 and 13). Following the deletion of a further six residues (Fig. 6, line 14), no additional effect was seen with respect to Myf-5 and MyoD.
Finally, an internal deletion of the six residues immediately adjacent to the amino terminus of the first helix (Fig. 6, line 15) severely inhibited dimerization with all MRFs, although this was least obvious with MyoD. Interestingly, all of the individual deletion mutants (Fig. 6, lines 13-15) bound E12 as well as either of the wild-type Ids (with <2-fold differences). From these studies, it appears that complete MyoD and Myf-5 interactions with Id1 require similar amino acid residues. The residues facilitating MyoD and Myf-5 binding are not sufficient for myogenin and MRF4/Myf-6 binding, and these latter factors require additional, upstream amino acids. Binding by E12 would appear not to require any of these upstream residues.
A select number of the Id "helix swap" proteins depicted in Fig. 6 were examined using the mammalian two-hybrid system. In these experiments, the HLH domains of Id1/3, Id3/1, and Id1/3 D80-85 were cloned into the pSG424 vector and transfected into HeLa cells together with either pNLVPMyoD or pNLVPMyf5. As shown in Fig. 7, these results recapitulated those observed in yeast. Neither Id3, Id3/1, nor Id1/3 D80-85 gave a detectable interaction with either MyoD or Myf5. In contrast, Id1 interacted strongly. That each of the Id proteins was expressed was confirmed in control experiments showing that all five Id fusion proteins depicted in Fig. 7 interacted strongly with a pNLVP16 fusion of the E12 bHLH domain (not shown).

DISCUSSION

bHLH proteins are involved in diverse aspects of cellular physiology, from establishing the topography of the early embryo (e.g. Twist) to promoting cellular transformation, proliferation, and apoptosis (e.g. c-Myc) and ultimately to the regulation of terminal differentiation programs such as myogenesis (the MRFs) (25, 57, 58). As an increasing number of such proteins are characterized, it is apparent that spatial and temporal expression patterns combine with a hierarchical network of specific protein-protein interactions to provide precise control of many cellular processes.
A variety of biochemical, genetic, and in vivo approaches have been taken in the investigation of dimerization potential, sequence-specific DNA binding, and transcriptional activity of bHLH factors (35, 41, 43, 56, 59 -62). Although our investigation generally considers only one aspect of these activities, dimerization, we feel that a comprehensive understanding of the relative affinities displayed by a group of key factors may provide insight into mechanisms of transcriptional activation. As the function of the myogenic factors both in establishing the myoblast lineage and initiating terminal differentiation is reasonably well described, myogenesis may provide a useful model for other tissue-specific differentiation pathways (reviewed in Ref. 58).
FIG. 6. Id protein helix swaps and site-directed mutagenesis. A series of Id hybrid molecules were tested in their ability to bind all four MRFs and E12 using a quantitative yeast 2-hybrid assay, performed in triplicate. The amount of β-galactosidase enzyme generated per 10^7 cells (ng) for each set of interactions is shown in the right-hand columns. Standard errors were less than 5% in each case.

Numerous groups have reported the ability of class A, class B, and Id proteins to homo- and heterodimerize. Sun and Baltimore (60) reported the dissociation constants (Kd) for MyoD homodimers, E47 homodimers, and E47·MyoD heterodimers to be 6.8 × 10^-4, 1.5 × 10^-5, and 1.9 × 10^-6 M, respectively (60). These differences are remarkably consistent with the values of 0.05, 1.5, and 24 ng of β-galactosidase obtained for these interactions in our study. Similarly, Estojak et al. (63) reported that high, intermediate, and low level interactions, determined in a semi-quantitative yeast 2-hybrid system, accurately reflected calculated Kd values (63). Other yeast 2-hybrid analyses have shown that Id1 and the class B factors MyoD and Scl/Tal-1 bind to the class A factors E2-2 and E12 (43,44). Loveys et al. (35) demonstrated the ability of Id3 to interact with E12, E47, HEB, and MyoD both in yeast and in vitro, observing that Id3 interacted strongly with each E protein but with reduced ability to MyoD (35). They were also able to show that Id3 could repress E protein-mediated transcription from a multimerized immunoglobulin E box reporter. Id1 and Id2 bind at low levels to MyoD and very weakly, if at all, to Scl/Tal-1, as demonstrated by co-IP and glutathione S-transferase pull-down analysis (28,34,41). Mammalian 2-hybrid approaches have also demonstrated the ability of Id1 to interact with both MyoD and E12 and of Scl/Tal-1 to interact with E47 (56,64). These reports concur with the general observation that class A-class B and Id-class A complexes are more stable than Id-class B complexes. Our study not only confirms and extends previous reports of HLH protein dimerization potentials but attempts to provide a consistent comparison of a broad range of interactions.
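The correspondence noted above between the published dissociation constants and our β-galactosidase readouts can be made explicit with a short sketch: since a lower Kd means a more stable dimer, the two rank orders should be inverse, as the assertion below verifies.

```python
# Published Kd values (Sun and Baltimore, Ref. 60) versus the
# beta-galactosidase signals reported here for the same three dimers.
kd = {  # dissociation constants, M (lower = more stable dimer)
    "MyoD homodimer": 6.8e-4,
    "E47 homodimer": 1.5e-5,
    "E47-MyoD heterodimer": 1.9e-6,
}
beta_gal = {  # ng beta-galactosidase per 1e7 cells (higher = stronger)
    "MyoD homodimer": 0.05,
    "E47 homodimer": 1.5,
    "E47-MyoD heterodimer": 24.0,
}

by_affinity = sorted(kd, key=kd.get)                       # strongest first
by_signal = sorted(beta_gal, key=beta_gal.get, reverse=True)
print("rank by Kd:    ", by_affinity)
print("rank by signal:", by_signal)
assert by_affinity == by_signal  # the two orderings agree
```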
All three Id proteins tested here interacted strongly with all class A factors, consistent with their role in the sequestration of these ubiquitous proteins. Although Id-class A interactions displayed the strongest and most consistent interactions among the series of factors tested, each Id nevertheless displayed a discrete "fingerprint" of preferred class A partner. Specific patterns of Id-MRF interactions were also observed, although these interactions were considerably weaker and showed greater variability than those of Id-class A interactions. The finding that Id-MRF interactions were conserved, in contrast to Id-hematopoietic factor interactions, suggests that the Ids may have a role in myogenesis that is distinct from that in hematopoiesis. Finally, the overlap in Id-class B and Id-class A interaction strengths provides additional reason to believe that the former may be physiologically relevant. When the Ids were expressed as full-length proteins in vitro, or as HLH domains in human cells, the differences in relative affinities observed in yeast persisted (Figs. 3 and 4), indicating that the discrete range of interactions accurately reflects the in vivo situation and was not merely a consequence of the expression of mammalian genes in yeast.
Homodimerization of bHLH proteins seems to be important in certain contexts. For example, E47 homodimers are sufficient to activate immunoglobulin gene expression (65). MyoD is also capable of binding DNA as a homodimer, although it has not been established if this is biologically significant as the concentrations required for homodimerization in vitro may not be achievable in vivo (8). On the other hand, it is conceivable that apparently weak interactions are stabilized by accessory factors or that low concentrations of MyoD homodimers are sufficient to initiate the myogenic cascade, and this may be sensitive to direct Id-mediated suppression. Therefore, cell fate determination may in part depend on the relative level of distinct Id proteins to create a permissive environment. Interestingly, our observations indicate that the Ids bind with the highest affinity those myogenic factors involved in myoblast lineage determination (MyoD and Myf-5), whereas those MRFs active post-mitotically (myogenin and MRF4/Myf-6), a time when Id levels are generally low or undetectable, bind Ids considerably less well.
The differential affinities of the Id proteins for both the class A and class B MRFs could be explained if these latter proteins homodimerized. Under such conditions, there might exist a differential availability of E proteins or MRFs for Ids due to their sequestration in homodimeric form. In other experiments, we have examined the ability of the Ids, the E proteins, and the class B MRFs and hematopoietic factors to homodimerize using the same two-hybrid system shown in Fig. 1.³ We detected homodimerization only with E47, although at levels that were 10–30-fold lower than its interaction with any of the three Ids. Thus, limitation of E protein accessibility to the Ids due to homodimeric sequestration appears unable to explain our results.
Having demonstrated that Id proteins bind each class A molecule and a subset of class B molecules, we tested the ability of each Id to suppress transactivation from a muscle-specific E box by an E2A·MyoD complex in vivo (Fig. 5). Id1, Id2, and Id3 were found to disrupt E2A·MyoD complexes with comparable ability, consistent with the yeast data. Interestingly, an apparent excess of Id was unable to completely abrogate binding. Other studies on the effect of Id1 or Id3 on the transactivation potential of MyoD or E proteins have also reported a less than complete suppression of activity, with even greater excesses of Id than we have utilized (28, 35). Investigation of the activity of any transcription factor in isolation is complicated by the influence of endogenous factors. Having established conditions in which each factor alone results in minimal transactivation, we can be certain that the repressive effect of the Ids is exerted only on the factors under study.

³ K. Langlands and E. V. Prochownik, unpublished data.

FIG. 7. Mammalian 2-hybrid interactions of MyoD and Myf5 with select Id1 and Id3 mutants. The pNLVP vectors used were the same as those described in Fig. 6. The Id1/3, Id3/1, and Id1Δ80–85 mutant HLH domains shown in Fig. 6 were amplified by PCR and cloned into the pSG424 vector. The indicated amounts of each plasmid along with pGal5E472CAT (5 μg/plate) and a CMV-βgal (2 μg/plate) were transfected into HeLa cells using a calcium phosphate precipitation procedure and assayed for β-galactosidase and CAT 2 days later.
The observation that the Ids differed in their ability to recognize myogenic partners led us to investigate the precise residues responsible for this specificity (Fig. 6). Dimeric bHLH complexes bound to DNA are predicted to form a parallel 4-helix bundle stabilized by a hydrophobic core as well as by electrostatic interactions (2, 8, 62). Shirakata et al. (62) reported that the alteration of five non-hydrophobic residues distinguishing chicken MyoD from the Drosophila MyoD homologue nautilus (which does not bind E12) led to a progressive reduction in MyoD·E12 dimer formation, suggesting that these residues confer an additive effect on binding (62). Likewise, in our study, no single residue was able to confer Id1-like characteristics. Rather, a combination of non-conserved residues was involved in MRF binding. However, these residues did not appear to be important for Id-class A interaction (Fig. 6). Two non-conserved residues in the first helix were sufficient for almost complete MyoD binding. These residues, Tyr-88 and Gly-92 of Id1, are uncharged, whereas the corresponding residues of Id3 (Asp-42 and His-46) confer a negative and positive charge, respectively. Thus, the differences in Id binding (with respect to MyoD) may be consistent with the "charged pair" model which suggests that attractive or repulsive forces between contacting residues in the helices of aligned molecules act to stabilize or destabilize dimer formation (62). The residue at the third position also influences heterodimerization even though this represents a conservative change (Lys-98 in Id1 to Arg-52 in Id3). In this case, the larger arginine side chain may act to destabilize Id3·MRF complexes. Despite the obvious importance of these residues, the complete conversion of Id3 helix 1 to that of Id1 helix 1 restored only 50% Myf-5 binding and only minimal interaction with myogenin and MRF4/Myf-6, suggesting that additional residues exert an effect. Indeed, a series of amino-terminal deletions (Fig. 6, lines 13 and 14) demonstrated that a region outside the HLH domain was important for Id1-MRF interactions, particularly with respect to myogenin and MRF4/Myf-6. Interestingly, these deletions did not appear to influence E12 binding. Upstream sequences may not be involved in establishing dimeric complexes as such, but rather in stabilizing pre-formed interactions determined by residues in the HLH region. Such subtle cooperative effects might be expected to contribute more to weak Id-myogenin or Id-MRF4/Myf-6 interactions than to stronger associations. Goldfarb et al. (43) used random mutagenesis in conjunction with a yeast 2-hybrid system to generate both hydrophobic and hydrophilic substitutions which enhanced the ability of Scl/Tal-1 to recognize E2-2 (43). Such alterations were again found to have a synergistic effect on dimer stability. More extensive mutagenesis is required to fully understand the diversity of dimer preferences displayed by HLH factors. The quantitative yeast 2-hybrid system described in this report provides a useful tool for this purpose.
Regions other than the HLH domain, such as the putative leucine zipper present in the class A proteins, have the potential to modulate interactions (1, 11, 12). Because our co-IP findings with full-length proteins are consistent with both our yeast and mammalian 2-hybrid results, we infer that other potential dimerization domains do not contribute significantly to the interactions observed here. However, we have not evaluated the possible influence of post-translational modifications on dimerization (47, 59, 66, 67). Although HLH activity may be modulated in vivo, the key differences in binding characteristics presented in this study may play a fundamental role in the establishment of differentiation programs. There still exists, however, a great deal of redundancy. One apparent paradox is that there is little evidence, as yet, for the combinatorial variation in E box binding allowed by heterodimerization between different class B groups and any class A factor. For example, all class A·MRF dimers so far studied efficiently bind an element in the muscle creatine kinase enhancer, and binding site selection indicates that Scl/Tal-1, Tal-2 and Lyl-1 heterodimers with E2-2 all recognize the same putative hematopoietic-specific element (68, 69). Spatial and temporal differences in the expression of factors with apparently conserved function may also explain some of the apparent overlap. Alternatively, redundancy may reflect the need for cooperative occupancy of proximal promoter elements by accessory factors. A distinct group of tissue-restricted non-bHLH transcription factors have been identified, such as the myocyte enhancer family (MEF-2) in muscle and the zinc finger GATA factors in erythropoiesis (70, 71). These may act to enhance the transcriptional activity of bHLH dimers, thereby modulating lineage-specific gene expression. Indeed, the apparent redundancy in function may mask the requirement for a subtle gradient of both ubiquitous and tissue-specific factors to facilitate the controlled establishment of the differentiated phenotype.
"year": 1997,
"sha1": "314a0a7dac4b680bd76342de36f0f44b9530810b",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/272/32/19785.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "7941ed9b11ed956cbae5259939dd475e589e5a0c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
119101421 | pes2o/s2orc | v3-fos-license | Landslide Cartography at the Region of Nabeul-Hammamet Based on Geographic Information System and Geomatic
Landslides are one of the most significant damaging natural disasters in hilly environments [1]. Our study area, in the north of Tunisia, is home to several manifestations of land instability; we study it using a GIS and geomatics approach in order to reduce the social and economic losses due to landslides. A cartographic data base for the landslide study in the Cap-Bon region was built from 1/50,000 geologic and 1/25,000 topographic maps, Aster optical remote sensing, field observations, and climatologic and seismic data. These data were digitized, georeferenced, vectorized, spatially analyzed, classified and geoprocessed in order to produce a landslide map. The findings show that terrains with loose and friable lithology located in the more seismically active zones (more than 4 on the Richter scale) present some instability. The most endangered zones are in the North-West, around Oued El Kbir and El Ain. This work helps to determine the most hazardous zones so that policy makers can intervene effectively in the field.
Introduction
The Cap Bon region has considerable economic importance for Tunisia. It contains about 30% of the touristic resorts of Tunisia and is undergoing important urban development. For these reasons, this work focuses on the cartography of land instability hazards for the region lying between the town of Beni Khiar and Hammamet. Landslides are a common hazard, especially during the rainy season [2] [3]. A data collection study about land instability has therefore been conducted. By analyzing and integrating these data in a GIS, we produced a land instability map that can serve as a decision-support tool for better urban and environmental management.
The Study Method
The cartography of land instability areas in the regions of Nabeul and Hammamet requires analyzing, step by step, the process of producing this map (Figure 1).
Geographic Level
The Nabeul-Hammamet region is located in the North-East of Tunisia, between the meridians 641.963 and 667.000 and the parallels 4.028.304 and 4.054.072. It lies on the south-west of the Cap Bon peninsula (Figure 2).
Definition of Land Instability
Land instability means the wrenching and displacement of ground and rock masses on a slope, along an identifiable rupture surface, under the effect of gravity [4]. It depends mainly on the land morphology, the geologic structure, climate conditions, seismicity, the hydrographic system, underground hydraulic pressure and anthropic activities [5]. These phenomena differ according to their extent, their movement speed, the land lithology and the geometry of their rupture surfaces.
Identifying the Cartographic Data Base for Land Instability
Geographic Information System (GIS) modelling of landslide phenomena has taken precedence in recent times. Geospatial technologies such as GIS and remote sensing are useful for hazard assessment, risk identification, and disaster management for landslides [6]. Accordingly, in this study we used image processing software (Envi 4.7) and GIS software (ArcView 3.1) to set up a geographic data base (GDB) that allows a better understanding of the risks of land instability in the region of Nabeul-Hammamet. Building the GDB required the collection of multidisciplinary data and its digitization, georeferencing and vectorization, in order to produce the themes needed for the targeted maps.
Lithologic Maps
Our lithologic map was produced by vectorizing the two 1/50,000 geologic maps of Nabeul and Hammamet. According to the works of [7]-[10], the studied area is composed mainly of sand, clay and sandstone (Figure 3).
-The Somaa formation is composed of yellow or red sands intercalated with conglomeratic levels.
-The Beni Khiar formation presents a progressive passage from continental sandstones to oolitic limestone.
-The Oued El Bir formation is composed of sandstones, sand and azoic clays.
-The pottery clay is plastic, grey in color with a blue patina.
-The yellow sand of Nabeul is characterized by a varying granulometry with some conglomeratic levels.
-The sand and sandstones of Hammamet are fine yellow sands, sometimes containing clay.
-The Sicilian levels are generally composed of poorly consolidated conglomerates and dark grey sand.
-The Tyrrhenian outcrops as a coastal cord formed by sandstone and limestone at the base and dune sandstone with large oblique stratifications at the top.
The lithologic nature and the consolidation state of each formation allowed us to predict its mechanical behavior. This is supported by the work of [11], which established the direct relation between the friction angle φ and the horizontal displacement of materials, classically written as σh = Ka · σv, with Ka = tan²(45° − φ/2), where: σh: horizontal stress; σv: vertical stress; Ka: thrust (active earth pressure) coefficient of the land.
Applying this equation allowed us to classify the lithologic units into 3 classes depending on their susceptibility to instability. To facilitate implementing this classification in the data base, an index proportional to each unit's instability was assigned to the surface lithology (Table 1).
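As an illustrative sketch (not part of the original workflow), the thrust coefficient can be computed from a friction angle and mapped to an instability class; the class cutoffs of 0.3 and 0.4 are hypothetical placeholders, since the actual thresholds of Table 1 are not reproduced here.

```python
# Illustrative sketch only: compute the thrust coefficient Ka from a friction
# angle and bin it into three instability classes. The class cutoffs (0.3,
# 0.4) are hypothetical placeholders, not the actual thresholds of Table 1.
import math

def thrust_coefficient(phi_degrees):
    """Ka = tan^2(45 - phi/2), the active earth pressure coefficient."""
    return math.tan(math.radians(45 - phi_degrees / 2)) ** 2

def instability_class(phi_degrees, cutoffs=(0.3, 0.4)):
    """Lower friction angle -> higher Ka -> higher instability index."""
    ka = thrust_coefficient(phi_degrees)
    if ka < cutoffs[0]:
        return 1  # well-consolidated sandstone or limestone
    elif ka < cutoffs[1]:
        return 2  # intermediate formations
    return 3      # loose sands and plastic clays

for phi in (40, 30, 20):  # indicative friction angles, strongest to weakest
    print(f"phi = {phi} deg -> Ka = {thrust_coefficient(phi):.2f}, "
          f"class {instability_class(phi)}")
```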
Fracture Map
It includes the faults estimated and identified at the level of our section. These faults are generally oriented N120 to N170 (Figure 3). Following [12], who affirms that fault disposition is the most important criterion in our thematic, we chose to attribute:
-Index 1: to the estimated faults affecting sandy and clayey lands, or those which have not been reactivated for a long time.
-Index 3: to the faults with a dip smaller than that of the identified faults, and also to the conjugated faults.
Slopes Map
The slopes map was derived from a digital elevation model (DEM) produced by extracting and processing the contour lines at a 25 m interval from the 1/25,000 topographic maps of the Hammamet and Nabeul NE, NW, SE and SW sheets.
It shows that the study area is a peneplain, except for the Nabeul-Hammamet monocline where slopes can exceed 30% (Figure 4). The works of [13] [14] have shown that the stability of slopes is maintained thanks to a balance between the driving forces and the resisting forces, classically expressed by the safety factor F = tan φ / tan α, where φ: friction angle, α: slope angle. Applying this equation, which depends on the slope, allowed us to classify the topography into three classes depending on their contribution to instability (Table 2).
Hydrographic Map
The same topographic maps were used as a data base for vectorizing the hydrographic network of the study area (Figure 5). The result shows that most of the network has a trellis pattern, which indicates permeable and friable lands where water can easily dig and mobilize particles [15]. Applying the Strahler classification to the whole network, field observation showed that land instability is proportional to the order of the drain. This is confirmed by the progressive increase of the solid load in the drains from upstream to downstream; hence the choice of indexes for the different drain classes (Table 3).
Seismic Map
The seismic map (Figure 6) shows the average intensity of the earthquakes that affected the southern part of Cap Bon over the last thirty years, recorded by several geophones located at Nabeul-Hammamet, Dar Chabaan, Bou Argoub and Grombalia. These data, issued by the National Meteorology Institute and represented in raster mode, show that the Nabeul-Hammamet region is seismically rather stable: the maximum magnitude recorded was 4.1.
Table 4 illustrates the classification adopted for the seismic factor.
Rainfall Map
The rainfall map (Figure 7) shows the spatial distribution of the interannual average rainfall over the last decade. The data base used in realizing this map consists of the monthly precipitations measured at the meteorological stations of Nabeul, Oued Souhil, Hammamet, Khangt El Hagag and Bou Argoub. As the region belongs to the Mediterranean climate, characterized by precipitations of low frequency and high intensity (Figure 8), we attributed the instability index 2 to the whole area.
Soil Exploitation Map
The land use map was created by supervised classification of a high resolution Aster satellite image acquired on April 16, 2002, which shows five main classes of land cover (Figure 9). The area is essentially occupied by agricultural land, which covers more than 65%, the urban sites (Nabeul, Hammamet, Dar Chaabane and Beni Khiar), the forest of Jebel Nabeul-Hammamet, Jebel Ksous, and waste lands.
Every type of land use contributes to land instability in a specific way, as illustrated in Table 5.

[16] has shown that the impact of a fractured sector on land instability is generally localized within a corridor nearly 100 m wide. This provided a basis for creating buffer ("tampon") zones of 50 m on each side of the faults and assigning an instability index to the faults present in our data base.
Applying Buffer Zones to the Hydrographic Factor
For the hydrographic network this operation is a bit more complicated, because we attributed a specific distance to each chosen index:
-25 meters for the index 1 drains;
-50 meters for the index 2 drains.
This choice was based on field observation of the areas most sensitive to land instability, around the wadis Essghir, El Kbir and Abid.
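An illustrative sketch of this index-dependent buffering follows, written with shapely rather than the ArcView 3.1 software actually used in the study; apart from the 25 m and 50 m distances, the line geometries are made-up placeholders.

```python
# Illustrative buffering sketch using shapely rather than ArcView 3.1, which
# was used in the study. Buffer width depends on the drain's instability
# index: 25 m for index 1 drains and 50 m for index 2 drains; the line
# geometries are made-up placeholders.
from shapely.geometry import LineString

BUFFER_BY_INDEX = {1: 25.0, 2: 50.0}

drains = [
    (LineString([(0, 0), (100, 50), (200, 60)]), 1),    # low-order drain
    (LineString([(0, 80), (150, 120), (300, 140)]), 2), # high-order drain
]

for geom, idx in drains:
    buf = geom.buffer(BUFFER_BY_INDEX[idx])
    print(f"index {idx} drain -> buffer area {buf.area:.0f} m^2")
```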
Themes Union
This operation associates the entities of an input theme with the polygons of an overlay theme to produce an output theme containing the attributes and geometry of both themes.
Accordingly, we elaborated the synthesis map in vector mode by converting the maps of slopes, soil exploitation, rainfall, seismicity and the buffer zones into vector files.
The first union operation combined the lithologic and fault themes; the result was then combined with the slope theme.
Analogous operations were performed by integrating, theme after theme, the instability indexes of:
-rainfall,
-seismicity,
-drain classes,
-soil exploitation.
After each union operation, we used the data base to multiply the instability indexes of the input theme by those of the overlay theme (Table 6).
Table 6. Multiplication of the proportional indexes of the lithologic theme by those of the fault theme (lithology index × fault index).

The indexes produced by this first multiplication (Table 6) were in turn multiplied by the instability indexes linked to the slope factor (Table 7). The final result of these spatial analysis and data combination operations is illustrated by the instability map of Nabeul-Hammamet (Figure 10).
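A minimal sketch of this multiplicative overlay is shown below, assuming each factor map has been rasterized to a per-cell instability index; the small arrays and index values are made-up examples, not data from the study.

```python
# Minimal sketch of the multiplicative overlay, assuming each factor map has
# been rasterized to a per-cell instability index. The 2x3 arrays are made-up
# examples, not data from the study.
import numpy as np

lithology = np.array([[1, 2, 3],
                      [3, 2, 1]])
faults    = np.array([[1, 1, 3],
                      [3, 1, 1]])
slope     = np.array([[2, 2, 2],
                      [4, 2, 2]])

# Union plus index multiplication, theme after theme (Tables 6 and 7).
hazard = lithology * faults * slope
print(hazard)  # the largest products flag the zones most exposed to instability
```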
Figure 10. Map of land instability hazard.
Interpreting the Cartographic Maps
The land instability map contains 11 indexes representing the state of land instability. The lowest indexes (4, 6, 12 and 16) cover 87.08% of the total surface of the study area, indicating that the studied area is generally stable despite the abundance of loose lands. The impact of the topographic, tectonic, seismic and hydrographic factors generated the very high indexes 24, 36 and 48, which mark the increase of surface instability. These indexes are found mainly in the North-East of the studied area, within the index-2 seismic aureole or in the buffer zones along rivers and faults.
The highest indexes, representing the areas most affected by land instability, cover only 0.3% of the total surface, but this amounts to 103 hectares. They are localized on the banks of the wadis El Kbir and El Ain, crossing loose lands under the influence of the index-2 seismic aureole.
The highest values on the map (Figure 10) manifest in the field in different forms:
-rotational landslides that caused the fracturing and detachment (Photo 1) and even the fall (Photo 2) of some retaining walls;
-fractures and collapses due to the karstification of limestone crusts on the bank of Wadi El Kbir (Photo 3).
Photo 1. Fracturing and detachment of a wall located on the edge of Wadi El Ain.
Photo 2. Fall of a portion of the wall at the Wadi El Kbir bridge.
Photo 3. Fracturing and collapse of limestone crusts.
Conclusions
Through this study, a land instability hazard map has been produced. This geo-cartographic multisource document is designed to draw the attention of planners to the potential or actual risks of certain lands in relation to the nature and specificity of each area. Producing up-to-date and accurate landslide susceptibility maps can ensure the safety of people and property at risk and avoid extensive economic loss (Kavzoglu et al., 2013). GIS techniques have facilitated the use and management of the data base for this work. The resulting map shows that, in addition to the lithologic and topographic factors, land instability may be conditioned by hydrographic and seismic factors.
Figure 1. Methodology adopted for the implementation of the hazard map.
Figure 3. Map of lithology and fracturing.
Figure 8. Average monthly precipitation over the past decade.
Table 1. Classification of lithologic hazard indexes.
Table 2. Classification of topographic hazard indexes.
Table 3. Classification of hydrographic hazard indexes.
Table 5. Classification of soil types according to their contribution to land instability.
Table 7. Second multiplication of instability indexes.
"year": 2016,
"sha1": "f074b432e0340ae48b25e551a04e7c30e5217d55",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=71009",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "f074b432e0340ae48b25e551a04e7c30e5217d55",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": [
"Geology"
]
} |
213184189 | pes2o/s2orc | v3-fos-license | LIVE-PAINT: Super-Resolution Microscopy Inside Live Cells Using Reversible Peptide-Protein Interactions
We present LIVE-PAINT, a new approach to super-resolution fluorescent imaging inside live cells. In LIVE-PAINT only a short peptide sequence is fused to the protein being studied, unlike conventional super-resolution methods, which rely on directly fusing the biomolecule of interest to a large fluorescent protein, organic fluorophore, or oligonucleotide. LIVE-PAINT works by observing the blinking of localized fluorescence as this peptide is reversibly bound by a protein that is fused to a fluorescent protein. We have demonstrated the effectiveness of LIVE-PAINT by imaging a number of different proteins inside live S. cerevisiae. Not only is LIVE-PAINT widely applicable, easily implemented, and the modifications minimally perturbing, but we also anticipate it will extend data acquisition times compared to those previously possible with methods that involve direct fusion to a fluorescent protein.
Introduction
Optical microscopy has traditionally been restricted to a resolution of ~250 nm due to the diffraction limit of light. New methods, collectively grouped under the term superresolution microscopy, have increased the resolution of fluorescence microscopy by almost two orders of magnitude, allowing systems previously inaccessible to fluorescence microscopy to be studied [1][2][3][4][5] . These methods rely on either limiting the illumination of the sample to regions smaller than the diffraction limit 1 , or stochastically and temporally separating the emission of individual fluorophores to allow their positions to be precisely localized. This latter strategy is termed single-molecule localization microscopy (SMLM), and various approaches have been developed to enable the required stochastic emission, including stochastic optical reconstruction microscopy (STORM) 5 , photo-activation localization microscopy (PALM) 2 , and point accumulation for imaging in nanoscale topography (PAINT) 6 .
In the original implementation of PAINT, fluorescent molecules (for example, Nile red) bind transiently and non-specifically to hydrophobic regions of a structure 6 , and a super-resolution image is built up as each one is localized. Unlike PALM and STORM methods, which are limited by photobleaching of the dye molecules over time, in PAINT-based methods there is continual replenishment of the fluorescent probes, which allows much longer imaging times, resulting in a higher density of localizations, and the potential for a higher resolution image 7 . In all PAINT methods, the concentration of the interacting fluorescent molecule can be varied and optimized, and low-precision localization events can be discarded, which also contributes to increased resolution.
DNA-PAINT has seen many variations and innovative applications [8][9][10][11] . A significant limitation, however, is that DNA-PAINT cannot be used to visualize proteins inside live cells 12 . Although extensions of DNA-PAINT, in which the DNA is fused to a nanobody or another protein-binding module, enable intra-cellular proteins to be visualized, the cell must be permeabilized to allow them to enter. As a result, work in live cells has been limited to the visualization of cell-surface proteins 12 . Here, we describe a PAINT-based method that has all the advantages of DNA-PAINT, but with the enormous benefit that it can be used for imaging inside live cells. We refer to this approach as LIVE-PAINT (Live cell Imaging using reVersible intEractions PAINT).
In LIVE-PAINT, reversible peptide-protein interactions, rather than zipping/unzipping of a DNA oligonucleotide duplex, are responsible for the transient localizations required for SMLM. The protein to be imaged is genetically fused to a short peptide and expressed from the protein's endogenous promoter. Additionally, integrated at a suitable place in the genome, a peptide-binding protein is genetically fused to a fluorescent protein (FP) and expressed from an inducible promoter, allowing its expression level to be controlled and optimised. The small size of the peptide tags fused to the protein of interest is another important strength of the method. It enables post-translational fluorescent labeling of target proteins that do not tolerate a direct fusion to a FP. To illustrate this point, we show that LIVE-PAINT can be used to perform in vivo super-resolution imaging of proteins, such as actin and cofilin, which are notoriously refractory to direct fusions 13,14 . Furthermore, we show that LIVE-PAINT can be used to perform diffractionlimited tracking of individual biomolecules for extended periods of time.
LIVE-PAINT achieves super-resolution inside live cells using reversible peptide-protein interactions
The essence of LIVE-PAINT is to visualize individual fluorescent molecules transiently attached to a cellular structure of interest. The individual fluorophores are thus identified by temporal, rather than spatial, separation. LIVE-PAINT achieves sparse labeling by using reversible peptide-protein interactions. The protein of interest is directly fused to a peptide and a FP is fused to the cognate protein ( Figure 1A). The peptide-protein interactions are chosen so that solution exchange occurs on a timescale shorter than or comparable to the bleaching lifetime, allowing many sequential images to be obtained.
In each image, a different peptide-tagged protein of interest is bound to a different protein-FP, allowing individual proteins to be precisely localized ( Figure 1A-D). These localization events are then summed to generate a super-resolution image ( Figure 1E).
As a test case with which to optimize this approach, we visualized the cell division septin protein Cdc12, a component of the readily-identifiable septum that is formed during Saccharomyces cerevisiae budding. We tested LIVE-PAINT using two different peptide-protein interactions with very different dissociation constants and molecular structures: TRAP4-MEEVF (a TPR-peptide pair with a dissociation constant (KD) of 300 nM) and SYNZIP17-SYNZIP18 (an antiparallel coiled coil pair with a KD of 1 nM) [15][16][17][18] . In both cases, the peptide (MEEVF or SYNZIP18) is fused to Cdc12 and the cognate protein or peptide (TRAP4 or SYNZIP17, respectively) is fused to the bright green FP mNeonGreen (mNG) (11). Although mNG is known to blink intrinsically 19 , we chose to use it in our experiments because it is very bright and therefore can produce very precise localization events. mNeonGreen has a brightness of 93 20 , while other FPs we use in this work, mKO and mOrange, have brightness of 31 21 and 49 22 , respectively.
Most importantly, we show that mKO and mOrange, which are not known to blink intrinsically, are also compatible with LIVE-PAINT ( Figure S1 and Supplementary movies 1 and 2). TRAP-peptide pairs have been shown previously to be less perturbative for cellular imaging than direct fusion to a FP 23 . Both TRAP4-MEEVF and SYNZIP17-SYNZIP18, were well-tolerated by the cell, and both can be used for either diffraction limited or super-resolution imaging of the septum in live yeast ( Figure 1E).
We observed no distorted cell morphology or changes in growth rate in liquid media when using the TRAP4-MEEVF and SYNZIP17-SYNZIP18 interaction pairs. In previous work we observed distorted cell morphology for ~5% of yeast expressing a direct fusion of Cdc12 to an FP 23 . We also provide evidence that LIVE-PAINT can be performed with additional peptide-protein interaction pairs ( Figure S2), that the localization events observed are specific to the protein being labeled ( Figure S3), and that two orthogonal interaction pairs can be used with two FPs to tag two different proteins specifically and concurrently ( Figure S4).

Figure 1 (caption excerpt): Scale bars are 1 μm, except for the inset to (E), which has a 100 nm scale bar.
Low FP expression levels result in a higher percentage of bona fide localization at the septum
In LIVE-PAINT, the peptide-binding proteins fused to mNG (TRAP4-mNG and SYNZIP17-mNG), are expressed from an inducible promoter, so that expression levels can be optimized 24 . See Figure S5 for fluorescence induction profile.
By varying the expression level of either TRAP4-mNG or SYNZIP17-mNG, for the TRAP4-MEEVF and SYNZIP17-SYNZIP18 interaction pairs respectively, we can determine which conditions generate the highest percentage of localizations at the septum relative to non-specific localizations ( Figure 2). For very low expression levels, for example for 0% galactose with 'leaky' expression, not enough mNG is expressed and not enough localization events are achieved to generate a super-resolution image.
Conversely, for example for 0.1% galactose, expression levels are too high and very few individual localization events can be visualized, because the density of mNG is too high to achieve sparse labeling. At intermediate expression levels, for example with 0.005% or 0.02% galactose, there are sufficient FPs that enough localization events can be recorded to resolve a super-resolution image, but the FP expression level is not so high that single localization events cannot be recorded.
We performed cluster analysis using the DBSCAN function (see methods) to quantify the number of localization events in the septum versus in the rest of the cell. We were thus able to identify the conditions that produced the most specific super-resolution images. In an analogous fashion to DNA-PAINT, the FP mNG does not give rise to a localization event until it binds and is immobilized. Some non-specific localization or blinking events are recorded; these are randomly distributed within the cell and can be removed from further analysis by using DBSCAN. The number of these non-specific localization events increases with galactose concentration, because increasing galactose increases the number of free mNG molecules that are not bound to a Cdc12 protein.
Septum width increases with increasing yeast daughter:mother diameter ratio
To demonstrate the potential of LIVE-PAINT, we show an example of how it can be used to study a biological structure in live cells. By analyzing SMLM data for Cdc12 in individual cells, obtained using LIVE-PAINT with the SYNZIP17-SYNZIP18 interaction pair, we are able to describe various features of the yeast budding process. For example, we find that for small daughter cell sizes (daughter:mother diameter ratio less than approximately 0.85), the septum width is of the order of 200 nm. As the daughter cell gets larger (daughter:mother diameter ratio approximately 0.85 to 1.0), the septum is clearly visible as two separate rings, with a septum width of approximately 400-800 nm.
See Figure S6. This example demonstrates that LIVE-PAINT can be used to study a biological structure in live cells on the single cell level.
Labeling using a construct with three tandem copies of mNG improves localization precision compared to a single copy.
In current super-resolution imaging techniques used inside live cells, such as PALM, the target protein is directly fused to a FP. This fusion adds a large, 25 kDa, modification to the target protein. Trying to enhance the PALM signal by fusing three FPs to the same target protein, would increase the size of the overall protein by about 75 kDa. Many proteins are unable to fold and correctly mature to their functional state when fused to a single FP, therefore a larger modification to a target protein, on the order of 75 kDa would likely be even more detrimental.
With the LIVE-PAINT method, however, the protein of interest is labeled posttranslationally and reversibly. Thus, labeling with multiple tandem FPs should be more feasible. We performed LIVE-PAINT on Cdc12-SYNZIP18 using the SYNZIP17 fused to one or three tandem copies of mNG and compared the super-resolution data obtained for both conditions ( Figures S7 and S8). Cdc12 not only tolerates such posttranslational labeling with the three tandem mNG, but labeling with this construct results in better localization precision. We note, however, that the larger size of the three tandem mNG construct creates additional distance between the protein of interest which would result in increased uncertainty about its actual position.
LIVE-PAINT enables longer data acquisition times
An additional advantageous feature of the LIVE-PAINT method is that it allows bleached fluorescent labels to exchange with unbleached fluorescent labels, in vivo. In the case of STORM and PALM imaging methods, photobleaching of the probe adds a limitation to the number of emitters that can be localized. This photobleaching reduces the resolution of the image because it limits the density of emitters that can be measured.
Thus, researchers have to resort to using localization events with lower signal to noise than is optimal. In many cases, control of the emission is difficult to achieve, and much of the fluorescent probe is bleached early in the acquisition when individual emitters cannot be discerned due to their density being too high, further limiting the density of localizations measured. Here we demonstrate the ability to image for longer periods of time with LIVE-PAINT, using the SYNZIP labeling pair.
When imaging using a conventional direct fusion of Cdc12 to mNG, we observe that after we deliberately photobleach by irradiating with high laser power for two minutes, very few localization events are subsequently observed. In contrast, when using SYNZIP17-SYNZIP18 to localize mNG to Cdc12, after we deliberately photobleach by irradiating with high laser power for two minutes, we subsequently observe many more new localization events, indicating that the bleached SYNZIP17-mNGs can unbind and be replaced by unbleached SYNZIP17-mNGs from the cytoplasm. This result shows that the LIVE-PAINT imaging strategy allows one to obtain more total localization events during an imaging session, because they allow for longer imaging times ( Figure 3). The
individual cells imaged using LIVE-PAINT for the data in Figure 3 were measured to have a resolution of ~20 nm (see Figure S9 for maximum projection images of the individual cells analyzed in Figure 3).
LIVE-PAINT signal replenishment increases with increasing concentration of FP
The data in Figure 3 shows that reversible interaction pairs can unbind from the target protein and signal can be replenished by free protein-mNG binding to the target protein.
Building on this result, we compared how long data collection can be continued, when there is a high versus low level of peptide-binding protein-mNG in the cytoplasm. Figure 4 shows the results of such experiments for both the SYNZIP17-SYNZIP18 and TRAP4-MEEVF interaction pairs. For 0% galactose, where the expression level of peptidebinding-module-FP is low, almost all the binding-module-FP will be initially bound to Cdc12, thus all FPs will be illuminated and bleached rapidly, because there is not a cytoplasmic pool of peptide-binding-protein-FP for them to exchange with. By contrast, for 0.1% galactose, where the expression level of the peptide-binding protein-FP is high, there is a sizeable cytoplasmic pool available to exchange with molecules bound to peptide-Cdc12, but which have been bleached. In Figure 4B, for example, we observe that when imaging Cdc12-SYNZIP18 + SYNZIP17-mNG using 0.1% galactose, even after 200 s of imaging, localizations are still being recorded at about 30-40% of the initial rate.
LIVE-PAINT can be used to image proteins that are refractory to direct fusion to a large protein
Actin, an important cytoskeletal protein, is notoriously difficult to tag and image. A number of different methods have been developed to circumvent this problem, but they are not without issues, including changing the stability, dynamics, and lifetime of actin structures 13,25,26 . Moreover, very few of these methods can be used inside live cells and none is currently compatible with live cell super-resolution imaging. We therefore investigated if LIVE-PAINT could be used to image actin. Wild-type actin was chromosomally expressed from its endogenous promoter. We expressed SYNZIP18-actin from a low copy number plasmid, using a copper-inducible promoter. SYNZIP17-mNG was expressed, as previously, from the galactose inducible promoter, chromosomally integrated at the GAL2 locus ( Figure 5).
Using LIVE-PAINT, we were able to readily visualize actin patches, which assemble at the cell membrane, at sites of endocytosis 27 ( Figure 5A). Because actin structures are quite dynamic, we investigated how quickly we could obtain super-resolution images (compared to the acquisition time of 200 s for the data shown in Fig 5C). Actin rings, or actin cables that span the cell, are likely not observed because we are imaging in TIRF, which illuminates only about 200 nm into the cell (a typical yeast cell is 1-3 μm thick).
Alternatively, or additionally, it could be that the stringent structural requirements for actin in these assemblies mean that even actin with very small ~2 kDa tags may be excluded from ring and cable structures 28 . We find that with a data acquisition time as short as 3 s, we can obtain data with a resolution of approximately 50 nm ( Figure S10).
LIVE-PAINT enables long tracking times in vivo
In the data presented so far, we have used LIVE-PAINT to generate super-resolution images of proteins which do not move significantly during the period of data acquisition.
The extended imaging lifetime enabled by LIVE-PAINT, however, offers the opportunity to detect and track the motion of diffusing molecules within live cells. Cofilin is an important protein that binds to actin filaments promoting severing 14 . It has so far, however, proven difficult to image due to its function being affected by either N-or Cterminal direct fusion of an FP 14 . We therefore C-terminally tagged cofilin with SYNZIP18, and tracked it using the LIVE-PAINT strategy (diffraction-limited, not superresolution). We were able to observe the diffusion of cofilin during the 100 s of imaging ( Figure 6 and Supplementary Movie 3). We observed a wide range of behaviors ( Figure 6).
The success of the LIVE-PAINT tagging approach in these examples demonstrates the value of the method for visualizing proteins that are refractory to direct fusion to an FP 14 , and also its potential to be developed to track moving proteins. Finally, LIVE-PAINT, especially in a TIRFM (or LSFM) format, enables data to be acquired for much longer than other current methods (such as PALM) that can be used inside live cells. In such methods, the FP is directly fused to the protein of interest, so once a fluorophore is bleached, it is not replaced and from then onwards is dark. By contrast, in LIVE-PAINT, the non-covalently bound FP can be exchanged after bleaching, with a non-bleached FP from the cytoplasm. Acquiring data for longer results in more localizations being detected and consequently higher resolution images being obtained. Unbound FPs in LIVE-PAINT result in background fluorescence, but this effect can be mitigated by reducing the illumination volume in the cell, as we have done using TIRFM in this work, or by using other strategies, such as light-sheet fluorescence microscopy (LSFM).
We have demonstrated the power of LIVE-PAINT in S. cerevisiae by using it to image Cdc12 and hence to study septum formation. Furthermore, we have used it to image actin and cofilin, two important proteins that are intractable to direct fusion. Finally, we showed that this approach is fundamentally compatible with tracking the movement of individual proteins inside live cells.
We expect that it will be straightforward to extend LIVE-PAINT to other organisms and cell types. In our work we found that two of the three peptide-pairs that we tested were suitable for LIVE-PAINT. Many more potentially compatible interaction pairs exist, and may be better suited for particular applications. In future work, we will investigate how the optimal labeling requirements differ for different cellular proteins and how best to label and image multiple proteins simultaneously.
Molecular biology
All cloning was performed in Escherichia coli strain TOP10. Peptide tags were cloned into pFA6a-KANMX6 by amplifying the plasmid backbone and inserting gBlocks (Integrated DNA Technologies) using NEBuilder ® HiFi DNA Assembly Master Mix (New England Biolabs). Except where otherwise noted, the protein sequence used to link different protein components was GGSGSGLQ. The two residue linker, GS, was used between the mNG proteins to create the three mNG array. The 3xmNG construct itself was joined to SYNZIP17 using our standard GGSGSGLQ linker.
Peptide-binding proteins fused to FPs were cloned into the pFA6a-HIS3MX6 and tagged actin constructs were cloned into the pCu415CUP1 vector (CEN6/ARS4 origin of replication) using the methods referenced above.
The linker used to fuse actin to SYNZIP18 or MEEVF was GGSGSG.
Primer sequences used in this study are listed in Tables S3, S4, and S5.
Yeast strain construction
Except where otherwise noted, standard methods for genetically modifying yeast and preparing growth media were used 34 . The yeast strains and selection markers used in this study are listed in Table S2.
Yeast strains constructed in this study are all derived from the parent strain BY4741. Transformants were selected by plating first on YPD plates and then replica plating to yeast agar plates including 600 mg/L geneticin (Gibco) and incubating for a further 16 hours.
FP fusions were inserted into the yeast genome at the GAL2 locus by amplifying the desired protein's sequence from a plasmid. The amplification primers also included 45 bp homology arms that match sequences upstream and downstream of the GAL2 gene, and the HIS3 gene.
Transformants were selected by plating on synthetic complete agar plates lacking histidine.
Strain construction was verified by PCR amplification of the modified locus (using primers from Table S3).
Microscopy
For imaging experiments, yeast cells were grown overnight in 500 μL of synthetic complete media. Constructs using the galactose inducible promoter, pGAL1, were all grown with 1% w/v raffinose plus the concentration of galactose desired for a particular experiment. The concentration of galactose used varied between 0% and 2% w/v. One colony was picked into a 500 μL overnight culture to ensure that the OD600 of the cells was between 0.1 and 0.5 by the time of imaging. Two dilutions of the overnight culture, 1:1 and 1:5, were prepared to ensure that one would fall in this OD600 range.
22x22 mm glass coverslips with thickness no. 1 (VWR) were cleaned by a 20 minute exposure in a 2.6 L Zepto plasma laboratory unit (Diener Electronic). Frame-Seal slide chambers (9 × 9 mm 2 , Biorad, Hercules, CA) were then secured to a coverslip. The surface was prepared for the attachment of yeast cells by coating the surface with 2 mg/mL concanavalin A (Sigma-Aldrich), which was dissolved in PBS pH 7.4, using approximately 100 μL per well. After leaving the concanavalin A on the surface of the slide for 30 seconds, it was removed using a pipette tip and by tilting the slide to ensure all liquid was removed. Then, 150 μL of prepared yeast culture was pipetted onto the slide. The yeast culture was left to sit on the slide for approximately 5 minutes. The cells were then aspirated from the slide, the surface washed with milliQ water three times, and then 150 μL fresh milliQ water was then added to the slide before imaging.
Single-molecule imaging was performed using a custom-built TIRF microscope. For photobleach-and-recovery experiments we first imaged the samples at very high laser power density (26.6 W/cm 2 ). After 1,000 frames (50 s) of imaging, this power density was dropped to 3.1 W/cm 2 . The sample was then imaged for another 1,000 frames (50 s).
Microscope settings/Imaging parameters
Images were analyzed using Fiji and single localizations were processed using the Peak Fit function of the Fiji GDSC SMLM plugin, using a signal strength threshold of 30, a minimum photon threshold of 100, and a precision threshold of 20 nm. The precision threshold was sometimes changed to 30 nm, 40 nm, or 1000 nm, in order to obtain the distribution of precision values for all obtained localization events. Figure S12 shows a matrix of precision and minimum photons per localization thresholds applied to one stack of images, which helped select the cutoffs used.
Image resolution calculation
Image resolution was calculated by first performing cluster analysis using DBSCAN 35 in Python 2.7 to identify localizations in the yeast bud neck. Then, resolution was measured using the equation R_eff = √((d̄_NN)² + (σ̄*)²), where R_eff is the effective image resolution, d̄_NN is the mean nearest neighbor distance between localizations in the septum, and σ̄* is the average localization precision 36 .
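A small sketch of this calculation follows, using the formula above; the coordinate and precision arrays are synthetic stand-ins, and scipy's cKDTree is used for the nearest-neighbor search.

```python
# Sketch of the effective-resolution calculation, using the formula above.
# The coordinate and precision arrays are synthetic stand-ins for septum
# localization data; scipy's cKDTree performs the nearest-neighbor search.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
xy = rng.uniform(0, 400, size=(500, 2))   # localization coordinates (nm)
precision = rng.normal(15, 3, size=500)   # per-event localization precision (nm)

tree = cKDTree(xy)
# k=2 because the nearest neighbor of each point is the point itself.
dists, _ = tree.query(xy, k=2)
mean_nn = dists[:, 1].mean()

r_eff = np.sqrt(mean_nn ** 2 + precision.mean() ** 2)
print(f"mean NN distance = {mean_nn:.1f} nm, R_eff = {r_eff:.1f} nm")
```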
Cluster analysis for identifying yeast septum
For the images shown in Figure 2, septum localizations were identified from total cellular localization events using DBSCAN 35 in Python 2.7, and the percentage of total cellular localizations in the septum was determined. In order to prevent misidentification of septa in background localizations, DBSCAN was applied to localizations within a 1 μm radius of the center of the cell. DBSCAN parameters were maintained for images of cells at the same galactose concentration: 0% galactose, ε = 2, N = 25; 0.005% galactose, ε = 2, N = 50; 0.02% galactose, ε = 1.75, N = 50; 0.1% galactose, ε = 2.8, N = 75.
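A hedged sketch of this clustering step is given below using scikit-learn's DBSCAN in place of the original Python 2.7 implementation; eps and min_samples play the roles of the (ε, N) pairs quoted above, and the synthetic data use nm-scale coordinates, so eps is widened accordingly.

```python
# Sketch of the septum-identification step, using scikit-learn's DBSCAN in
# place of the original Python 2.7 implementation. eps and min_samples play
# the roles of the (epsilon, N) pairs quoted above; the synthetic data below
# use nm-scale coordinates, so eps is widened accordingly.
import numpy as np
from sklearn.cluster import DBSCAN

def septum_fraction(xy, center, radius, eps, min_samples):
    """Cluster localizations within `radius` of the cell center and return
    the fraction of all cellular localizations assigned to a cluster."""
    near = xy[np.linalg.norm(xy - center, axis=1) < radius]
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(near)
    return (labels != -1).sum() / len(xy)

# Synthetic example: a dense septum band plus uniform background noise.
rng = np.random.default_rng(1)
septum = rng.normal([500, 500], [200, 10], size=(400, 2))
background = rng.uniform(0, 1000, size=(300, 2))
xy = np.vstack([septum, background])
frac = septum_fraction(xy, np.array([500.0, 500.0]), radius=600, eps=30, min_samples=15)
print(f"fraction of localizations in septum: {frac:.2f}")
```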
Quantifying septum width
Budding yeast with septa were identified from z-projections and following thresholding, ImageJ's Analyze Particles tool was used to determine: the maximum Feret's diameter of the cell, the starting coordinates of the Feret's diameter, the angle between the Feret's diameter and the x-axis, and the coordinates of the cellular center of mass. The end coordinates of the Feret's diameter were calculated from the Feret's diameter data.
In the same cells, septum localizations were identified from total cellular localization events as described in above cluster analysis within a radius of the Feret's diameter/5 from the cell's center of mass and using parameters of ε = 2, N = 100.
The distance between the center of the septum points and the coordinates of both the start and end of the Feret's diameter was determined and the larger of the two was taken to be the mother cell diameter and the smaller, the daughter cell diameter.
To find the septum width, we doubled the mean absolute perpendicular distance between all the septum localizations and the line bisecting the angle formed at the septum center by the directions to the mother and daughter cell diameters.
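A geometric sketch of this width estimate is shown below, assuming the bisecting line is supplied as a point and a unit direction vector; the localization array is synthetic.

```python
# Geometric sketch of the width estimate: double the mean absolute
# perpendicular distance of septum localizations from the bisecting line.
# The line is given here (as an assumption) by a point and a unit direction;
# the localization array is synthetic.
import numpy as np

def septum_width(points, line_point, line_dir):
    d = line_dir / np.linalg.norm(line_dir)
    rel = points - line_point
    # Perpendicular distance is the magnitude of the 2D cross product with d.
    perp = np.abs(rel[:, 0] * d[1] - rel[:, 1] * d[0])
    return 2 * perp.mean()

rng = np.random.default_rng(2)
pts = rng.normal([0, 0], [300, 100], size=(200, 2))  # synthetic septum band
width = septum_width(pts, np.zeros(2), np.array([1.0, 0.0]))
print(f"estimated septum width: {width:.0f} nm")
```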
Plate reader measurements
Plate reader measurements were carried out on a POLARstar Omega microplate reader (BMG LABTECH). To observe the galactose dependent induction of mNG under the pGAL1 promoter in a Δgal2 background, budding yeast cells were grown overnight in 500 μL of synthetic complete media plus 1% w/v raffinose and galactose concentrations ranging from 0 to 0.1% w/v. The next morning, 200 μL of this culture was added to individual wells in a 96 well clear bottom plate (Greiner bio-one, item 655096). Cellular fluorescence was excited using the 485 nm excitation filter and measured using the 520 nm emission filter.
The optical density of the cells was measured using the absorbance setting at 600 nm.
The fluorescence readings were then normalized to the number of cells by dividing the measured cellular fluorescence by the optical density. | 2020-02-06T09:08:19.647Z | 2020-02-04T00:00:00.000 | {
"year": 2020,
"sha1": "19569533404902d1bc6e6531977c93651081136f",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s42003-020-01188-6.pdf",
"oa_status": "GREEN",
"pdf_src": "BioRxiv",
"pdf_hash": "20ec3fd3b2d5947fd4fa7650875e413c38ece799",
"s2fieldsofstudy": [
"Biology",
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Biology",
"Chemistry"
]
} |
117209064 | pes2o/s2orc | v3-fos-license | Semi periodic maps on complex manifolds
In this letter we prove the following theorem: \emph{Let $F$ be a holomorphic mapping of $T_{\Omega}$ to a complex manifold $X$ such that for every compact subset $K\subset \Omega$ the mapping $F$ is uniformly continuous on $T_{K}$ and $F(T_{K})$ is a relatively compact subset of $X$. If the restriction of $F(z)$ to some hyperplane $\mathbb{R}^{m}+iy'$ is semi periodic, then $F(z)$ is a semi periodic mapping of $T_{\Omega}$ to $X$.}
In mathematics, an almost (semi) periodic function is, loosely speaking, a function of a real number that is periodic to within any desired level of accuracy, given suitably long, well-distributed "almost-periods". The concept was first studied by Harald Bohr and later generalized by Vyacheslav Stepanov, Hermann Weyl and Abram Samoilovitch Besicovitch, amongst others. There is also a notion of almost periodic functions on locally compact abelian groups, first studied by John von Neumann.
Almost periodicity is a property of dynamical systems that appear to retrace their paths through phase space, but not exactly. An example would be a planetary system, with planets in orbits moving with periods that are not commensurable (i.e., with a period vector that is not proportional to a vector of integers). A theorem of Kronecker from diophantine approximation can be used to show that any particular configuration that occurs once will recur to within any specified accuracy: if we wait long enough we can observe the planets all return to within a second of arc to the positions they once were in. There are several different inequivalent definitions of almost periodic functions. An almost periodic function is a complex-valued function of a real variable that has the properties expected of a function on a phase space describing the time evolution of such a system. There have in fact been a number of definitions given, beginning with that of Harald Bohr. His interest was initially in finite Dirichlet series. In fact, by truncating the series for the Riemann zeta function ζ(s) to make it finite, one gets finite sums of terms of the type n^{−s} = e^{−(σ+it) log n}, with s written as σ + it, the sum of its real part σ and imaginary part it. Fixing σ, so restricting attention to a single vertical line in the complex plane, we can see this also as n^{−σ} e^{−it log n}. Taking a finite sum of such terms avoids difficulties of analytic continuation to the region σ < 1. Here the 'frequencies' log n will not all be commensurable (they are as linearly independent over the rational numbers as the integers n are multiplicatively independent, which comes down to their prime factorizations). With this initial motivation to consider types of trigonometric polynomial with independent frequencies, mathematical analysis was applied to discuss the closure of this set of basic functions, in various norms.
The theory was developed using other norms by Besicovitch, Stepanov, Weyl, von Neumann, Turing, Bochner and others in the 1920s and 1930s.
DEFINITIONS AND SOME THEOREMS
A continuous mapping F of a tube to a metric space X is semi periodic if the family {F(z + t)}, t ∈ R m , of shifts along R m is a relatively compact set with respect to the topology of uniform convergence on T K . Further, let X be a manifold, and let F be a holomorphic mapping of a tube with convex open base Ω ⊂ R m to X. We will say that F is semi periodic if the restriction of F to each tube T K with compact base K ⊂ Ω is semi periodic.
For X = C we obtain the well-known class of holomorphic semi periodic functions; for X = C q the corresponding class was studied in [1][2][3]; for X = CP 1 we get the class of meromorphic semi periodic functions, which was studied in [4]; the class of holomorphic semi periodic curves, corresponding to the case X = CP q , was studied in [5].
Uniform or Bohr or Bochner almost periodic functions
The following theorem is due to Bohr [7]. Bohr defined the uniformly almost-periodic functions as the closure of the trigonometric polynomials with respect to the uniform norm (on continuous functions f on R). He proved that this definition was equivalent to the existence of a relatively-dense set of ε almost-periods, for all ε > 0: that is, translations T(ε) = T of the variable t making |f(t + T) − f(t)| < ε for all t. An alternative definition due to Bochner (1926) is equivalent to that of Bohr and is relatively simple to state: a function f is almost periodic if every sequence (f(t + t_n)) of translations of f has a subsequence that converges uniformly for t in (−∞, ∞).
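To make Bohr's definition concrete, here is a small numerical sketch (illustrative only, not from the sources) that searches for ε almost-periods of the classic example f(t) = sin t + sin(√2 t), which is almost periodic but has no exact period.

```python
# Numerical sketch (illustrative, not from the sources): find epsilon
# almost-periods of f(t) = sin(t) + sin(sqrt(2) t), the textbook example of
# a Bohr almost periodic function that has no exact period.
import numpy as np

def shift_bound(T):
    """Upper bound on sup_t |f(t + T) - f(t)|: each sine of frequency w
    contributes exactly 2|sin(w T / 2)| to the sup norm, so their sum bounds
    the shift error by the triangle inequality."""
    return 2 * np.abs(np.sin(T / 2)) + 2 * np.abs(np.sin(np.sqrt(2) * T / 2))

eps = 0.25
Ts = np.arange(0.01, 2000.0, 0.01)
hits = Ts[shift_bound(Ts) < eps]
print(hits[:3])  # nonempty, and the hits recur with bounded gaps
```

Any T with shift_bound(T) < ε is certainly an ε almost-period, and the output illustrates the relative density that Bohr's criterion demands.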
Warning: there are nonzero functions with $\lVert f \rVert_{B,p} = 0$, such as any bounded function of compact support, so to get a Banach space one has to quotient out by these functions.
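For reference, the Besicovitch seminorm implicit in the statement above is the standard averaged one (stated here for completeness):
\[
  \lVert f \rVert_{B,p} \;=\; \Big( \limsup_{X\to\infty} \frac{1}{2X} \int_{-X}^{X} \lvert f(t)\rvert^{p}\, dt \Big)^{1/p};
\]
any bounded $f$ vanishing outside a compact set has $\lVert f\rVert_{B,p} = 0$, since its integral stays fixed while the averaging window grows.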
The Besicovitch almost periodic functions in $B^2$ have an expansion (not necessarily convergent) as
\[
  f(t) \sim \sum_{n} a_n e^{i\lambda_n t}. \tag{10}
\]
Conversely, every such series is the expansion of some Besicovitch almost periodic function (which is not unique). The space $B^p$ of Besicovitch almost periodic functions contains the space $W^p$ of Weyl [9] almost periodic functions. If one quotients out a subspace of "null" functions, it can be identified with the space of $L^p$ functions on the Bohr compactification of the reals.

Almost periodic functions on a locally compact abelian group

With these theoretical developments and the advent of abstract methods (the Peter-Weyl theorem, Pontryagin duality and Banach algebras) a general theory became possible. The general idea of almost periodicity in relation to a locally compact abelian group $G$ becomes that of a function $F$ in $L^\infty(G)$, such that its translates by $G$ form a relatively compact set. Equivalently, the space of almost periodic functions is the norm closure of the finite linear combinations of characters of $G$. If $G$ is compact, the almost periodic functions are the same as the continuous functions.
The Bohr compactification of $G$ is the compact abelian group of all possibly discontinuous characters of the dual group of $G$, and is a compact group containing $G$ as a dense subgroup. The space of uniform almost periodic functions on $G$ can be identified with the space of all continuous functions on the Bohr compactification of $G$. More generally, the Bohr compactification can be defined for any topological group $G$, and the spaces of continuous or $L^p$ functions on the Bohr compactification can be considered as almost periodic functions on $G$. For locally compact connected groups $G$, the map from $G$ to its Bohr compactification is injective if and only if $G$ is a central extension of a compact group, or equivalently, the product of a compact group and a finite-dimensional vector space.
Bohr's theorem states that if a bounded holomorphic function on a strip is semi periodic on some straight line in this strip, then this function is semi periodic on the whole strip. This theorem was extended to holomorphic functions on a tube in [8]; besides the usual uniform metric, various integral metrics were studied there. The direct generalization of Bohr's theorem to mappings into a complex manifold is not valid.
THEOREM
Theorem: Let $F$ be a holomorphic mapping of $T_\Omega$ to a complex manifold $X$ such that, for every compact subset $K\subset\Omega$, the mapping $F$ is uniformly continuous on $T_K$ and $F(T_K)$ is a relatively compact subset of $X$. If the restriction of $F(z)$ to some hyperplane $\mathbb{R}^m + iy'$ is semi periodic, then $F(z)$ is a semi periodic mapping of $T_\Omega$ to $X$.

There is also a corollary:

Corollary: Let $F$ be a holomorphic mapping from $T_\Omega$ to a compact complex manifold $X$ such that $F$ is uniformly continuous on $T_K$ for every compact set $K\subset\Omega$. If the restriction of $F(z)$ to some hyperplane $\mathbb{R}^m + iy'$ is semi periodic, then $F(z)$ is a semi periodic mapping of $T_\Omega$ to $X$.

Proof of the theorem: Take an arbitrary sequence $\{t_n\}\subset\mathbb{R}^m$. Since the mapping $F(z)$ is uniformly continuous, the family $\{F(z+t_n)\}$ is equicontinuous on each compact set $S\subset T_\Omega$. Further, it follows from the condition of the Theorem that the union of all the images of $S$ under mappings of this family is contained in a compact subset of $X$. Therefore, passing to a subsequence if necessary, we may assume that the sequence $\{F(z+t_n)\}$ converges to a holomorphic mapping $G(z)$ uniformly on every compact subset of $T_\Omega$. It is easy to see that the mapping $G(z)$ is bounded and uniformly continuous on every tube $T_K$ with compact base $K\subset\Omega$. Let us prove that this convergence is uniform on every $T_K$. Assume the contrary. Then, for some sequence $z_n = x_n + iy_n \in T_{K'}$, where $K'$ is some compact subset of $\Omega$, we get
\[
  \rho\big(F(z_n + t_n),\, G(z_n)\big) \ge \varepsilon_0 > 0, \tag{11}
\]
where $\rho$ denotes the distance function on $X$.
Replacing sequences by subsequences if necessary, we may assume that the mappings $G(x_n + z)$ converge to a holomorphic mapping $H(z)$, and the mappings $F(z + x_n + t_n)$ converge to a holomorphic mapping $\tilde H(z)$, uniformly on every compact subset of $T_\Omega$. We may also assume that $y_n \to y_0 \in K'$. Using (11) we get
\[
  \rho\big(\tilde H(iy_0),\, H(iy_0)\big) \ge \varepsilon_0. \tag{12}
\]
Since the mapping $F(x + iy')$ of $\mathbb{R}^m$ to $X$ is semi periodic, we may assume that a subsequence of the mappings $F(x + t_n + iy')$ converges to $G(x + iy')$ uniformly in $x\in\mathbb{R}^m$. Therefore the sequences of mappings $F(x + x_n + t_n + iy')$ and $G(x + x_n + iy')$ have the same limit, i.e., $\tilde H(x + iy') = H(x + iy')$ for all $x\in\mathbb{R}^m$. Since $\tilde H(z)$ and $H(z)$ are holomorphic mappings, we get $\tilde H(x + iy) \equiv H(x + iy)$ on $T_\Omega$. This contradiction proves the Theorem. | 2010-11-26T08:25:29.000Z | 2010-11-26T00:00:00.000 | {
"year": 2010,
"sha1": "6b1ffb2ed62bbfc3024fe67d894ec21c18ca70e4",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "6b1ffb2ed62bbfc3024fe67d894ec21c18ca70e4",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
245727981 | pes2o/s2orc | v3-fos-license | COMPARISON OF MICRODEBRIDER-ASSISTED TURBINOPLASTY VERSUS ENDOSCOPIC PARTIAL TURBINECTOMY IN CASES OF INFERIOR TURBINATE HYPERTROPHY IN ALLERGIC RHINITIS PATIENTS
Objective: To compare microdebrider-assisted turbinoplasty versus endoscopic partial turbinectomy in cases of inferior turbinate hypertrophy in allergic rhinitis patients, in terms of relief/improvement of nasal obstruction, post operative bleeding, crusting and synechie formation. Study Design: Quasi-experimental study. Place and Duration of Study: Combined Military Hospital Mardan and Combined Military Hospital Malir, from Jan 2019 to Jan 2020. Methodology: A total of 90 patients of allergic rhinitis with severe nasal obstruction due to bilateral inferior turbinate hypertrophy fulfilling the inclusion/exclusion criteria were selected. Cases were randomly divided into two groups of 45 each. Group A cases underwent microdebrider-assisted turbinoplasty and Group B cases underwent partial turbinectomy via an endoscopic approach. They were compared in terms of post op bleeding, relief of nasal obstruction, post op crusting and synechie/adhesions. All the data were entered on SPSS-17 and analyzed. Results: Out of 90 cases, there were 43 (47.8%) females and 47 (52.2%) males, with an age range from 15-65 years and a mean age of 37.68 ± 11.56 years. There was only 1 case of post op bleeding after microdebrider-assisted turbinoplasty requiring nasal packing, in contrast to 6 cases of post op bleeding after endoscopic partial turbinectomy. On the one month post op visit, there was no case of nasal crusting in the turbinoplasty group, in contrast to 7 cases of mild and 1 of moderate crusting and 3 synechie/adhesions in the endoscopic partial turbinectomy group. Conclusion: Microdebrider-assisted turbinoplasty is associated with less post operative bleeding and synechie formation as compared to endoscopic turbinectomy.
INTRODUCTION
Allergic rhinitis has been classified into perennial and seasonal types. Perennial allergic rhinitis persists throughout the year, in contrast to the seasonal variety in which symptoms occur during specific seasons. 1 However, all patients do not fit into this classification scheme. Therefore, allergic rhinitis is now also classified according to duration of symptoms (intermittent or persistent) and severity (mild, moderate or severe). 2 Rhinitis is called intermittent rhinitis when the duration of the episode of inflammation is less than 6 weeks, and is considered persistent when symptoms continue throughout the year. The prevalence of allergic rhinitis has been reported as 24.62% in Pakistan. 3 However, a much higher prevalence, as high as 40%, has been reported in the USA and Europe. 4 Symptoms are classified as mild when patients are generally able to sleep normally and perform normal activities of daily life, including work or school. Symptoms are categorized as moderate to severe if they significantly affect sleep and activities of daily routine. Severe allergic rhinitis has been associated with change in quality of life and decreased performance at work. 5 A thorough history and physical examination constitute the basis of diagnosing allergic rhinitis. Nasal obstruction in allergic rhinitis patients is a source of constant discomfort and worry for the individuals. Nasal obstruction is due to inferior turbinate hypertrophy in the majority of such cases and is persistent. 6 The inferior turbinate is targeted with medical and surgical options to enlarge and enhance the space for the airway. 7 Alpha agonists, including oxymetazoline and xylometazoline, are prescribed to decongest the nasal mucosa to relieve nasal obstruction, but they can only be used for a short duration because of rebound congestion and further nasal obstruction after continuous use. 8 Topical corticosteroids are also used with varying results; some cases show slight improvement and others are refractory to medical treatment. Surgical resection offers the only permanent solution. Surgical procedures for the inferior turbinates include conventional partial and total turbinectomy, submucosal resection, submucosal diathermy, endoscopic turbinectomy, microdebrider-assisted turbinoplasty, radiofrequency-assisted turbinoplasty and laser-assisted turbinectomy. 9 Previously (before the advent of endoscopic technology), turbinectomy was performed by otolaryngologists as more of a blind procedure with turbinectomy scissors, the major part of the turbinate inside the nasal cavity being invisible to the naked eye. This caused more mucosal damage and resulted in frequent peroperative and post operative nasal bleeding. In addition, nasal packing had to be done postoperatively, which not only caused discomfort, headache, restlessness, insomnia and frequent post operative sinusitis, but also aggravated comorbid conditions in high risk patients, i.e., hypertensives/heart patients. 10 We carried out this study to compare these two procedures of inferior turbinate resection/volume reduction, i.e., endoscopic partial turbinectomy versus microdebrider-assisted turbinoplasty. The study will help otorhinolaryngologists opt for the better procedure, with less post operative discomfort and morbidity for the patients.
METHODOLOGY
The quasi-experimental study was carried out at Combined Military Hospital (CMH) Mardan and CMH Malir, from January 2019 to January 2020. A total of 90 patients of allergic rhinitis with nasal obstruction due to bilateral inferior turbinate hypertrophy fulfilling the inclusion/exclusion criteria were selected from the outpatient departments of the hospitals. Epitools software was used to calculate the sample size. Using the Epitools sample size calculator with an estimated prevalence of 0.06 (6%), 11 a desired absolute precision of 0.05, and a 95% confidence interval, the calculated sample size was 87; we took 90 cases as the sample size for the study and divided them into two groups of 45 each. The sampling technique was non-probability convenience sampling. Patients of allergic rhinitis of any gender, older than 15 years of age, with severe nasal obstruction due to bilateral inferior turbinate hypertrophy were included in the study. Patients with nasal obstruction due to deviated nasal septum, nasal adhesions, synechie, or septal perforations were excluded. Similarly, patients who had undergone surgery for turbinate hypertrophy previously were also excluded, along with patients with bleeding disorders or liver or renal failure.
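As a quick arithmetic check, the textbook sample-size formula for estimating a proportion reproduces the reported figure of 87 from the stated inputs; this sketch assumes Epitools implements this standard formula:

```python
import math

z = 1.959964  # two-sided normal quantile for 95% confidence
p = 0.06      # estimated prevalence (6%)
d = 0.05      # desired absolute precision

# n = z^2 * p * (1 - p) / d^2, rounded up to the next whole subject
n = math.ceil(z**2 * p * (1 - p) / d**2)
print(n)  # 87
```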
Informed consent was taken from all the cases. The cases were randomly divided into two groups of 45 each. Group A cases underwent microdebrider-assisted turbinoplasty and Group B cases underwent partial turbinectomy via an endoscopic approach. In group A patients, 2% lignocaine with adrenaline was infiltrated at the anterior end of the inferior turbinate. A small incision was made with a number 15 blade. The microdebrider unit was set at 3000 revolutions per minute in oscillating mode and advanced antero-posteriorly in the inferior turbinate, staying submucosally. Bipolar cautery was used to control bleeding peroperatively, if any. In group B cases, topical anesthesia was applied with cotton pledgets soaked in 4% lignocaine and 0.1% xylometazoline. Partial turbinectomy was done under vision, working with a zero degree endoscope and fine turbinectomy scissors. The medial hypertrophied part of the inferior turbinate was resected and hemostasis was achieved carefully with bipolar diathermy. No nasal packing was done after either procedure. Postoperatively, all cases were given the oral antibiotic co-amoxiclav, oral paracetamol as a painkiller, xylometazoline hydrochloride nasal spray, and liquid paraffin nasal drops for one week. No nasal splints were placed postoperatively in either group. The 90 cases were distributed equally between the two hospitals, i.e., the 45 cases of group A were operated at CMH Malir and the 45 cases of group B at CMH Mardan.
Postoperatively, both groups were compared in terms of efficacy and complications. All the cases were checked postoperatively: on the evening of the operation day, on the 1st postoperative day, at a fortnight, and at one month post op. Post operative bleeding was considered present if bleeding sufficient to require nasal packing occurred, and absent if no nasal packing was required. Patients were kept admitted for 24 hours and then discharged. They were advised to report immediately if nasal bleeding occurred.
Efficacy was assessed by relief of nasal obstruction. Relief of nasal obstruction was categorized into marked, moderate, mild, and no improvement/relief. The patient was asked to categorize the relief in nasal obstruction from grade 0 (no relief), grade 1 (mild relief), grade 2 (moderate relief) to grade 3 (marked relief). Efficacy of the procedures was checked on the 02 weeks post op visit and the one month post op visit. Crusting was categorized into mild, moderate, and marked, depending on the post operative nasal crusts cleared from the nose during cleansing by the patient. The patient was asked to grade the amount of crusting from none (grade 0), mild (grade 1), moderate (grade 2) to marked (grade 3). Crusting was also assessed on the 02 weeks post op visit and the one month post op visit. Nasal adhesions/synechie are bands of scar tissue between the nasal septum and the lateral nasal wall, formed after nasal surgeries. 12 Postoperative nasal examination via anterior rhinoscopy was done to assess the cases with synechie/adhesions on the one month post op visit, because it takes >2 weeks for adhesions to form.
Ethical committee permission was taken for the subject study (ERB no. 05-2/19). All the data, including age, gender, efficacy, and post operative complications, i.e., pain, bleeding, crusting and synechie/adhesions, were analyzed with the help of Statistical Package for Social Sciences (SPSS) 17. Chi-square was used to analyze qualitative variables and the independent t-test was used for quantitative variables. A p-value less than 0.05 was taken as significant.
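As an illustration of the Chi-square analysis described above, the following minimal sketch tests the 2x2 table of post-operative bleeding reported in the Results (1/45 in group A versus 6/45 in group B); the use of scipy here is an assumption for illustration, not a claim about the authors' SPSS workflow:

```python
from scipy.stats import chi2_contingency

# rows: group A (turbinoplasty), group B (endoscopic partial turbinectomy)
# columns: post-op bleeding requiring packing [yes, no]
table = [[1, 44],
         [6, 39]]

# chi2_contingency applies Yates' continuity correction by default for 2x2 tables
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p_value:.3f}, dof = {dof}")
```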
RESULTS
A total of 90 cases of allergic rhinitis with inferior turbinate hypertrophy, refractory to medical treatment, were operated. There were 43 (47.8%) females and 47 (52.2%) males with an age range from 15-65 years. The mean age was 37.68 ± 11.56 years. The cases were randomly divided into two groups of 45 each. Group A cases underwent turbinoplasty with microdebrider and group B cases underwent endoscopic partial turbinectomy. Both groups were comparable in terms of age and gender, as shown in Table-I. Both groups were compared in terms of post operative bleeding, post operative crusting, and improvement/relief of nasal obstruction at two weeks and one month. There was only 1 (2.2%) case of post op bleeding after microdebrider-assisted turbinoplasty requiring nasal packing. In contrast, there were 6 (13.3%) cases of post op bleeding after endoscopic partial turbinectomy requiring nasal packing. There were only 09 (20%) cases of mild nasal crusting after turbinoplasty on the 02 weeks post op visit, in contrast to 15 (33.3%) cases of mild, 18 (40%) of moderate, and 11 (24.4%) of marked crusting after endoscopic partial turbinectomy on the 02 weeks post op visit. There were 02 (4.4%) cases of moderate improvement in nasal obstruction and 43 (95.6%) of marked improvement after turbinoplasty. In the endoscopic partial turbinectomy group, there were 3 (6.7%) cases of mild improvement, 8 (17.8%) of moderate improvement, and 34 (75.6%) of marked improvement/relief in nasal obstruction.
On the one month post op visit, there was no case of nasal crusting in the turbinoplasty group, in contrast to 7 (15.6%) cases of mild and 1 (2.2%) of moderate crusting in the endoscopic partial turbinectomy group. Similarly, 44 (97.8%) patients reported marked improvement and 1 (2.2%) moderate improvement/relief of nasal obstruction after turbinoplasty, as compared to 4 (8.9%) cases of mild improvement, 3 (6.7%) of moderate, and 38 (84.4%) of marked improvement after endoscopic turbinectomy, as shown in Table-II. Post operative synechie formation, which is a known complication of nasal surgery, was also compared between the two procedures. There were 3 (6.7%) cases of post op synechie after endoscopic partial turbinectomy, whereas there were none after turbinoplasty.
DISCUSSION
Inferior turbinates are important anatomical structures, playing a significant role in the nasal functions of warming, filtering, and humidification of inhaled air. 13 Inferior turbinate hypertrophy causes persistent nasal obstruction, which is quite discomforting and worrisome for patients. Chronic nasal obstruction affects quality of life. 14 Conventional turbinectomy also had significantly more chances of post operative crusting and synechie/adhesion formation because of inadvertent mucosal damage. Newer procedures, including endoscopic partial turbinectomy and microdebrider-assisted turbinoplasty, are gaining popularity because of less mucosal damage and under-vision resection. These benefits of minimal mucosal damage and under-vision resection offer safety to surrounding tissues in the nose and less post operative bleeding and crusting. 15,16 Surgical reduction in the size of the turbinates via the endoscopic approach and the microdebrider are modern turbinate resection techniques. Older conventional methods caused extensive mucosal loss and damage to surrounding tissues. Endoscopic turbinectomy provides under-vision, controlled surgical resection of hypertrophied turbinate tissue. The microdebrider offers resection of intraturbinate hypertrophied tissue with a minimal scar at the proximal end of the turbinate and with preservation of mucosa, thus reducing the side-effects associated with mucosal damage. 13 In our study, we found the turbinoplasty technique better in terms of post operative complications, i.e., crusting, bleeding, and synechie formation, with both procedures equally effective in relieving nasal obstruction.
Joniau et al compared powered turbinoplasty with submucosal diathermy of the inferior turbinates. They found powered turbinoplasty significantly better than submucosal diathermy in terms of long term results, i.e., improvement in nasal obstruction and less post operative crusting. 17 Kassab et al compared turbinoplasty with microdebrider versus diode laser and declared both techniques equally safe, reliable, and successful, with minimal post-operative complications. 11 Thimmaiah et al compared turbinoplasty with turbinectomy in inferior turbinate hypertrophy; they showed fewer post operative complications after turbinoplasty, i.e., bleeding, crusting, synechie, and headache, but in contrast showed better long term relief of nasal obstruction after total turbinectomy as compared to turbinoplasty. 18 Bozan et al compared turbinoplasty with outfracture and bipolar cautery and found turbinoplasty more effective in reducing inferior turbinate volume. 19 Turbinoplasty has the excellent benefit of preserving turbinate mucosa, thus greatly reducing post operative granulation formation on damaged/denuded mucosa and tissues, which is the basis of post operative crusting and synechie/adhesion formation. Turbinoplasty also offers economy of time because of the rapid removal of submucosal tissues/bone, as compared to endoscopic partial turbinectomy, which is comparatively time consuming. 18,19

CONCLUSION

The turbinoplasty technique was found to be better than endoscopic partial turbinectomy in terms of post operative complications, i.e., crusting, bleeding, and synechie formation, with both procedures equally effective in relieving nasal obstruction. | 2022-01-06T16:08:51.834Z | 2021-12-31T00:00:00.000 | {
"year": 2021,
"sha1": "064ea4947af4c3082068d2ffcb81651b4ddf6eb1",
"oa_license": "CCBYNC",
"oa_url": "https://pafmj.org/index.php/PAFMJ/article/download/4121/3766",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "bee86aa3b23c853087f4121b4a198c4adb4d8f02",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
232343169 | pes2o/s2orc | v3-fos-license | Numerical simulation of formation damage by drilling fluid in low permeability sandstone reservoirs
Understanding the formation damage surrounding the well during the drilling operation is the key to predicting the damage degree and protecting the formation in oil/gas reservoirs. Based on core drainage results, we obtained an empirical relationship between the invasion volume of drilling fluid and the permeability reduction of the formation. Furthermore, the equation is incorporated into a commercial reservoir simulator to characterize the behaviors of the drilling fluid invasion process. The results show that, although the invasion depth in low permeability reservoirs is short, with a range of 1.7-2.5 m, the effect on recovery factor is significant due to the narrow seepage area in the near-fracture region. When considering the formation damage, the pressure in the near-fracture damage region drops sharply, leading to a three-stage shape in the pressure distribution curve. In addition, we found that a high viscosity and low density oil-based slurry and a shorter soaking period are conducive to decreasing the formation damage during the drilling operation. This work reveals the fundamental mechanisms of formation damage in low permeability reservoirs, providing a theoretical basis for formulating drilling fluids and optimizing operation parameters.
Introduction
The oil/gas in low permeability reservoirs, as one of the important fossil fuels, holds a great deal of resources worldwide (Xu et al. 2016). Compared with conventional oil/gas reservoirs, low permeability reservoirs have stronger formation heterogeneity with complex pore-throat structures (Al-Yaseri et al. 2015; Kamal et al. 2019; Zhao et al. 2019). As a result, the properties of low permeability reservoirs are very sensitive to external fluids (Bazin et al. 2010; Mahmoud et al. 2016; Liang et al. 2017; Zhang et al. 2020a). For example, the reservoir formation can be severely damaged due to the invasion of drilling fluid during the drilling operation (Coskuner 2004).
Generally, the formation damage by drilling fluid in low permeability reservoirs can be classified into three types: (1) Formation damage by water blockage (Waldmann et al. 2005; Amani et al. 2012; Dias et al. 2015; Ahmad et al. 2017; Lu et al. 2021). The pore sizes in low permeability reservoirs are very small, in the micro- to nanometer range (Su et al. 2018; Lyu et al. 2018; Zhang et al. 2020b). In addition, the minerals of low permeability sandstone reservoirs tend to be strongly water-wet (Ballard and Dawe 1988; Yan et al. 1993). That is, the drilling fluid is easily imbibed into the formation, triggered by the strong capillary pressure, blocking the effective flow pathways of the oil/gas from the deep reservoir into the wellbore.
(2) Formation damage due to the swelling or migration of clay minerals (Clark 1995; Caenn and Chillingar 1996; Frequin et al. 2013; Ramézani et al. 2015). The clay minerals in low permeability reservoirs are usually rich, especially sensitive clay minerals such as montmorillonite and kaolinite (Windarto et al. 2011; Windarto et al. 2012). After the introduction of drilling fluid, the balance between fluid and clay is broken (Doty 1986; Jiao and Sharma 1994; Iscan et al. 2007). Clay swelling (montmorillonite) can reduce the effective pore size and particle migration (kaolinite) can block the pore throat completely, resulting in a permanent reduction of formation permeability. (3) Formation damage by particle blockage from the drilling fluid (Growcock et al. 1994; Arhuoma et al. 2009; Van et al. 2012; Li et al. 2018). During drilling, the introduction of rock debris into the drilling fluid is inevitable. Under the pressure difference between the drilling fluid and the formation, the rock debris and other solid particles can invade the near-well formation, which affects the performance of the subsequent hydraulic fracturing (Whitaker 1996; Civan 1998; Jilani et al. 2002; Ding et al. 2004; Cai et al. 2010). Currently, investigations of the formation damage induced by drilling fluid are mostly from the perspective of laboratory experiments (Parn-Anurak and Engler 2005; Suryanarayana et al. 2007; Jin 2009; Shabani et al. 2019), and numerical simulation studies related to this topic are limited.
In this work, based on the results of core drainage experiments, an empirical equation between the permeability reduction and the invasion volume of drilling fluid is obtained. Then, the equation is incorporated into a commercial simulator to numerically characterize the invasion behaviors of drilling fluid in low permeability reservoirs. We quantify the effects of formation permeability, drilling fluid properties (viscosity, density, etc.), and soaking period on the damage radius, invasion volume, skin factor, and reduction of the recovery factor. Our investigation helps predict formation damage degrees and propose corresponding strategies to mitigate formation damage during the drilling operation.
Gridding and model set up
We built a single-porosity, single-permeability, one-dimensional numerical model based on the commercial simulator IMEX™ from CMG. The model includes matrix and fracture, where the fracture is located at the first grid near the well, as shown in Fig. 1. Considering the good symmetry of the reservoir after hydraulic fracturing and the available computation resources, we only model the invasion phenomenon in the near-well region by using a one-dimensional model. In order to accurately capture the sudden variation of pressure and saturation in the near-well region, local grid refinement in a logarithmic form is applied to discretize the simulation domain, to enhance consistency and stability as well as to capture the transient flow. The grid size at the far end is 50 m, while the grid size near the well is 0.008 m, consistent with the real fracture width in the subsurface. The input data for the numerical model are typical data for a low permeability reservoir. There are two production layers, one with high permeability (50 mD) and another with low permeability (30 mD). The thickness, porosity, and fracture permeability of both layers are 13 m, 10%, and 20 D, respectively. The formation depth is 4000 m; the reservoir pressure is 36.7 MPa; the original water saturation is 0.25; the rock compressibility coefficient is 4 × 10⁻⁹ Pa⁻¹. The pressure of the drilling fluid in the simulation is 45 MPa, with a soaking period of 1 week in the base case. The relative permeability and capillary pressure for the matrix are obtained by analytical equations. The capillary pressure curves account for the difference between imbibition (wetting phase as drainage fluid) and drainage (non-wetting phase as drainage fluid). In addition, an exponential empirical formula is used to characterize the stress-dependent permeability of the low permeability reservoir; the detailed parameters can be found in Zhang et al. 2017.
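The logarithmic local grid refinement described above can be sketched as follows; this minimal illustration generates geometrically growing cell widths from the stated near-well size (0.008 m) to the far-end size (50 m), with the cell count chosen arbitrarily for the example (the paper does not state it):

```python
import numpy as np

n_cells = 30  # hypothetical cell count, not stated in the paper
dx = np.geomspace(0.008, 50.0, n_cells)  # cell widths grow log-uniformly away from the well

# cell-face positions, with the finest (fracture-hosting) cell at the well
faces = np.concatenate(([0.0], np.cumsum(dx)))
print(dx[:3], dx[-1], faces[-1])
```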
Invasion equation from laboratory experiments
Formation damage by drilling fluid is ascribed to the interaction between drilling fluid and clay minerals, which causes clay swelling and clay migration that decrease permeability. In order to quantify the relationship between permeability reduction and invasion volume of drilling fluid, we conducted core invasion laboratory experiments. The core used in the experiments was collected from the target reservoir, as shown in Fig. 2. We injected polymer-based drilling fluid and measured the permeability of the core sample simultaneously; the detailed experimental apparatus, procedures, and cautions can be found in Li et al. (2017). After the fluid injection, the absolute permeability of the sample changed from 13 to 5.4 mD, a permeability reduction of 58.3%. The fitted empirical equation between the invasion volume and permeability is given by Eq. (1), where K0 is the original permeability, K is the current permeability, Ω is the invasion volume, Ω0 is the saturated volume, and the coefficient α is 0.15. Therefore, by using this equation, the matrix permeability of the invaded region can be adjusted in real time according to the invasion volume of drilling fluid, achieving the numerical simulation of formation damage caused by the drilling fluid.
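Since the exact functional form of Eq. (1) is not reproduced above, the following sketch only illustrates the general idea of a per-cell permeability update driven by invasion volume; the exponential law and the helper name are hypothetical stand-ins, calibrated to the reported endpoint (13 mD dropping to 5.4 mD at full saturation) rather than taken from the paper:

```python
import math

K0 = 13.0      # original permeability, mD
K_SAT = 5.4    # permeability at full drilling-fluid saturation, mD (reported endpoint)
C = math.log(K0 / K_SAT)  # decay constant of the assumed law (~0.878)

def damaged_permeability(omega: float, omega_sat: float) -> float:
    """Assumed law K = K0 * exp(-C * omega / omega_sat); an illustrative
    stand-in for the paper's Eq. (1), matching only its two endpoints."""
    frac = min(max(omega / omega_sat, 0.0), 1.0)  # clamp invasion fraction to [0, 1]
    return K0 * math.exp(-C * frac)

print(damaged_permeability(0.0, 1.0), damaged_permeability(1.0, 1.0))  # 13.0 and 5.4
```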
Results and discussion
In this section, we first characterise the invasion behaviors of drilling fluid using the reservoir-scale numerical simulation; we quantify the invasion depth of the drilling fluid and its effects on the pressure distribution and the recovery factor. Furthermore, the effects of reservoir permeability, fluid properties (viscosity and density), and soaking period on the invasion characteristics are analyzed in detail.

Invasion behaviors of drilling fluid

Figure 3 presents the permeability and water saturation distribution in the near-well region after the invasion of drilling fluid. As we can see, due to the low permeability of the formation and imbibition hysteresis, the invasion depth is short and mostly confined to the near-well region. In the base case, the invasion depth of the drilling fluid is only in the range of 1.7-2.5 m after 7 days of soaking. In addition, the total fluid invasion volume is 18.7 m³, including 11.9 m³ in the high permeability layer and 6.8 m³ in the low permeability layer. Because the matrix in the near-well region (within 1.5 m) is fully saturated with drilling fluid, the formation damage there is severe and the permeability reduces to its lowest limit: the permeability of the high permeability layer reduced to 14.6 mD and that of the low permeability layer to 5.2 mD.

Figure 4 presents the pressure distribution in the near-well region after the invasion of drilling fluid. If we ignore the formation damage in the simulation, the pressure curve exhibits two-stage characteristics: a pressure-drop cone within the pressure interference region and a constant pressure in the far end of the reservoir. However, if we consider the formation damage, the pressure curve exhibits three-stage characteristics. Besides the two stages mentioned above, a sharper reduction of pressure in the near-well region can be observed, where the flowing resistance of hydrocarbon is increased by the invasion of drilling fluid. Therefore, the invasion of drilling fluid increases the pressure consumption in the reservoir and decreases the effective drainage pressure during production.
We also evaluated the effect of formation damage on the recovery factor of each layer, as shown in Fig. 5. The recovery factors for the simulated cases are smaller than 25%, showing the low recovery factor typical of low permeability reservoirs. The layer with high permeability produces hydrocarbon faster at the early production stage, while the layer with low permeability produces most of its reserves at the late production stage. Finally, the recovery factor of the two layers is nearly the same after 3500 days of continuous production. After considering the formation damage, the recovery factor is reduced to some extent, regardless of the original permeability of the formation. In particular, the reduction in the recovery factor of each layer at the early production stage can reach up to 3%.
Effects of permeability on fluid invasion
To understand the effect of reservoir permeability on the drilling fluid invasion characteristics, we simulated a total of 11 cases with permeability ranging from 1 to 40 mD at the same porosity of 10%. The specific weight and viscosity of the drilling fluid are maintained at 1.01 and 50 cP, respectively. The reservoir matrix is exposed to the drilling fluid for 10 days, and the simulation covers 10 years of production. As shown in Fig. 6, the skin factor contributed by the stress-dependent permeability is stronger at lower permeability, which is ascribed to the exponential form of the stress-dependent equation. However, according to Eq. (1), the skin factor contributed by formation damage of drilling fluid decreases with the decrease in reservoir permeability. Therefore, the total skin factor decreases first and then increases with the decrease of reservoir permeability. The effects of permeability on damage radius and reduction in recovery factor are shown in Fig. 7. With the increase in permeability, a higher invasion volume of drilling fluid results in a larger damage radius and subsequently a higher reduction of recovery factor. For example, when the permeability is 40 mD, the damage radius can reach up to 400 m, and the reduction of recovery factor is larger than 45%.
Effects of fluid properties on fluid invasion
We evaluated the effects of drilling fluid properties (viscosity, specific weight) on the damage radius, invasion volume, skin factor, and reduction of recovery factor, as shown in Figs. 8 and 9. A higher viscosity of the drilling fluid indicates a larger flow resistance during invasion, which leads to a smaller invasion volume, damage radius, and reduction of recovery factor. A heavier specific weight of the drilling fluid results in a larger bottom-hole pressure, and the drainage pressure between the drilling fluid and the reservoir fluid increases. As a result, the invasion volume, damage radius, and reduction of recovery factor are increased. It should be noted that, when the specific weight of the drilling fluid is larger than 1.1, the increase rate of the skin factor tends to decrease. Overall, the effect of drilling fluid properties on the formation damage is significantly smaller than that of reservoir permeability.

Figure 10 shows the effects of soaking time on the damage radius, invasion volume, skin factor, and reduction of recovery factor. As expected, when the reservoir matrix is exposed to the drilling fluid for a longer period, the invasion volume and damage radius are larger due to the longer imbibition time, which also induces a larger skin factor and reduces the recovery factor of the reservoir. In particular, at early soaking times (within 12 days), the skin factor increases sharply. Therefore, extreme caution should be taken to avoid a long soaking period during the drilling operation in low permeability reservoirs.
Conclusion
(1) Due to the low permeability and capillary hysteresis during imbibition, the invasion depth of drilling fluid is usually in the range of 1.7-2.5 m. Although the invasion depth is small, the formation damage has a significant effect on the recovery factor due to the small seepage area in the near-well region. Compared with the case without considering formation damage, the recovery factor of the case considering formation damage decreased by 3%.
(2) When formation damage is considered, the pressure distribution from the wellbore to the deep reservoir exhibits three stages. Besides the two stages consisting of a constant pressure in the far end of the reservoir and a pressure-drop cone within the pressure interference region, a sharper reduction of pressure in the near-well region can be observed, where the flowing resistance of hydrocarbon is increased by the invasion of drilling fluid. (3) Using an oil-bearing drilling fluid with a higher viscosity and lower specific weight can decrease the formation damage due to the invasion of drilling fluid. The effects of drilling fluid properties on the formation damage are significantly smaller than that of reservoir permeability. Extreme caution should be taken to avoid long soaking periods during the drilling operation in low permeability reservoirs.
Funding We acknowledge the National Science and Technology Major Projects of China (2016ZX05023-005-001-003) and the National Natural Science Foundation Projects of China (51474070).
Conflict of interest
We declare that we do not have any commercial or associative interest that represents a conflict of interest in connection with the work submitted.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 2021-03-25T13:45:30.369Z | 2021-03-24T00:00:00.000 | {
"year": 2021,
"sha1": "b806cde08a797b2e71b4f8e34aaedbdc390b5360",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1007/s13202-021-01137-x",
"oa_status": "GOLD",
"pdf_src": "Springer",
"pdf_hash": "b806cde08a797b2e71b4f8e34aaedbdc390b5360",
"s2fieldsofstudy": [
"Geology",
"Materials Science"
],
"extfieldsofstudy": []
} |
246634590 | pes2o/s2orc | v3-fos-license | On a prior based on the Wasserstein information matrix
We introduce a prior for the parameters of univariate continuous distributions, based on the Wasserstein information matrix, which is invariant under reparameterisations. We discuss the links between the proposed prior and information geometry. We present sufficient conditions for the propriety of the posterior distribution for general classes of models. We present a simulation study that shows that the induced posteriors have good frequentist properties.
Introduction
In Bayesian parametric inference, the choice of the prior plays a fundamental role. In scenarios where the prior information about the model parameters is vague or unreliable, it is desirable to use priors which do not require the user to specify their parameters (hyperparameters). The main aim of Objective Bayes [Berger, 2006, Consonni et al., 2018] is indeed to produce priors via formal rules [Kass and Wasserman, 1996], which typically depend only on the statistical model. Such rules usually aim at producing a prior that has little effect on the inference on the parameters, or that is invariant under reparameterisations, or that penalises the model complexity. Priors obtained with formal rules are usually referred to as Objective priors or Non-informative priors. We refer the reader to Leisen et al. [2020] for a recent review of methods for constructing priors based on formal rules. A pioneering contribution in this area is the Jeffreys prior [Jeffreys, 1946], which is obtained by calculating the square root of the determinant of the Fisher information matrix (FIM) [Robert et al., 2009]. The aim behind the construction of the Jeffreys prior is to produce a prior that is invariant under reparameterisations.
Another direction for constructing a prior based on a formal rule consists of looking at the genesis of the Jeffreys prior [Kass and Wasserman, 1996]. The Jeffreys prior is typically motivated by its invariance under reparameterisations; however, it can also be motivated using concepts from information geometry [Amari, 2016, Nielsen, 2020, Amari, 2021, Amari and Matsuda, 2022]. Briefly, the Kullback-Leibler divergence behaves locally as a function of a distance function determined by the Riemannian metric. The Jeffreys prior can be seen as the natural volume associated to such a metric, and natural volume elements generate uniform measures on manifolds [Kass and Wasserman, 1996]. Moreover, natural volumes of Riemannian metrics are invariant under reparameterisations [Kass and Wasserman, 1996]. Intuitively, this suggests that other distances could be used to construct alternative priors. In this line, a natural alternative consists of using the optimal-transport-induced information matrix [Li and Zhao, 2019], referred to as the Wasserstein information matrix (WIM). The construction of the WIM can be justified using ideas from "transport information geometry", which is the intersection between optimal transport [Villani, 2003] and information geometry [Amari, 2016, 2021]. We refer the reader to Li [2021a,b] and Amari [2021] for a more extensive treatment of this area. The idea behind the construction of the WIM consists of using tools from optimal transport, where a distance between distributions is used to construct an information matrix. Li and Zhao [2019] focused on the particular choice of the Wasserstein-2 distance. The Wasserstein-2 distance can be associated to a metric operator, namely the WIM, which is different in nature from the Fisher information matrix [Amari, 2021]. Although a vast amount of literature has been devoted to the study of the Fisher information matrix and the Jeffreys prior, there is a void in the study of priors associated to the Wasserstein information matrix.
We propose a formal rule for constructing a prior, which is invariant under reparameterisations, based on the Wasserstein information matrix. The construction of this prior (referred to as the Wasserstein prior hereafter) is analogous to that of the Jeffreys prior. However, as shown later, we find that the Wasserstein prior has a different functional form for several models and, appealingly, requires a lower order of differentiability. This helps overcome some challenges with the Jeffreys prior, where the required higher order of differentiability precludes its construction for some non-regular models [Shemyakin, 2014, Li and Zhao, 2019]. Moreover, as we will show later in the simulation study, the Wasserstein prior induces a posterior with good frequentist properties in the models studied here.
Wasserstein information matrix
Let $X$ be a continuous random variable with finite second moment, and $F(x\mid\theta)$ be the corresponding cumulative distribution function (cdf) with support $D\subset\mathbb{R}$ and parameter $\theta\in\Theta\subset\mathbb{R}^d$, with $d\ge 1$. Let us assume that $F(x\mid\theta)$ is absolutely continuous, and let $f(x\mid\theta)$ be the corresponding probability density function (pdf).
Consider the Wasserstein information matrix (WIM) proposed in Li and Zhao [2019],
\[
  W(\theta)_{ij} \;=\; \mathbb{E}\!\left[ \frac{\partial_{\theta_i} F(X\mid\theta)\,\partial_{\theta_j} F(X\mid\theta)}{f(X\mid\theta)^2} \right],
  \qquad i,j = 1,\ldots,d,
  \tag{1}
\]
where the expectation is taken with respect to F (x | θ). A clear difference between the WIM and the FIM is that the former is based on derivatives of the cdf (with respect to the parameters), while the latter is based on derivatives of the pdf. This is an appealing property as it reduces the conditions for the existence of the WIM [Li and Zhao, 2019], allowing its construction for non-regular models. Next, we present a brief description of the motivation behind the construction of the WIM. The details are somewhat technical, but we refer the reader to Li and Zhao [2019] for a detailed derivation of the WIM.
As discussed in Section 1, a distance between probability distributions can be used to define an information matrix. In our case, we focus on the analysis of the information matrix (WIM) implied by the Wasserstein-2 distance. Given two parameter values $\theta_0, \theta_1 \in \Theta$, the Wasserstein-2 distance between two probability distributions with support on $D\subset\mathbb{R}$, $F(\cdot\mid\theta_0)$ and $F(\cdot\mid\theta_1)$, satisfies the following relationship with the corresponding quantile functions [Villani, 2003]:
\[
  \mathrm{Dist}_W\big(F(\cdot\mid\theta_0), F(\cdot\mid\theta_1)\big)^2 = \int_0^1 \big( F^{-1}(u\mid\theta_0) - F^{-1}(u\mid\theta_1) \big)^2\, du,
\]
where $F^{-1}$ is the quantile function associated to the cdf $F$. It can be shown that the Wasserstein-2 distance $\mathrm{Dist}_W$ defines a Riemannian metric among probability distributions [Villani, 2003], which can be used to establish the connection between such a metric and an information matrix. More specifically, the infinitesimal expansion of the squared Wasserstein-2 distance establishes a link between this metric and the Wasserstein information matrix. That is, let $\Delta\theta = (\Delta\theta_1,\ldots,\Delta\theta_d)^\top\in\mathbb{R}^d$ be such that $\theta_0 + \Delta\theta\in\Theta$; one can show that [Li and Zhao, 2019]
\[
  \mathrm{Dist}_W\big(F(\cdot\mid\theta_0), F(\cdot\mid\theta_0+\Delta\theta)\big)^2 = \Delta\theta^\top W(\theta_0)\,\Delta\theta + o\big(\lVert\Delta\theta\rVert^2\big).
\]
This shows a link between the Wasserstein-2 distance and the WIM, which is discussed in detail in Section 7.7 of Amari [2021]. We can also see from this result that the WIM shares a similar derivation to that of the Fisher information matrix (see Chapter 5 of Ghosh et al. [2006] for an extensive discussion). We remark that one can also define the WIM in higher-dimensional sample spaces (that is, for random vectors); however, this requires solving an elliptic partial differential equation [Li and Zhao, 2019].
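The quantile-function identity above is easy to verify numerically; the sketch below evaluates the integral for two normal distributions and compares it with the known closed form for normals, $(\mu_0-\mu_1)^2 + (\sigma_0-\sigma_1)^2$ (a standard result, used here purely as a check):

```python
from scipy.integrate import quad
from scipy.stats import norm

mu0, s0 = 0.0, 1.0
mu1, s1 = 1.0, 2.0

# squared difference of the quantile functions on (0, 1)
integrand = lambda u: (norm.ppf(u, mu0, s0) - norm.ppf(u, mu1, s1))**2

eps = 1e-12  # trim the endpoints, where the quantile functions diverge (integrably)
w2_squared, _ = quad(integrand, eps, 1 - eps)

print(w2_squared, (mu0 - mu1)**2 + (s0 - s1)**2)  # both close to 2.0
```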
The Wasserstein prior
In this section, we propose the Wasserstein prior, whose main motivation is to obtain an invariant prior. We also describe the precise meaning of the invariance property and its connection with the Jeffreys prior.
One parameter case
Consider the case where $d = 1$; that is, we focus on the case where $F(x\mid\theta)$ contains only one parameter. Then, the WIM (1) becomes
\[
  W(\theta) = \mathbb{E}\!\left[ \left( \frac{\partial_\theta F(X\mid\theta)}{f(X\mid\theta)} \right)^{\!2}\, \right].
\]
Let $\phi = h(\theta)$ be a reparameterisation of $F(x\mid\theta)$, with $h$ differentiable and one-to-one. Let us denote the WIM associated to $F(x\mid\theta)$ by $W(\theta)$, and the WIM associated to $F(x\mid\phi)$ by $\widetilde W(\phi)$. From the above expression, we can see that
\[
  W(\theta) = \widetilde W\big(h(\theta)\big)\, h'(\theta)^2.
\]
Indeed, the FIM also satisfies this relationship [Robert et al., 2009]. This suggests the construction of an invariant prior, based on the WIM, in a similar fashion as the Jeffreys prior (which is based on the FIM). Define the prior (up to a positive proportionality constant)
\[
  \pi_W(\theta) \propto W(\theta)^{1/2}.
\]
It follows that this prior is invariant under reparameterisations in the sense that
\[
  \pi_W(\theta) = \pi_{\widetilde W}\big(h(\theta)\big)\, \lvert h'(\theta) \rvert,
\]
where $\pi_{\widetilde W}(\phi) \propto \widetilde W(\phi)^{1/2}$. That is, the priors $\pi_W(\theta)$ and $\pi_{\widetilde W}(\phi)$ are related by the corresponding change of variable.
Therefore, this represents a strategy for constructing a prior based on a formal rule [Kass and Wasserman, 1996] which is invariant under reparameterisations, in the same spirit as the invariance property of the Jeffreys prior [Jeffreys, 1946]. We formalise this idea next.
Multi-parameter case
Consider now the general case $d\ge 1$ and let $\phi = h(\theta)$ be a reparameterisation of $F(x\mid\theta)$. Note first that the WIM of $\phi$, $\widetilde W(\phi)$, can be written after a change of variable as
\[
  \widetilde W(\phi) = J^\top\, W\big(h^{-1}(\phi)\big)\, J,
\]
where $J$ is the Jacobian matrix with entries
\[
  J_{ij} = \frac{\partial\theta_i}{\partial\phi_j}, \qquad i,j = 1,\ldots,d.
\]
The proof of this result is analogous to the proof of the invariance property of the FIM, which can be found in Lehmann and Casella [2006]. Consequently, we have that
\[
  \big\lvert \det \widetilde W(\phi) \big\rvert^{1/2} = \big\lvert \det W\big(h^{-1}(\phi)\big) \big\rvert^{1/2}\, \lvert \det J \rvert.
\]
This result suggests the construction of an invariant prior, based on the WIM, in a similar fashion as the Jeffreys prior is obtained from the FIM. The construction of this prior is formalised in the following definition.

Definition 1. The Wasserstein prior is defined, up to a positive proportionality constant, as
\[
  \pi_W(\theta) \propto \big\lvert \det W(\theta) \big\rvert^{1/2},
\]
where $W(\theta)$ denotes the Wasserstein information matrix (1).
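As a worked illustration of this invariance in one dimension (using the exponential distribution treated in the Appendix, for which $W(\theta) = 2$ and $\pi_W(\theta)\propto 1$), consider the rate parameterisation $\phi = 1/\theta$:
\[
  \widetilde W(\phi) = W(1/\phi)\left(\frac{d\theta}{d\phi}\right)^{\!2} = \frac{2}{\phi^4},
  \qquad
  \pi_{\widetilde W}(\phi) \propto \widetilde W(\phi)^{1/2} \propto \frac{1}{\phi^2},
\]
which is exactly the density obtained by transforming $\pi_W(\theta)\propto 1$ through $\theta = 1/\phi$, since $\lvert d\theta/d\phi\rvert = \phi^{-2}$.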
Examples
In this section, we present three examples where we illustrate the calculation of the WIM and the Wasserstein prior. In all cases, we provide sufficient conditions for the propriety of the posterior distribution.
The location-scale family
Let $f_0$ be a symmetric and unimodal pdf with mode at 0 and support on $\mathbb{R}$, and $F_0$ be the corresponding cdf. Let
\[
  F(x\mid\mu,\sigma) = F_0\!\left(\frac{x-\mu}{\sigma}\right),
  \qquad
  f(x\mid\mu,\sigma) = \frac{1}{\sigma}\, f_0\!\left(\frac{x-\mu}{\sigma}\right),
  \tag{3}
\]
denote the cdf and pdf of the class of symmetric and unimodal location-scale families of distributions, with location parameter $\mu\in\mathbb{R}$ and scale parameter $\sigma\in\mathbb{R}_+$.
Theorem 1. The WIM and the Wasserstein prior of $(\mu,\sigma)$ in the location-scale family (3) are
\[
  W(\mu,\sigma) = \begin{pmatrix} 1 & 0 \\ 0 & \mathbb{E}_0[Z^2] \end{pmatrix},
  \qquad
  \pi_W(\mu,\sigma) \propto 1,
  \tag{4}
\]
where $\mathbb{E}_0[Z^2] = \int_{\mathbb{R}} z^2 f_0(z)\,dz$ is the second moment of $f_0$.

An important class of location-scale models is the family of scale mixtures of normal distributions. A pdf $f_0$ is said to belong to the family of scale mixtures of normal distributions if it can be represented as
\[
  f_0(x) = \int_0^{\infty} \tau^{1/2}\,\phi\big(\tau^{1/2} x\big)\, dH(\tau), \tag{5}
\]
where $H$ is a cumulative distribution function with support on $\mathbb{R}_+$ and $\phi$ is the standard normal pdf. The family of scale mixtures of normal distributions contains important distributions such as the Normal, Logistic, Laplace, and Student-t, among other distributions (see Rubio and Steel, 2014, for a discussion). The next result provides sufficient conditions for the propriety of the posterior distribution of $(\mu,\sigma)$ under the Wasserstein prior (4) for the case when $f_0$ belongs to the family of scale mixtures of normal distributions.

Theorem 2. Let $x = (x_1,\ldots,x_n)$ be an i.i.d. sample from the location-scale model (3), where $f_0$ is a scale mixture of normal distributions (5). Then, under the Wasserstein prior (4), the posterior distribution of $(\mu,\sigma)$ is proper if $n \ge 3$.
The skew-normal distribution
We now present a result in a one-parameter model, where we obtain the Wasserstein prior for the skewness parameter of the skew-normal distribution [Azzalini, 1985]. Let $\phi(x)$ and $\Phi(x)$ be the pdf and cdf of the standard normal distribution. The skew-normal pdf is defined as [Azzalini, 1985]
\[
  s(x\mid\alpha) = 2\,\phi(x)\,\Phi(\alpha x), \qquad x\in\mathbb{R}, \tag{6}
\]
where $\alpha\in\mathbb{R}$ is a skewness parameter. The following result characterises the WIM and Wasserstein prior of $\alpha$.

Theorem 3. Consider the skew-normal distribution (6). Then,

(i) The WIM of $\alpha$ is given by
\[
  W(\alpha) = \frac{\sqrt{2\pi}}{\pi^2 (1+\alpha^2)^2} \int_{-\infty}^{\infty} \frac{\exp\{-x^2(1/2+\alpha^2)\}}{1 + \mathrm{erf}(\alpha x/\sqrt{2})}\, dx.
\]

(ii) The Wasserstein prior
\[
  \pi_W(\alpha) \propto W(\alpha)^{1/2} \tag{7}
\]
is symmetric about 0.

(iii) $\pi_W(\alpha)$ is a proper prior.
(iv) The tails of $\pi_W(\alpha)$ are of order $O(|\alpha|^{-5/2})$.

The tail behaviour of the Wasserstein prior $\pi_W(\alpha)$ differs from that of the Jeffreys prior of $\alpha$ [Rubio and Liseo, 2014], which has tails of order $O(|\alpha|^{-3/2})$, and the total variation prior proposed in Dette et al. [2018], which has tails of order $O(|\alpha|^{-2})$. The characterisation of the propriety and tail behaviour of $\pi_W(\alpha)$ in the previous theorem suggests that one could approximate it using a symmetric distribution with the same tail behaviour. A natural candidate is the Student-t distribution with $\nu = 3/2$ degrees of freedom. We found that a scale parameter $\sigma_t = 0.757$ produces a good approximation in the main body of the distribution, while the tails have the exact same weight (see Figure 1).
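A numerical sanity check of this approximation is straightforward: the sketch below evaluates $W(\alpha)$ by quadrature, using the derivative $\partial F(x\mid\alpha)/\partial\alpha$ reconstructed in the Appendix proof, normalises $\pi_W(\alpha)\propto W(\alpha)^{1/2}$ on a finite grid, and compares it with the Student-t approximation; the grid limits and sizes are arbitrary choices for the illustration:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm, t

def wim(alpha):
    """W(alpha) = E[(dF/dalpha / f)^2] for the skew-normal, computed in log-space."""
    def integrand(x):
        log_num = -x**2 * (1 + alpha**2) - 2 * np.log(np.pi * (1 + alpha**2))  # log (dF/da)^2
        log_den = np.log(2) + norm.logpdf(x) + norm.logcdf(alpha * x)          # log f(x|alpha)
        return np.exp(log_num - log_den)
    val, _ = quad(integrand, -np.inf, np.inf)
    return val

print(wim(0.0), 2 / np.pi)  # quadrature check: W(0) = 2/pi

grid = np.linspace(-15.0, 15.0, 601)
unnorm = np.sqrt([wim(a) for a in grid])
prior = unnorm / (unnorm.sum() * (grid[1] - grid[0]))  # normalised pi_W on the grid

approx = t.pdf(grid, df=1.5, scale=0.757)  # Student-t approximation from the text
print(np.max(np.abs(prior - approx)))      # small discrepancy in the main body
```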
In the next theorem, we construct a prior for the skew-normal distribution with location and scale parameters $(\mu,\sigma)$ and skewness parameter $\alpha$, based on a product prior structure using the priors (4) and (7). We show that the posterior distribution is proper under mild conditions. This prior can be interpreted as an Independence Wasserstein prior (analogous to the independence Jeffreys prior; Rubio and Steel, 2014, Rubio and Liseo, 2014), in the sense that it is constructed as the product of the Wasserstein priors for each parameter (or group of parameters) while considering the other parameters as fixed.

Theorem 4. Let $x = (x_1,\ldots,x_n)$ be an i.i.d. sample from the skew-normal distribution with pdf
\[
  s(x\mid\mu,\sigma,\alpha) = \frac{2}{\sigma}\,\phi\!\left(\frac{x-\mu}{\sigma}\right)\Phi\!\left(\alpha\,\frac{x-\mu}{\sigma}\right).
\]
Consider the improper product prior structure using the Wasserstein priors (4) and (7),
\[
  \pi(\mu,\sigma,\alpha) \propto \pi_W(\mu,\sigma)\,\pi_W(\alpha) \propto \pi_W(\alpha). \tag{8}
\]
Then, the posterior distribution of $(\mu,\sigma,\alpha)$ is proper if $n > 2$.
Normal linear regression
We now study the WIM and the Wasserstein prior for the normal linear regression model
\[
  y_i = x_i^\top \beta + \epsilon_i, \qquad i = 1,\ldots,n, \tag{9}
\]
where $x_i\in\mathbb{R}^p$ is a vector of covariates, $\beta\in\mathbb{R}^p$ is a vector of regression coefficients, and $\epsilon_i \overset{\text{i.i.d.}}{\sim} N(0,\sigma^2)$ denote the errors. Let $X = (x_1,\ldots,x_n)^\top$ denote the design matrix and $y = (y_1,\ldots,y_n)^\top$ the vector of response variables.

Theorem 5. Consider the linear regression model (9), and suppose that $X$ has full column rank. Then, the WIM and the Wasserstein prior are given by
\[
  W(\beta,\sigma) = \begin{pmatrix} X^\top X & 0 \\ 0^\top & n \end{pmatrix},
  \qquad
  \pi_W(\beta,\sigma) \propto 1.
  \tag{10}
\]
The next result presents sufficient conditions for the propriety of the posterior distribution of $(\beta,\sigma)$ under the Wasserstein prior (10).
Theorem 6. Consider the normal linear regression model (9) together with the Wasserstein prior (10). Suppose that $X$ has full column rank and that $y$ is not in the column space of $X$. Then, the posterior distribution of $(\beta,\sigma)$ is proper if $n > p + 1$.
Simulation Studies
In this section we present two simulation studies to assess the performance of the posterior distributions induced by the Wasserstein prior.
In the first simulation scenario, we evaluate the performance of the independence Wasserstein prior (8) and compare it against the independence Jeffreys prior [Rubio and Liseo, 2014]. We simulate N = 250 samples of size n = 50, 250, 500 from the skew-normal distribution in Theorem 4 with µ = 10, σ = 1, and λ = 1, 3, 5. We emphasise that the value λ = 1 represents a very challenging scenario, as the skew-normal distribution is weakly identifiable for |λ| < 1.25, in the sense that the skew-normal pdf is virtually symmetric for values of λ in this region [Rubio and Genton, 2016]. In the second simulation scenario, we evaluate the performance of the Wasserstein prior (10) in linear regression models. We simulate N = 250 samples of size n = 50, 250, 500 from the linear regression model (9), with β = (1, 0, 0.5, 1) and σ = 0.5, where the first entry of β represents the intercept. The entries of the design matrix are simulated from a multivariate normal distribution with zero mean, unit variance, and pairwise correlations of 0.5. The values of β are chosen to reflect different levels of signal-to-noise ratio and the effect of a spurious variable. For each of these samples, we simulate a posterior sample of size 1000 using the R package 'Rtwalk', with a burn-in period of 5000 iterations and a thinning period of 25 iterations (that is, a total of 30,000 MCMC iterations were run for each sample). In all scenarios, we also compare the results against those associated to the maximum likelihood estimators (MLE). We choose the following performance measures to evaluate the different estimation methods and priors: 'mMean' denoting the average of the posterior means across the N simulated samples; 'mSD' denoting the average of the posterior standard deviations; 'mRMSE' denoting the average of the root mean squared errors; 'Coverage' denoting the coverage proportion of the 95% credible intervals; 'mMLE' denoting the average of the maximum likelihood estimators; and 'RMSE-MLE' denoting the root mean squared error of the maximum likelihood estimators across the N samples.
Tables 1-3 in the Appendix show the results associated to the first simulation scenario. From Table 1 in the Appendix, we observe that (in the case λ = 1) the estimation of the parameter λ is indeed quite challenging for all sample sizes as the true model is very close to symmetry. Both priors (independence Jeffreys and independence Wasserstein) induce a marked shrinkage of the parameter λ towards zero as the likelihood is relatively flat. This shrinkage naturally induces a bias in the Bayesian point estimators (posterior mean) for both priors. Although, for n = 50, the coverage produced by the Jeffreys prior is slightly better than that produced by the Wasserstein prior, the average RMSE and standard deviations of the Bayes estimators associated to the Jeffreys prior are much larger. This is likely a consequence of the very heavy tails of the Jeffreys prior which, together with the flatness of the likelihood, produce a heavy tailed posterior. Indeed, the MLE also exhibits a very large RMSE for n = 50. The stronger regularisation induced by the Wasserstein prior also produces a faster concentration of the predictive posterior densities around the true model. The fit of the posterior predictive pdfs is particularly better than that obtained with the fitted pdfs using the MLEs (Figure 2). The cases λ = 3, 5 (Tables 2-3 and Figures 2-4 in the Appendix) show that the estimation of the parameter λ is much better behaved when the true value of λ is away from λ = 0, and the density function is clearly asymmetric. The performance of the independence Jeffreys and the independence Wasserstein in terms of all measures is quite similar. Since the true value of the parameter lies in the tails of the prior, the shrinkage effect of the priors is minimal. In those cases, the MLE also exhibits a much larger RMSE for n = 50. Table 4 shows the results associated to the second simulation scenario. We notice that the performance of the Wasserstein prior is good for all sample sizes in terms of the chosen measures. Indeed, given that the prior is flat, the performance of the MLE coincides with that of the maximum a posteriori (MAP).
Discussion
We have introduced the Wasserstein prior, a prior based on the Wasserstein information matrix, which is invariant under reparameterisations. We have briefly discussed the link of the construction of this prior with concepts from information geometry. We have also introduced the independence Wasserstein prior, which aims at reducing the functional dependence between the parameters in a similar fashion as the independence Jeffreys prior (and more generally, the reference prior [Yang and Berger, 1997]). The simulation study (results presented in the Appendix) shows that the Wasserstein prior induces a posterior with good frequentist properties (at least for the models studied here), compared to the posteriors induced by the Jeffreys prior and the fitted models using maximum likelihood estimation.
Additional numerical examples related to the models presented here can be found at https://github.com/FJRubio67/PIW.
As discussed in the introduction, objective priors are based on formal rules with specific aims. The Wasserstein prior is based on a formal rule aiming at obtaining a prior that is invariant under reparameterisations. Consequently, the construction of such a prior does not necessarily penalise model complexity, and it may thus produce suboptimal results in sparse scenarios, such as linear regression models with many spurious variables.
Natural extensions of our work include the calculation of the Wasserstein prior for other univariate continuous distributions (with bounded support, with positive support, or supported on the entire real line). In this paper, we have taken a conservative position, as we do not claim superiority of the Wasserstein prior over the Jeffreys prior in terms of a specific optimality criterion, even though the simulation study illustrates a competitive performance. Our work represents a step forward in the analysis of invariant priors obtained by a formal rule, and shows that it is possible to go beyond those induced by the Kullback-Leibler divergence. We believe it would be interesting to provide a theoretical treatment of the inferential properties of the Wasserstein prior, beyond the propriety of the posterior shown here. This includes the study of the asymptotic normality of the posterior distribution; establishing more formal links of the Wasserstein prior with information geometry [Kass and Wasserman, 1996, Kass, 1989, Nielsen, 2020]; and the effect of the parameterisation on the orthogonality of parameters [Cox and Reid, 1987] based on the Wasserstein information matrix.
The Exponential Distribution
Consider the exponential distribution with scale parameter θ > 0. The corresponding cdf and pdf are given by
$$F(x \mid \theta) = 1 - e^{-x/\theta}, \qquad f(x \mid \theta) = \frac{1}{\theta}\, e^{-x/\theta}, \qquad x > 0. \quad (11)$$
In this case, the WIM and the Wasserstein prior are
$$W(\theta) = \int_0^\infty \frac{\left[\partial_\theta F(x \mid \theta)\right]^2}{f(x \mid \theta)}\, dx = \frac{1}{\theta^3}\int_0^\infty x^2 e^{-x/\theta}\, dx = 2, \qquad \pi_W(\theta) \propto 1. \quad (12)$$
Let x = (x_1, ..., x_n) be an i.i.d. sample from (11). Then, the posterior distribution of θ associated with the Wasserstein prior (12) is proper if the sample size n > 1. We omit the proof of this result, for the sake of space, as it is straightforward.
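Assuming formula (1) gives the univariate WIM as W(θ) = ∫ (∂_θF)²/f dx (our reading; the display is not reproduced in this excerpt), the constancy of the exponential-scale WIM can be checked numerically:

```r
## Numerical check that the exponential (scale theta) WIM equals the
## constant 2, under our reading of formula (1):
## W(theta) = integral over (0, Inf) of (dF/dtheta)^2 / f.
wim_exp <- function(theta) {
  integrand <- function(x) {
    dF <- -(x / theta^2) * exp(-x / theta)  # d/dtheta of 1 - exp(-x/theta)
    f  <- exp(-x / theta) / theta           # pdf (11)
    dF^2 / f
  }
  integrate(integrand, 0, Inf)$value
}
sapply(c(0.5, 1, 2, 10), wim_exp)  # each ~2, hence pi_W(theta) is flat
```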
Proof of Theorem 1
Let X denote a random variable with cdf F and pdf f, and consider the location-scale family with cdf F((x − µ)/σ) and pdf (1/σ) f((x − µ)/σ). First, note that, writing z = (x − µ)/σ, the first partial derivatives of the cdf are given by
$$\frac{\partial}{\partial \mu} F\!\left(\frac{x-\mu}{\sigma}\right) = -\frac{1}{\sigma}\, f(z), \qquad \frac{\partial}{\partial \sigma} F\!\left(\frac{x-\mu}{\sigma}\right) = -\frac{z}{\sigma}\, f(z).$$
Then, the entries of the WIM are given by
$$W_{\mu\mu} = \int \frac{f(z)}{\sigma}\, dx = 1, \qquad W_{\mu\sigma} = \int z\, \frac{f(z)}{\sigma}\, dx = E(X), \qquad W_{\sigma\sigma} = \int z^2\, \frac{f(z)}{\sigma}\, dx = E(X^2).$$
Consequently, the WIM and the Wasserstein prior for the location-scale family of (µ, σ) are given by
$$W(\mu,\sigma) = \begin{pmatrix} 1 & E(X) \\ E(X) & E(X^2) \end{pmatrix}, \qquad \pi_W(\mu,\sigma) \propto \sqrt{\det W(\mu,\sigma)} = \sqrt{\mathrm{Var}(X)} \propto 1.$$
Proof of Theorem 2

Let x = (x_1, ..., x_n) be an i.i.d. sample from the normal distribution with parameters (µ, σ), and consider the classical decomposition $\sum_{i=1}^n (x_i-\mu)^2 = \sum_{i=1}^n (x_i-\bar{x})^2 + n(\bar{x}-\mu)^2$. Using this decomposition and integrating out µ as a normal distribution, we obtain
$$m(\mathbf{x}) \propto \int_0^\infty \sigma^{-(n-1)} \exp\left\{-\frac{\sum_{i=1}^n (x_i-\bar{x})^2}{2\sigma^2}\right\} d\sigma.$$
Now, for n ≥ 3, integrating this expression with respect to σ² as a Gamma distribution, we obtain
$$m(\mathbf{x}) \propto \Gamma\!\left(\frac{n}{2}-1\right)\left[\frac{\sum_{i=1}^n (x_i-\bar{x})^2}{2}\right]^{-\left(\frac{n}{2}-1\right)} < \infty,$$
which proves that the posterior is proper.
Proof of Theorem 3
(i) The skew-normal cdf can be written as [Azzalini, 1985]
$$F(x; \alpha) = \Phi(x) - 2\,T(x, \alpha),$$
where T denotes Owen's T function, $T(x,\alpha) = \frac{1}{2\pi}\int_0^{\alpha} \frac{\exp\{-x^2(1+t^2)/2\}}{1+t^2}\, dt$.
Using the Fundamental Theorem of Calculus,
$$\frac{\partial}{\partial \alpha} F(x; \alpha) = -\frac{1}{\pi}\,\frac{\exp\{-x^2(1+\alpha^2)/2\}}{1+\alpha^2}.$$
Replacing this expression in formula (1), together with the relationship $\Phi(x) = \frac{1}{2}\left[1 + \operatorname{erf}\!\left(\frac{x}{\sqrt{2}}\right)\right]$, we obtain
$$W(\alpha) = \frac{\sqrt{2\pi}}{\pi^2 (1+\alpha^2)^2} \int_{-\infty}^{\infty} \frac{\exp\{-x^2(1+2\alpha^2)/2\}}{1 + \operatorname{erf}\!\left(\frac{\alpha x}{\sqrt{2}}\right)}\, dx.$$
(ii) The symmetry of π_W is a consequence of the integrand of W being a function of α² and of the erf function applied to α, together with the property erf(z) = −erf(−z).
(iii) Note that for α = 0 the integral is finite. Now, for M > 0 and α ∈ [−M, M], and using the symmetry of W, there exists K_1 > 0 such that the integrand of W is bounded by a multiple of r(αx)φ(2αx), where r(x) = φ(x)/Φ(x) denotes the inverse Mills ratio. It is well known that r(x) is a decreasing function and that r(x) ∼ |x| as x → −∞ and r(x) ∼ |1/x| as x → ∞. Then, for α ∈ [−M, M] and x > 0, there exists K_2 > 0 such that r(αx)φ(2αx) ≤ K_2, and the integral defining W(α) is finite on this region. Now, for α > M, and since r(·) and φ(·) are decreasing functions, there exists K_3 > 0 such that r(αx)φ(2αx) ≤ K_3, and the integral is again finite. Consequently, W(α) < ∞ for every α ≥ 0. Finally, for α < −M, appealing to the symmetry of π_W, we also obtain that W(α) < ∞.

(iv) Using the expression for W(α) for α > 0 and noting that φ(x) is upper bounded, an upper bound for W(α) follows. Consider now the change of variable t = αx. Then, we obtain the upper bound (13), of order O(α^{−5}). Now consider the change of variable t = αx applied to the expression bounding W(α) from below. Note that for α ≥ A > 0, φ(t/α) ≥ φ(t/A) for all t > 0. Then, a lower bound (14) of the same order follows. Combining (13)–(14), we obtain that W(α) has tails of order O(|α|^{−5}). This implies that π_W(α) has tails of order O(|α|^{−5/2}).
Proof of Theorem 4
The marginal likelihood can be upper bounded, using Φ(·) ≤ 1, by
$$m(\mathbf{x}) \le 2^n \int \prod_{i=1}^n \frac{1}{\sigma}\,\phi\!\left(\frac{x_i-\mu}{\sigma}\right)\, \pi_W(\mu,\sigma,\alpha)\, d\mu\, d\sigma\, d\alpha.$$
The last expression is proportional to the marginal likelihood associated with a normal sampling model together with the Wasserstein prior. By Theorem 2, we have that this marginal likelihood is finite.
Proof of Theorem 5
First, note that the cdf and pdf associated with the ith observation of the normal linear regression model are given by
$$F_i(y_i) = \Phi\!\left(\frac{y_i - \mathbf{x}_i^{\top}\beta}{\sigma}\right), \qquad f_i(y_i) = \frac{1}{\sigma}\,\phi\!\left(\frac{y_i - \mathbf{x}_i^{\top}\beta}{\sigma}\right),$$
where Φ and φ are the standard normal cdf and pdf, respectively. The derivatives with respect to the parameters are
$$\frac{\partial F_i}{\partial \beta} = -\frac{\mathbf{x}_i}{\sigma}\,\phi\!\left(\frac{y_i - \mathbf{x}_i^{\top}\beta}{\sigma}\right), \qquad \frac{\partial F_i}{\partial \sigma} = -\frac{y_i - \mathbf{x}_i^{\top}\beta}{\sigma^2}\,\phi\!\left(\frac{y_i - \mathbf{x}_i^{\top}\beta}{\sigma}\right).$$
Replacing these expressions in formula (1), we obtain the WIM for the ith observation,
$$W_i(\beta, \sigma) = \begin{pmatrix} \mathbf{x}_i \mathbf{x}_i^{\top} & \mathbf{0} \\ \mathbf{0}^{\top} & 1 \end{pmatrix}.$$
Consequently, using Proposition 5 in Li and Zhao [2019] and the assumption of independence of the errors, the WIM for the entire sample is given by
$$W(\beta, \sigma) = \begin{pmatrix} X^{\top}X & \mathbf{0} \\ \mathbf{0}^{\top} & n \end{pmatrix}.$$
Taking the square root of the determinant of the WIM, we obtain the Wasserstein prior π_W(β, σ) ∝ 1.
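A small numerical check of the per-observation WIM entries, again under our reading of formula (1); the covariate vector and parameter values below are arbitrary illustrations.

```r
## Check the per-observation WIM for normal linear regression: the
## expected entries are x_i x_i' for the beta block, 0 for the
## beta-sigma block, and 1 for the sigma entry. Values are arbitrary.
xi <- c(1, 0.3); beta <- c(2, -1); sigma <- 0.7
mu <- sum(xi * beta)

deriv <- function(y, which) {
  z <- (y - mu) / sigma
  switch(which,
         b1 = -(xi[1] / sigma) * dnorm(z),   # dF/dbeta_1
         b2 = -(xi[2] / sigma) * dnorm(z),   # dF/dbeta_2
         s  = -(z / sigma) * dnorm(z))       # dF/dsigma
}
wim_entry <- function(a, b) integrate(function(y) {
  deriv(y, a) * deriv(y, b) / (dnorm((y - mu) / sigma) / sigma)
}, -Inf, Inf)$value

c(wim_entry("b1", "b1"),  # ~ xi[1]^2      = 1
  wim_entry("b1", "b2"),  # ~ xi[1]*xi[2]  = 0.3
  wim_entry("b2", "s"),   # ~ 0
  wim_entry("s", "s"))    # ~ 1
```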
Proof of Theorem 6
The posterior distribution is proper if the marginal likelihood (normalising constant) is finite. That is, we need to prove that
$$m(\mathbf{y}) = \int_{\mathbb{R}^p}\int_0^\infty \frac{1}{(2\pi)^{n/2}\sigma^{n}} \exp\left\{-\frac{\|\mathbf{y} - X\beta\|^2}{2\sigma^2}\right\} d\sigma\, d\beta < \infty.$$
Consider the classical decomposition
$$\|\mathbf{y} - X\beta\|^2 = \|\mathbf{y} - X\hat\beta\|^2 + (\beta - \hat\beta)^{\top} X^{\top}X (\beta - \hat\beta),$$
where $\hat\beta = (X^{\top}X)^{-1}X^{\top}\mathbf{y}$. Replacing this expression in the marginal likelihood and integrating β out as a p-variate normal distribution and σ² as a Gamma distribution, and using that X has full column rank and that y is not in the column space of X, we obtain, for n > p + 1,
$$m(\mathbf{y}) = K\, \|\mathbf{y} - X\hat\beta\|^{-(n-p-1)} < \infty,$$
for a positive constant K > 0.
Simulation Results
Throughout, we denote by 'mMean' the average of the posterior means across the N simulated samples. Similarly, 'mSD' represents the average of the posterior standard deviations; 'mRMSE' denotes the average of the root mean squared errors; 'Coverage' is the coverage proportion of the 95% credible intervals; 'mMLE' is the average of the maximum likelihood estimators; and 'RMSE-MLE' is the root mean squared error of the maximum likelihood estimators across the N samples. Table 4 reports these measures for the regression parameters β_0 (true value 1), β_1 (0), β_2 (0.5), β_3 (1), and σ (0.5). | 2022-02-08T04:00:24.653Z | 2022-02-07T00:00:00.000 | {
"year": 2022,
"sha1": "b33d9e65104eae100aa2745b4bbc8cd2928f3062",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "8690d020f5ddd7f65f38e306def91f5f567e3c91",
"s2fieldsofstudy": [
"Mathematics",
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
91547656 | pes2o/s2orc | v3-fos-license | Rates of molecular evolution and genetic diversity in European vs. North American populations of invasive insect species
Many factors contribute to the 'invasive potential' of species or populations. It has been suggested that the rate of genetic evolution of a species and the amount of genetic diversity upon which selection can act may play a role in invasiveness. In this study, we examine whether invasive species have a higher relative pace of molecular evolution as compared with closely related non-invasive species, as well as examine the genetic diversity between invasive and closely related species. To do this, we used mitochondrial cytochrome c oxidase subunit I sequences of 35 species with a European native range that are invasive in North America. Unique to molecular rate studies, we permuted across sequences when comparing each invasive species with its sister clade species, incorporating a range of recorded genetic variation within species using 405,765 total combinations of invasive, sister, and outgroup sequences. We observed no significant trend in relative molecular rates between invasive and non-invasive sister clade species, nor in intraspecific genetic diversity, suggesting that differences in invasive status between closely related lineages are not strongly determined by the relative overall pace of genetic evolution or molecular genetic diversity. We support previous observations of more often higher genetic diversity in native than invaded ranges using available data for this genetic region. * Corresponding authors; e-mails: ryoung04@uoguelph.ca, tmitterb@uoguelph.ca. These authors contributed equally.
INTRODUCTION
Non-native species are of large concern to natural ecosystems and can have drastic negative impacts on the native flora and fauna if they become established. An introduced species is considered invasive when it can survive introduction, establish a population, and spread or have the potential to spread further in the introduced range (Richardson et al., 2000; Blackburn et al., 2011). Insects introduced into North America are responsible for major ecological damage and result in an economic impact estimated at 32.56 billion dollars per year (Bradshaw et al., 2016). If we can better understand the factors contributing to a successful species invasion, we can better manage high-risk invasion corridors and mitigate negative effects on native populations. Some currently identified factors influencing a successful invasion include available space, available food, and the absence of predatory and parasitic organisms (Sorte et al., 2010). While it can be argued that anthropogenic factors, such as the movement of organisms across vast distances by human-mediated transportation routes, have been largely responsible for the global distribution [...] promote invasion and affect molecular rates (correlation hypothesis; Fig. 1B). Biological traits that could be considered 'invasive traits' are those that favour establishment, including high fecundity and population growth rates (Lee & Bell, 1999) and fast generation time (Sakai et al., 2001). High fecundity and population growth rates are expected to lead to more mutations over time, increasing the pace of molecular evolution at DNA sites with low selective pressures. Faster generation time is expected to lead to both higher mutation rate and faster pace of evolution due to more bouts of fixation per unit time (Bromham, 2009). Thus, the possession of these traits may correspond with a generally faster rate of molecular evolution as well as promote invasion.

Secondly, molecular rates and invasion success could correspond due to a higher mutation rate promoting invasion potential (causation hypothesis, Fig. 1B). Biological traits that are thought to be more common in successful invasive species (e.g. high fecundity, Lee & Bell, 1999) may also result in higher mutation rates (Bromham, 2011). This higher mutation rate would then provide new variation upon which selection could act, facilitating evolution within a shorter timeframe.

Thirdly, biological invasion may lead to increased substitution rates via positive selection or relaxed selective constraints following invasion (consequence hypothesis, Fig. 1C). Invasive species establishment in non-native environments (Prentis et al., 2008) can increase positive selection pressures in specific genes (Roman & Darling, 2007) and could therefore increase fixation in those genes. However, we do not expect that the molecular rates would be observably changed as a consequence of invasion in this study, given the very recent occurrences (within the last approximately 180 to 20 years, Supplement S1-13) of these insect invasions and the use of COI, which has no clear link to adaptive changes in invasion success. Population genetics trends related to population molecular diversity are more likely to display trends as a consequence of invasion, such as due to population bottlenecks occurring upon colonization of a new environment by few invading individuals.

Fig. 1. Hypotheses on the link between invasive potential and molecular evolutionary rates or population genetic diversity in closely related lineages. An example geographic range for the invasive species is represented in circles/ovals, while an example geographic range of the sister species is represented by squares. (A) Higher molecular rates may be observed for invasive vs. sister species. If invasive species have consistently higher rates across examples of invasive vs. sister clade species, then the explanation for such an observation could be that (B) higher molecular rates lead to invasive potential, or a common factor increases both rates and the propensity for invasion. Alternatively (C), a successful invasion could lead to higher rates of molecular evolution as compared to related species that have not undergone an invasion. We similarly investigate population genetic diversity; however, population genetic diversity may be lower in the invaded range as a consequence of invasion (C). Note that the sister clade species is not necessarily from Europe but is not known to be invasive.
Published research in invasion genetics, which has tended to focus primarily upon plants, has highlighted the ability of introduced invasive species to respond to selection pressures as an important factor in invasion success (Lee, 2002; Sherman et al., 2016). Additive genetic variation (Lee, 2002), i.e. when phenotypic variation of a given trait is related to the additive, independent effects of multiple genes, is thought to increase invasion success when the characteristics (e.g. phenotype) involved are relevant to the invasion. Often, a high genetic diversity within populations is invoked as facilitating invasive success (Rius & Darling, 2014). The various measures of genetic variation are often assumed to be correlated; however, empirical studies have estimated that molecular genetic variance correlates weakly (r = 0.22) with additive genetic variation (Reed & Frankham, 2001). Genetic diversity has been reported for invasive species compared to similar non-invasive species in specific target taxa (e.g. Pappert et al., 2000), but a general test of the genetic diversity in invasive vs. related non-invasive species is lacking in insects. Following the logic of Fig. 1, we test whether molecular genetic variation corresponds with (Fig. 1A) or promotes invasive success (causation hypothesis, Fig. 1B); however, we acknowledge that molecular genetic variation may not be useful for inferring or predicting processes of adaptation (except in the individual gene investigated), as with additive genetic variation.
There is great interest in how newly introduced non-native species, often with reduced genetic variation in comparison with variation in the native species range, can adapt so rapidly in new environments (Allendorf & Lundquist, 2003; Roman & Darling, 2007; Schrieber & Lachmuth, 2016), but there are few studies that have synthesized the extent of this reduction in genetic variation in insects. Molecular genetic data are useful to infer effects of more neutral factors including population size, gene flow, and population structure (Reed & Frankham, 2001). The Dlugosch & Parker (2008) synthesis of the literature included investigations of allelic diversity in nuclear genes for 13 invasive insect species; they concluded reduced allelic diversity and heterozygosity in introduced populations as compared to the native range populations. However, there have been no large studies on population genetic trends in mitochondrial loci between native and introduced ranges in insects that applied consistent methods across all comparisons. To address this gap, we test whether there is a general trend of reduced genetic diversity in the invaded vs. native range of invasive insect species that have invaded North America from Europe (consequence hypothesis, Fig. 1C).
The two measures we examine here, molecular evolutionary rates and genetic diversity, are not directly related and could exhibit different trends. An elevated mutation rate alone could lead to an increase in both intra- and interspecific variation. However, intraspecific genetic diversity and interspecific lineage rates often do not correspond, due to their relationship with effective population size (which affects the rate of mutational fixation), and instead they can be inversely related (Fujisawa et al., 2015). As well, genetic diversity can be increased through other avenues such as repeated introductions and hybridization (Roman & Darling, 2007). Thus, we test both interspecific rates and intraspecific diversity to investigate whether either corresponds with invasion potential and success.
Study design
We use a widely available region of the cytochrome c oxidase subunit I gene (COI) to test 35 European-to-North American invasive insects for rates of molecular evolution as compared to related non-invasive species from the closest available sister clade. Firstly, we test the rates of molecular evolution in the invasive species as a whole (all geographic locations) vs. related sister clade species (Fig. 1A). To differentiate cause/association vs. consequence in any observed trends in relative molecular rates (Fig. 1), we examine whether members of the invasive species occupying the native range (Europe) have higher molecular evolutionary rates than closely related non-invasive species, which would suggest a causative influence or an association of rates with invasion (Fig. 1B, cause/association). To test for an effect of invasion on rates (consequence hypothesis), we observe whether invasive lineages in the invaded range have higher rates than those in the native range (Fig. 1C). Similar to the tests on molecular evolutionary rates (Fig. 1), we also test population genetic diversity for 32 of those invasive insect species, examining the genetic diversity in invasive species (whole range) vs. sister clade species (as in Fig. 1A, correlation), genetic diversity in the invasive species' native ranges vs. sister clade species (as in Fig. 1B, causation/association), and genetic diversity within invasive species in their invaded vs. native range (as in Fig. 1C, consequence).
Invasive insect data acquisition and data verification
A list of European insects invasive in North America was obtained from the Canadian Wildlife Federation website (http://cwffcf.org/en/; download performed May 2015). A literature review using a Web of Science search (criteria in S1-1) was conducted in May 2016 to support the selection of target species as invasive in North America, based on peer-reviewed research (references in S1-1). Using the Barcode of Life Data (BOLD) Systems (Ratnasingham & Hebert, 2007), we verified that there were COI sequence data available for each invasive insect species of interest (referred to here as "target" species) by obtaining all Barcode Index Numbers (BINs) associated with the Latin names of the target species. A Barcode Index Number (BIN) references a group of nucleotide sequences of the COI-5P region, which represent species-like units, based on the Refined Single Linkage molecular clustering method (Ratnasingham & Hebert, 2013). BINs associated with our target species were validated for correspondence with our target species: first, BINs were used if the majority of records in the BIN belonged to the target species or a synonymized name. When a single morphological species name was prevalent in multiple BINs, indicating potential cryptic diversity within the named species, all of the obtained BINs were included in analyses (this occurred in 4 of our 35 tested invasive species; see S1-3).
Using the BOLD system, we constructed preliminary phenograms using the neighbour-joining (NJ) method with Kimura-2-parameter (K2P) distances, using all publicly available sequence data on BOLD (minimum 400 nucleotides in length, no flags or stop codons) for the entire genus containing each of our identified target species. After visual inspection, if the resulting NJ phenogram did not contain four successive deeper nodes from the target species lineage, the genus group was not used, and the process was repeated for all sequences within the subfamily or family containing a target species. Using the appropriate genus or family name, we then downloaded all available sequence data from BOLD using the BOLD public data API.
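The preliminary NJ phenograms were built in the BOLD interface; the sketch below shows the equivalent computation in R with the ape package, assuming an already-aligned FASTA file ('alignment.fasta' is a hypothetical filename, not one from the study).

```r
## Hedged sketch: an NJ phenogram from K2P distances with the 'ape'
## package, analogous to the BOLD preliminary trees. 'alignment.fasta'
## is a hypothetical input file of aligned COI sequences.
library(ape)

aln  <- read.dna("alignment.fasta", format = "fasta")
d    <- dist.dna(aln, model = "K80",        # K80 = Kimura 2-parameter
                 pairwise.deletion = TRUE)
tree <- nj(d)                               # neighbour-joining phenogram
plot(tree, cex = 0.5)
```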
Each genus- or family-level data set was globally aligned in MAFFT ver. 7 (Katoh & Standley, 2013), followed by trimming and verifying the amino acid alignment by eye using MEGA6 (Tamura et al., 2013). The multiple sequence alignment (MSA) was then reduced by eliminating sequences exhibiting ≥ 98% similarity using ElimDupes (https://hcv.lanl.gov/content/sequence/ELIMDUPES/elimdupes.html) to reduce computational demands and facilitate maximum likelihood phylogenetic analysis. In two alignments that had fewer and more closely related sequences (genus-level alignments), the criterion was changed to ≥ 99% similarity in order to keep all BINs in the alignment. Model testing was completed using MEGA6 on the generic or sub/family alignments, and the model with the lowest Bayesian Information Criterion (BIC) score was selected (S1-2). Maximum likelihood (ML) trees were constructed using MEGA6, using the best-fit model of nucleotide substitution and 1000 bootstrap pseudoreplicates to indicate node support.
For each genus or subfamily/family containing a target invasive species, sister lineages to the target species were identified using the ML tree. If the node connecting the target and sister clade had a bootstrap support value of 70% or greater, the 2nd-branching lineage to the target + sister clade was used as an outgroup. If a bootstrap support value of less than 70% was present at this node, then the 3rd-branching lineage from the target + sister clade was used as an outgroup. The 3rd-branching outgroup to the target species was selected in cases of low bootstrap values of closer relatives in order to provide confidence that the outgroups were indeed phylogenetically outside the ingroups; otherwise, incorrect conclusions could be drawn about the relative molecular evolutionary rates between the target and sister lineages. If the ML tree did not contain the appropriate number of branching outgroups, a new sequence download and ML tree construction was performed using the next higher taxonomic level (for example, moving from a genus download to a subfamily download).
BINs associated with sequences in the sister lineage were obtained. These BINs were checked to ensure that they contained individuals bearing a Linnaean taxonomic identification to species level on the BOLD system. The sister clade BIN species names were then used to complete a Web of Science search to check for evidence of invasiveness, using the same process and criteria used to check the target species. Sister clade BINs were removed from analysis if literature evidence was found of introduction and establishment in non-native ranges, as we aim to compare known invasive target species to sister clade species not known to be invasive. We acknowledge that some of the sister clade species could have invasive tendencies not yet reported in the literature or may not have had dispersal opportunities to non-native regions; however, the target species are well-known invasive species, and we expect that they differ, on average, in some element of invasiveness from their sister clade species.
For the target and sister clade BINs selected above from each genus or subfamily/family group ML tree, all publicly available COI sequences were downloaded from BOLD. BINs were used to download sequence data, as opposed to downloading based on taxonomic identifications, to include as many sequences as possible, including those within these BINs that currently lack low-level taxonomic identifications. The outgroup sequences used to construct the ML trees were added to the alignments with the target and sister downloaded sequences, and each sequence set was aligned and trimmed as above. Alignments were verified to be in reading frame, and alignment lengths were all a multiple of three for later analysis. To ensure that high-quality and consistent data were present for further analysis, sequences with greater than 1% unknown nucleotides (N's and/or gaps) were removed by a Perl script (S2-6). Unlike above, these alignments were not reduced by eliminating identical or similar sequences, as all target and sister sequences were necessary for further downstream analyses. These versions of the alignments (hereafter called 'target sequence alignments', identified by the invasive species name, e.g. Yponomeuta malinellus) were used in "Genetic diversity analysis" further below.
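The original filtering step was a Perl script (S2-6, not reproduced here); an equivalent R sketch, assuming a DNAbin alignment object such as the one read above, might look like this.

```r
## Drop sequences with > 1% unknown characters (Ns and/or gaps),
## mirroring the Perl filtering step (S2-6). 'aln' is assumed to be a
## DNAbin alignment matrix, e.g. from read.dna() above.
library(ape)

keep <- apply(as.character(aln), 1, function(s) {
  mean(s %in% c("n", "N", "-", "?")) <= 0.01  # share of ambiguous sites
})
aln_clean <- aln[keep, ]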
Relative rates analysis
Three separate analyses (North American regional, European regional, and total) were conducted using each of the 'target sequence alignments'. The first two analyses used geographic regions of collection (North America, Europe) to conduct independent analyses of relative rates of molecular evolution. This enabled comparisons between geographic regions. To accomplish this, the sister clade sequences were reduced to unique sequences in R v.3.2.4 (R Core Team, 2016) for North America and Europe separately (all R scripts provided in S2-1 to 5). The target sequences were similarly reduced to unique sequences by geographic region for conducting a rates analysis for the species by region without duplicate sequences. The third analysis was a total-data analysis without considering geographic region. The 'target sequence alignments' were reduced to unique sequences for the entire alignment and used for further analyses to compute molecular evolutionary rates using all available data. Species trees were assembled as input for rates analysis and included a representative target sequence, sister sequence, and outgroup sequence (Fig. 2). A single outgroup was used at a time, as the use of multiple outgroups does not significantly improve rate estimations in relative rate tests (Robinson et al., 1998). A balanced (1 vs. 1) choice of target and sister sequence was used to remove bias arising from the node-density effect (Robinson et al., 1998). The program baseml in the package PAML version 4.7 (Yang, 2007) was used to estimate the length of each branch on the 3-species trees. For each of the three analyses (North American, European, and total), all possible combinations of target sequences, sister sequences, and outgroup sequences were permuted for rates analyses using R. The nucleotide model results previously obtained from MEGA6 for the taxon-specific ML trees were used for the rates analysis on the target sequence alignments; the best (lowest BIC score) model without G or I parameters was used for the PAML analysis (S1-2). Non-synonymous substitution (dN) rates, synonymous substitution (dS) rates, and the ratio between the two (dN/dS) were similarly obtained for each target and sister lineage using the codeml program in PAML.
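The permutation step can be sketched in a few lines of R; the sequence ID vectors below are hypothetical stand-ins (the study's own scripts are in S2-1 to 5), and the Newick strings are the 3-taxon input trees that baseml/codeml would consume.

```r
## Sketch of the permutation step: enumerate every (target, sister,
## outgroup) sequence combination and write a rooted 3-taxon Newick
## tree for PAML. The ID vectors are hypothetical.
target_ids   <- c("T1", "T2", "T3")
sister_ids   <- c("S1", "S2")
outgroup_ids <- c("O1")

combos <- expand.grid(target = target_ids, sister = sister_ids,
                      outgroup = outgroup_ids, stringsAsFactors = FALSE)
nrow(combos)  # all 3-sequence trees; here 3 x 2 x 1 = 6

trees <- sprintf("((%s,%s),%s);", combos$target, combos$sister,
                 combos$outgroup)
writeLines(trees[1])  # e.g. "((T1,S1),O1);"
```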
Relative molecular rates are uncertain for recently diverged lineages. While the rate results can be trimmed for uncertain estimates (e.g. Welch & Waxman, 2008), in this analysis all of our contrasts are on short time frames, and most branch lengths are necessarily short (but greater than 2% sequence divergence in all cases). In cases where species sampling within a genus was lower, the branch lengths between contrasts are expected to be longer and the relative rates less uncertain; however, the biological character of interest (invasiveness) still only occurs at the species level, and thus comparisons of more highly divergent lineages more poorly represent a contrast in character state. Due to the recent divergences of all comparisons (i.e. closely related species within genera), we elected not to eliminate contrasts with less certain estimates or to weight results based on divergence. From the rate estimates obtained from the PAML program, relative rates between target and sister clade species were calculated. Firstly, for each analysis, the larger divided by the smaller substitution rate (minus one, to centre final values around 0) was examined. These relative rates were signed based on direction, whereby positive indicated a higher target species rate than sister rate, and negative was assigned for a higher sister clade species rate than target species rate. Relative dN rates, dS rates, and dN/dS ratios were obtained by a different method, in which we compressed the relative rates between −1 and 1 by taking '1 − smaller/larger' and signing based on direction (target rate > sister rate is positive, sister rate > target rate is negative) (as in Wright et al., 2006; Mitterboeck et al., 2016), in order to account for low values without displaying extreme results.
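The two signed transforms described above translate directly into R; the function names are ours, not the study's.

```r
## The two signed relative-rate transforms described above (function
## names are ours). 'target' and 'sister' are rate estimates from the
## same 3-taxon tree.
rel_ratio_minus1 <- function(target, sister) {
  r <- max(target, sister) / min(target, sister) - 1  # centred at 0
  if (target >= sister) r else -r                     # sign by direction
}

rel_compressed <- function(target, sister) {
  r <- 1 - min(target, sister) / max(target, sister)  # in [0, 1)
  if (target >= sister) r else -r                     # in (-1, 1)
}

rel_ratio_minus1(0.02, 0.01)  #  1.0 (target rate twice the sister's)
rel_compressed(0.01, 0.02)    # -0.5 (sister rate twice the target's)
```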
To determine whether target vs. sister clade species exhibited different relative rates, binomial and Wilcoxon signed-rank tests were performed using the medians of the relative substitution rates, dN rates, dS rates, and dN/dS ratios across the 35 comparisons for the total data set. This analysis was repeated with the removal of 8 species that were purposefully introduced into North America (indicated in S1-13), since we have no hypothesis for the molecular evolutionary rates of these species as a causative factor in invasion potential. Secondly, we compared the relative position of the North American and European population medians for all comparisons having both sets of data points; this was performed using a binomial and a Wilcoxon signed-rank test on their relative position and difference; e.g. if the North American median was a higher value than the European, then the result was signed as positive. The relative breadths of range in the relative rates between regions were also compared, using the differences between regions, by two-tailed binomial and Wilcoxon signed-rank tests.
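In R, these two tests reduce to single calls on the vector of signed medians; the data below are a random stand-in purely for illustration.

```r
## Sign tests on the per-comparison signed medians (hypothetical data):
## a two-tailed binomial test on the count of positive medians, and a
## Wilcoxon signed-rank test on the medians themselves.
set.seed(2)
med <- runif(35, -0.6, 0.7)          # stand-in for 35 signed medians

binom.test(sum(med > 0), sum(med != 0), p = 0.5)  # count-based test
wilcox.test(med, mu = 0)                          # magnitude-aware test
```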
Genetic diversity analysis
The 'target sequence alignments' were used to calculate haplotype numbers, haplotype diversity, and nucleotide diversity in DnaSP version 5.10 (Librado & Rozas, 2009). Haplotypes were recognised as sequences differing in two or more nucleotides, not counting unknown nucleotides. BINs were used for the analysis, though most often the Linnaean species name matched a single BIN. For each set of sequences from invasive BINs, diversity statistics were calculated for the total set of sequences, the European sequences, and the North American sequences where there were two or more sequences available per BIN and geographic location. Two-tailed binomial tests were performed on the number of sequences, haplotype numbers, and diversity measures for North American vs. European sequences in each invasive species BIN to test the hypothesis of consequence (Fig. 1C); two-tailed Wilcoxon signed-rank tests were similarly performed after taking the 1 − smaller/larger value and signing based on direction (e.g. Europe greater is positive, North America greater is negative). We additionally tested whether the species that were purposefully introduced into North America represent a difference in their proportion between North American > European and European > North American genetic diversity results, using a Fisher's exact test in R.
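The diversity metrics themselves follow directly from the standard Nei definitions, so they can be cross-checked against the DnaSP output with a few lines of R; 'seqs' below is a tiny hypothetical alignment.

```r
## Haplotype and nucleotide diversity from first principles (standard
## Nei definitions), as a cross-check of DnaSP. 'seqs' is a hypothetical
## character matrix (rows = sequences, columns = aligned sites).
seqs <- rbind(c("a","c","g","t"),
              c("a","c","g","a"),
              c("a","c","g","a"),
              c("t","c","g","a"))

n <- nrow(seqs); L <- ncol(seqs)

## Haplotype diversity: h = n/(n-1) * (1 - sum(p_i^2))
hap <- apply(seqs, 1, paste, collapse = "")
p   <- table(hap) / n
h   <- n / (n - 1) * (1 - sum(p^2))

## Nucleotide diversity: mean pairwise differences per site
pairs <- combn(n, 2)
diffs <- apply(pairs, 2, function(ij) sum(seqs[ij[1], ] != seqs[ij[2], ]))
pi    <- mean(diffs) / L

c(haplotypes = length(p), hap_div = h, nuc_div = pi)
# haplotypes = 3, hap_div ~ 0.83, nuc_div = 0.25 for this toy alignment
```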
Similarly, we calculated the number of haplotypes, the haplotype diversity, and the nucleotide diversity for the sets of sequences in each associated sister clade BIN and compared these metrics to the results obtained for each invasive species BIN (corresponding to testing for a general association between invasive species and molecular genetic diversity, Fig. 1A). We repeated the diversity measures for European vs. sister clade BIN sequences only, since the European geographic region relates to the hypothesis of cause (Fig. 1B). Binomial and Wilcoxon signed-rank tests were performed as described above. Since there can be multiple invasive or sister clade BINs within a paired comparison, we compared each invasive species BIN to all possible sister clade BINs (using 1 − smaller/larger and signing) and summarised those results per invasive BIN by the median. We then summarised results for multiple BINs within a single invasive species name by taking the median of the genetic diversity metric from the various sister pairings. We repeated all summaries and Wilcoxon signed-rank tests using only those comparisons with higher sample size (6 or more sequences in each BIN or region), since the subsampling of 6 sequences from a population allows the distinction between high and low population estimates of haplotype and nucleotide diversity in the COI gene in animals (Goodall-Copestake et al., 2012).
Relative substitution rates
The relative rates for all invasive-sister pairs, along with their medians, are plotted in Fig. 3A. In total, 405,765 combinations of invasive, sister, and outgroup sequences were analysed for relative substitution rates. In 19 out of 35 invasive species analysed, the medians of the substitution rates were positive, indicating that the invasive rate was higher than the sister clade species rate (binomial test p_b = 0.74; Wilcoxon signed-rank test p_W = 0.23) (Fig. 3A, blue/darkest bars). This result corresponds to the question in Fig. 1A (observation). The median of the rate medians was +0.11, signifying that, in half or more of the invasive species tested, the invasive species molecular rate was at least 11% higher (by median) than the sister rate. The results were similar when the purposefully introduced species were removed (16 out of 27 medians positive, p_b = 0.44, p_W = 0.16).
Similarly, the medians of the European sequences did not differ significantly from the null expectation. This result corresponds to the question in Fig. 1B (cause/association). Neither the North American nor the European comparisons had more often higher median relative rates than the other region for comparisons possessing data from both regions (14 Europe higher, 16 North America higher; p_b = 0.88, p_W = 1.0). This result corresponds to the question in Fig. 1C (consequence). The range in relative substitution rates was larger for European data points than North American data points, with 22 out of the 31 comparisons with data from both regions having a larger range of relative substitution rates for the European data (p_b = 0.029, p_W = 0.0014).
Thirty-two of 35 comparisons had both positive and negative data points for the total data set of permuted sequence groupings. This signifies that the varying choice of a single sequence each from a target, closest sister, and outgroup BIN can give different directional results, i.e. that the invasive rate was greater than the sister, or vice versa. However, of the 32 cases that span zero, 20 cross zero only by a tail quartile of the data. In those comparisons, therefore, fewer than 25% of the data points would yield an opposite conclusion.
Relative dN/dS ratios, dN rates, and dS rates
The relative dN rates, dS rates, and dN/dS ratios for all invasive-sister pairs, along with their medians, are plotted in Fig. 3B. There was no significant difference in the frequency of higher rates between invasive vs. sister clade species. Neither the North American nor the European region had significantly higher dN/dS ratios, dN rates, or dS rates than the other region (p values in S1-9).
Genetic diversity by region and in invasive vs. sister clade species
The number of invasive species sequences available was similar between geographic regions (Europe and North America), with more sequences in the European region in half (14 of 28) of the invasive species (p_b = 1.0, p_W = 0.57). However, the number of haplotypes (21 of 26; p_b = 0.0025, p_W = 0.0033), the haplotype diversity (20 of 26; p_b = 0.0094, p_W = 0.014), and the nucleotide diversity (19 of 26; p_b = 0.029, p_W = 0.012) were each significantly more often higher in the European region than in North America by both binomial and Wilcoxon signed-rank tests (Fig. 4, corresponding to the question in Fig. 1C). These results exclude two cases of equal measures. The directional results were relatively consistent among the metrics of genetic diversity, with species exhibiting higher diversity in Europe 81% of the time for haplotype number, 77% of the time for haplotype diversity, and 73% of the time for nucleotide diversity. This finding was similar when examining only pairs represented by 6 or more sequences from each region (full results in S1-11). The species displaying North American > European genetic diversities represented a greater proportion of purposefully introduced species than the species displaying European > North American genetic diversities (3 of 6 vs. 1 of 19 cases, p_Fisher's = 0.031, for the 25 cases where the haplotype and nucleotide diversity were in the same direction).
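The Fisher's exact test just reported can be reconstructed from the counts given; the 2x2 layout below is our reading of those counts (3 of 6 vs. 1 of 19 purposefully introduced species), and it reproduces the stated p-value.

```r
## Fisher's exact test on the counts reported above (our reading of
## the 2x2 layout): rows = diversity direction, columns = purposefully
## introduced vs. not.
tab <- matrix(c(3, 3,    # NA > EU diversity: 3 purposeful, 3 other
                1, 18),  # EU > NA diversity: 1 purposeful, 18 other
              nrow = 2, byrow = TRUE,
              dimnames = list(c("NA>EU", "EU>NA"),
                              c("purposeful", "other")))
fisher.test(tab)  # two-sided p ~ 0.031, matching the reported value
```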
Invasive species had more sequences and haplotypes (by medians) than the sister clade species; in 28 of 32 comparisons the invasive species had more sequences (p_b = 1.9 × 10^-5, p_W = 5.0 × 10^-7), and in 27 of 32 comparisons the invasive species had more haplotypes (p_b = 0.00011, p_W = 0.00071). Nevertheless, the haplotype diversity and nucleotide diversity metrics, which account for the number of sequences, did not differ significantly between invasive and sister clade BINs (corresponding to the question in Fig. 1A) (haplotype diversity: 14 of 32; p_b = 0.60, p_W = 0.29; nucleotide diversity: 13 of 32 with the invasive higher than the sister; p_b = 0.38, p_W = 0.14). When considering only comparisons with higher sequencing sample size (6+ per species), the number of haplotypes did not differ significantly; instead, the haplotype diversity (7 of 27; p_b = 0.019, p_W = 0.0036) and nucleotide diversity (7 of 27; p_b = 0.019, p_W = 0.00074) were significantly more often greater in the sister clade species in both the binomial and Wilcoxon signed-rank tests. When considering only the invasive species' European sequences vs. the sister clade species (corresponding to the question in Fig. 1B), the number of sequences was again higher in the invasive species, and again the haplotype and nucleotide diversity did not differ significantly. Similar to the results when including data from all regions, for those pairs with 6+ sequences/species, the haplotype diversity (7 of 23, p_W = 0.017) and nucleotide diversity (8 of 23, p_W = 0.013) of the sister clade species were significantly more often higher by the Wilcoxon signed-rank test as compared to the invasive species (S1-11).
Relative substitution rates and interpretation of cause or consequence
Invasive species did not display higher molecular substitution rates more often than closely related non-invasive species, as seen in the medians of the relative rates (Fig. 3A). These results provide no support for the idea that invasion occurs as a consequence of faster rates of molecular evolution (Fig. 1A, represented by Fig. 3A 'Total'; Fig. 1B, by the Fig. 3A European region). The North American sequences also did not have consistently higher relative rates than the European sequences (Fig. 1C, represented by the relative position of the European [EU] and North American [NA] region bars in Fig. 3A), which suggests rates were not increased as a consequence of invasion.
The short evolutionary timeframes investigated between target and sister clade species (a few million years) have not provided many opportunities for differential substitution accumulation. Because of this, it is difficult to distinguish whether the null result in molecular rates is a general trend in species-level contrasts or due to uncertainty in the measurement of rates. A significant directional trend, however, would have suggested that rates of molecular evolution are related (through cause, association, or consequence) to invasive potential in insects on short (and likely long) timeframes. Furthermore, we elected only to explore overall trends, rather than make more subtle considerations such as estimating the degree of invasiveness for each species. Our results were a first step toward addressing the question of general molecular rates and invasive potential. Future investigations of this question through species-level contrasts may require much larger sample sizes, or investigations could include taxonomically higher clades containing invasive species to provide longer timeframes so that rate differences would be more apparent. For example, testing molecular rate differences between clades differing in their proportion of known invasive species would be an interesting avenue for a future study. Currently, this would be difficult to accomplish accurately and on a worldwide scale, as there are few comprehensive lists of invasive species (Rilov & Crooks, 2008; Clout & Williams, 2009) and none in insects (Foottit & Adler, 2009). Our use of the medians of relative rates obtained through multiple rate calculations, using all available combinations of invasive, sister, and outgroup sequences, helped to reduce variation introduced through small sample size and the stochasticity of genetic change on short time scales. The greater range in the relative rates of molecular evolution in European vs. North American populations corresponded with the greater number of haplotypes observed for the European populations.
Relative dN rates, dS rates, and dN/dS ratios

Examination of the different types of substitutions (synonymous/non-synonymous) provides more detail on the effects of selection and mutation in the COI gene of invasive vs. related species, providing a broader understanding of how COI acts both as a representative of genome-wide trends and may experience differing selection pressures itself. The relative synonymous substitution (dS) rate can serve as a proxy for relative mutation rate differences. There was no significant difference between invasive and sister clade species dS rates, suggesting that invasive species do not have a higher mutation rate, either as a causative factor in their success in invasion or as a correlate of other biological or ecological traits that could both promote invasion and impact the mutation rate. Similarly, the invasive species in the native range did not have greater dN rates than the corresponding sister clade species, suggesting that evolutionary pace (such as due to faster generation time) is not higher in the invasive species. The North American sequences did not have significantly more often higher non-synonymous rates than the European sequences. This suggests no trend in adaptive evolution in the COI gene, neither positive selection (e.g. Scott et al., 2011) nor relaxation of selective constraints (e.g. Mitterboeck & Adamowicz, 2013), in conjunction with the population becoming established in the new environment. A trend in COI related to positive selection was not necessarily expected; while metabolism and other gene types have been shown to be under positive selection in certain invasive insect species (Wang et al., 2011), evidence is lacking for selection in COI associated with invasive success. Our interpretations of selection are with respect to a single mitochondrial marker, and we acknowledge that different selective forces may be acting on additional genetic markers relevant for invasion-related traits.
Genetic diversity
The literature suggests additive genetic variance in a source population as a facilitator of successful invasion (Lee, 2002), with additive genetic variance linked weakly to molecular genetic variation (Reed & Frankham, 2001). We observed more haplotypes present in invasive species than in related sister clade species. However, the invasive species also more often had a greater number of sequences available than closely related species. Due to the nature of data collection for this study, we cannot be certain whether the available sequences are representative of the true population make-up for the included species. This difference in research effort could be a direct consequence of a species having 'invasive' status, making it more available or of higher interest to researchers. Despite these limitations of sample size, the genetic diversity results displayed the opposite trend compared to what might be expected based upon the sample size of sequences alone. When considering haplotype diversity and nucleotide diversity measures, which do account for sample size, the sister clade species typically exhibited higher diversity than the paired invasive species for the subset of comparisons that included 6 or more sequences per BIN. To truly consider whether genetic diversity influenced the propensity for invasive potential, the source population should be examined. While the sequences obtained from the native range may not represent all sequences from which the invasion was derived (due to sampling effort), they provide a point of comparison to evaluate diversity relative to the sister clade species. When considering the European region only, the pattern was similar between invasive species vs. sister clade species, with higher diversity in the latter. Thus, our data do not provide support for the assertion of genetic variation facilitating invasion, since we did not observe any significant difference in the European region of the invasive species as compared to the sister clade species. However, we used COI as a representation of molecular genetic variation; our analyses did not consider additional genes, which may be more relevant to invasion-related traits.
Genetic diversity in invaded geographic ranges, as cited in the literature, is often observed to be reduced as compared to the native range of the invasive species (e.g. Kliber & Eckert, 2005); there are, however, some exceptions to this reduced genetic diversity in the invaded range (Klobe et al., 2004). We observed more often a greater genetic diversity in the native European geographic range as compared with the invaded North American range, as seen in the number of haplotypes (81% of species), haplotype diversity (77%), and nucleotide diversity (73%). Our work included information for 28 target species having sequences from both geographic regions. There were 6 cases in which the haplotype and nucleotide diversity were higher in the invaded range, contrary to expectation. However, most (4) of these cases appear to correspond with potential explanations: in 3 cases the species were purposefully introduced, and in 3 cases the sequence availability was highly unbalanced (5 times different), with higher diversity measures corresponding with the region having the much greater sequence availability. While the species in these regions were not exhaustively sampled, the publicly available data from BOLD represent the most widespread effort to generate DNA sequence data across as many species as possible, allowing us to perform a broad-scale analysis using consistent methods. Our collated data suggest a general trend of higher intraspecific variation in the source range as compared to the invaded range for invasive insects. This result is similar to, but less consistent in direction than, the synthesis of Dlugosch & Parker (2008), in which 11 of 13 (85%) invasive insect species with various countries of origin and invasion had greater heterozygosity in the native ranges. Our species list did not overlap with that study.
Taxon representation
Our included invasive species represented 5 of 32 insect orders, and the genera and sub/families included here often contained more than one of the invasive species analysed. The most common families in our analysis, the beetle families Curculionidae and Chrysomelidae, are both species-rich, with over 60,000 and 36,000 species, respectively (Foottit & Adler, 2009). A higher number of invasive species would be expected in these groups, as compared to less speciose insect families, simply by chance. Of our included insect orders (Diptera, Hymenoptera, Coleoptera, Lepidoptera, and Hemiptera), Hemiptera has been observed to be disproportionately represented by invasives as compared to native species in North America (Yamanaka et al., 2015).
Considerations and limitations of the work
Here we used a simple measure of genetic differentiation between invasive and closely related non-invasive species. We did not test for genomic selection in various genes. While the genetics of invasion are undoubtedly more complex than simple rates of molecular evolution, the relative molecular rate is one potential correlate of invasiveness that had not been directly addressed to date.
When selecting our molecular marker for this study, we gave preference to COI due to the larger number of publicly available sequences, in terms of both the number of sequences and the breadth of taxon sampling, which would have been reduced by using a multiple-marker data set. Due to the use of a single marker, the relationships constructed may be considered gene trees rather than species trees. Given the phylogenetic uncertainty, we were cautious in our choice of outgroup (taking the 2nd- or 3rd-branching outgroup, based on node support) in order to limit the possibility of our outgroup choice being closer to the target or sister lineages, thereby influencing our analysis results. Furthermore, COI has been shown to produce fairly accurate relationships at lower taxonomic levels when compared with multi-gene trees (e.g. Wilson, 2010; Boyle & Adamowicz, 2015). We chose to collect data for species introduced from Europe into North America to reduce latitudinal differences as a confounding factor in our molecular evolutionary rates analysis. The sister clade species used in our analyses were not necessarily native to Europe, which could introduce noise due to geographic region of origin, such as potential latitudinal effects on genetic diversity or molecular rates. However, since multiple comparisons were used that included data from various individuals and various geographic collection ranges, this helps to mitigate confounding geographic issues by assessing larger trends.
Implications of the permutation method
Permuting through equal choices of individuals within a species, and through species within the same phylogenetic range, may be useful in two ways: the first is in avoiding node-density effects while still considering information from multiple terminal lineages; the second is in demonstrating how sequence or lineage choice can influence the result obtained in molecular rates studies based on simple three-taxon trees. Ninety-one percent (32 of 35) of our comparisons using all available (total) data spanned both positive and negative sides of the null result. In 63% (20 of 32) of cases, the crossing was only by a tail quartile. The choice of sequences to represent a target, sister, or outgroup species, such as with multiple sister clade BINs and outgroup sequences, can greatly influence the resulting relative rates calculations and thereby the possible interpretation of the data. Variation in rate results, introduced by species choice, was evident. However, even in the Ceutorhynchus obstrictus comparison, which had a single target BIN (6 sequences), sister clade BIN (1 sequence), and outgroup (1 sequence), the relative rates results were variable. Thus, our study suggests that, whether or not real biological differences are present, the choice of a single sequence to represent a species may not reflect the majority of the data or demonstrate the variability of the available results. Since relative rates studies may opt for a single sequence per sister lineage for analysis, to avoid the node-density effect, we suggest that the issue of sampling effects requires further attention in molecular rates research.
CONCLUSION
We have investigated the relationship between rates of molecular evolution in the COI gene, which is commonly used in identifying specimens to species, and the trait of invasiveness in insect species. No definitive trends were observed in rates; however, it remains possible that trends exist when considering longer time frames. Using permutation in the analysis of rates, we showed that variation in relative rates exists within a species or BIN, and that unique sequence choice using simple trees impacts rate estimates. Similar permutative analysis using data at various taxonomic levels, and a comparison across these levels, is suggested for future efforts. Our intraspecific diversity results also do not support the hypothesis that higher genetic variance promotes invasive potential, since we did not observe a higher genetic diversity in the invasive source population than in closely related non-invasive species. However, our results indicate that the intraspecific diversity in a mitochondrial gene within invasive species is significantly more often higher in the source geographic range than in the invaded ranges, using a large sample size of regional comparisons in insects, consistent with previous findings using nuclear markers.
Fig. 2. Comparison set-up and permutation approach for the molecular rates analysis. From the maximum likelihood trees built using COI sequences, BINs of the target invasive species, BINs belonging to the closest sister lineage, and outgroup sequences in the 2nd- or 3rd-branching lineage from the ingroup were used. One unique sequence per target and sister clade BIN, and an outgroup sequence, formed a 3-species tree used to estimate branch lengths for the invasive vs. sister lineage. All possible 3-sequence combinations were run, with the data points representing the relative invasive:sister substitution rates presented in Fig. 3.
Fig. 3. Thirty-five comparisons of relative molecular evolutionary rates between related invasive and non-invasive insect species. The distribution of data points representing the relative invasive : sister molecular rates is shown in panel A. Positive relative rates represent data points for which the invasive rate > sister rate, and negative where the sister rate > invasive rate. Yellow (light) bars indicate results for sequences of individuals that were collected in North America (NA); green (medium shading) indicates European (EU) individuals; blue (dark shading) bars indicate the total (TL) data points from all regions and those without any location information. The thick vertical lines on the boxes represent the median relative rate. Phylogenetic relationships shown to the left are based on taxonomy, as well as molecular phylogenetic relationships from Hunt et al. (2007), Regier et al. (2009), McKenna et al. (2009), Wiegmann et al. (2011), and Misof et al. (2014). Panel B shows the distribution of the medians of relative dN rates, dS rates, and dN/dS ratios across the 35 comparisons, where each data point in the boxplot is a median of all relative rates belonging to a single invasive vs. sister comparison. The number of positive (invasive > sister) and negative (sister > invasive) medians is given above and below each boxplot, respectively. Zero values are included on the graph but not tallied. Species purposefully introduced into North America are marked with '^'.
Fig. 4. Haplotype diversity and nucleotide diversity estimates between populations in the native range (Europe [EU]) and the invaded range (North America [NA]) in twenty-eight invasive insect species. The European population had greater haplotype diversity (orange bars above the x axis) in 20 of 26 cases (p_b = 0.0094, p_W = 0.014) and greater nucleotide diversity (purple bars above the x axis) in 19 of 26 cases (p_b = 0.029, p_W = 0.012), with two cases of equal (0) diversity measures, as compared to the population in North America (bars below the x axis). The diversity measures were made relative by taking 1 − (smaller/larger) and signing based on direction (EU > NA signed as positive, NA > EU signed as negative). The bars are ordered by approximate date of introduction of the species (earliest to latest, with '~' indicating approximate); the lighter bar colouration indicates that there were fewer than 6 sequences available for one of the regions (5 sets of bars, and the 2nd zero result); the dashed bars indicate cases where the number of sequences collected between regions differed by 5 or more times (6 sets of bars). Species purposefully introduced into North America are marked with '^'.
ACKNOWLEDGEMENTS.
This work was supported by the Natural Sciences and Engineering Research Council of Canada (Alexander Graham Bell Canada Graduate Scholarship to T.F.M., Discovery Grants 386591-2010 and 06199-2016 to S.J.A., and support to S.J.A. and R.G.Y. through an NSERC Strategic Network, entitled the Canadian Aquatic Invasive Species Network II (CAISN II)), the University of Guelph (Tri-council Scholarship to T.F.M.), and a doctoral scholarship from the Consejo Nacional de Ciencia y Tecnología (CONACYT-315757) to T.L.Q. We thank the BOLD team for the development of this resource and the many researchers, including the staff of the Centre for Biodiversity Genomics, who contributed sequence data to public databases. R.G.Y. and T.F.M. designed the study and wrote the manuscript with input from S.J.A. and T.L.Q. R.G.Y., T.F.M., and T.L.Q. collected the data. T.F.M. and R.G.Y. performed the scripting and analyses. | 2019-04-03T13:09:15.201Z | 2018-12-27T00:00:00.000 | {
"year": 2018,
"sha1": "c6c678a332b727d2eb2f6debdb411131f7118779",
"oa_license": "CCBY",
"oa_url": "http://www.eje.cz/doi/10.14411/eje.2018.071.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c6c678a332b727d2eb2f6debdb411131f7118779",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
6480287 | pes2o/s2orc | v3-fos-license | Response of Soil C and N, Dissolved Organic C and N, and Inorganic N to Short-Term Experimental Warming in an Alpine Meadow on the Tibetan Plateau
Although alpine meadows of Tibet are expected to be strongly affected by climatic warming, it remains unclear how soil organic C (SOC), total N (TN), ammonium N (NH4+-N), nitrate N (NO3−-N), and dissolved organic C (DOC) and N (DON) respond to warming. This study aims to investigate the responses of these C and N pools to short-term experimental warming in an alpine meadow of Tibet. A warming experiment using open top chambers was conducted in an alpine meadow at three elevations (i.e., a low (4313 m), mid- (4513 m), and high (4693 m) elevation) in May 2010. Topsoil (0–20 cm depth) samples were collected in July–September 2011. Experimental warming increased soil temperature by ~1–1.4°C but decreased soil moisture by ~0.04 m3 m−3. Experimental warming had little effect on SOC, TN, DOC, and DON, which may be related to the low warming magnitude, the short period of the warming treatment, and warming-induced soil drying decreasing soil microbial activity. Experimental warming significantly decreased inorganic N at the two lower elevations, but had a negligible effect at the high elevation. Our findings suggest that the effects of short-term experimental warming on SOC, TN, and dissolved organic matter were insignificant, with only the inorganic N forms affected.
Introduction
Soil organic C (SOC) and total N (TN) are very important C and N pools in terrestrial ecosystems [1,2]. As components of the labile C and N pools in soils, dissolved organic C (DOC) and N (DON) and soil ammonium and nitrate N (NH4+-N and NO3−-N) play crucial roles in the biogeochemistry of C and N and in nutrient transformation [3–5]. In the context of climatic warming, how SOC, TN, DOC, DON, NH4+-N, and NO3−-N respond is vital to global C and N cycling [1,2]. However, inconsistent results on the responses of these C and N pools to climatic warming have been observed with respect to vegetation types and initial soil characteristics [2,3,6–14]. For example, He et al. [2] demonstrated that six-year warming (~1.4°C increase of 10 cm soil temperature) significantly decreased soil C by 129.3 g m−2 in a temperate steppe of Inner Mongolia. In contrast, Li et al. [7] found that two-year warming significantly increased SOC in an alpine meadow (~2.1°C increase of air temperature) but significantly reduced TN in an alpine swamp meadow (~2.3°C increase of air temperature) on the Tibetan Plateau. Hagedorn et al. [13] indicated that one-growing-season warming (~4°C increase of 5 cm soil temperature) did not significantly influence DOC. Song et al. [1] pointed out that six-year warming (~1.2°C increase of 10 cm soil temperature) significantly reduced DOC in a temperate steppe in Inner Mongolia. Biasi et al. [15] indicated that two-year warming (~0.9°C increase of 5 cm soil temperature) did not have obvious effects on DON, NH4+-N, NO3−-N, and Nmin in a lichen-rich dwarf shrub tundra in Siberia. Bai et al. [14] stated that experimental warming (~0.6–6.7°C in soil temperature) had a significant positive effect on Nmin but not on TN across all biomes. Therefore, how climatic warming acts on C and N cycling still remains unclear.
More than 70% of the Tibetan Plateau is covered with grasslands [16]. The alpine grasslands of this Plateau are among the systems most sensitive to global change [17,18]. In alpine grasslands, understanding the responses of SOC, DOC, TN, DON, NH4+-N, and NO3−-N to climatic warming is crucial for predicting future changes in soil fertility and C sequestration. The alpine meadow is one of the most typical grassland types on the Tibetan Plateau subjected to climatic warming [19]. Information on how these C and N pools respond to climatic warming along an elevation gradient is scarce on the Tibetan Plateau. Here we set up a warming experiment in an alpine meadow at three elevations (i.e., 4313 m, 4513 m, and 4693 m) on the Northern Tibetan Plateau.
The main objective was to investigate the effects of short-term experimental warming on SOC, TN, DOC, DON, NH4+-N, and NO3−-N. Our previous studies indicated that short-term experimental warming did not affect soil microbial biomass [20] and that soil microbial activity regulated the balances of soil C and N pools in the alpine meadow [21]. We therefore hypothesized that experimental warming may not affect these C and N pools in this study.
Study Area, Experimental Design, and Soil Sampling.
A detailed description of the study area, the warming experimental design, the measurements of microclimate factors (including soil temperature and soil moisture), and the soil sampling is given in Fu et al. [20,22]. Briefly, three alpine meadow sites were established at three elevations (i.e., a low (4313 m), mid- (4513 m), and high (4693 m) elevation, at ~30°N). Annual mean air temperature and precipitation are 1.3°C and ~476.8 mm, respectively [20,21]. The vegetation is Kobresia-dominated alpine meadow and roots are mainly concentrated in the topsoil layer (0–20 cm) [21,22]. The soil is classified as sandy loam, with pH of 6.0–6.7, organic matter of 0.3–11.2%, and total N of 0.03–0.49% [20,22].
Open top chambers (OTCs, 3 mm thick polycarbonate) were used to enhance temperature [22,23]. The bottom diameter, top diameter, and height of the OTCs were 1.45 m, 1.00 m, and 0.40 m, respectively [20,22]. For each site, four OTCs and their paired control plots (1 m × 1 m) were randomly established in May 2010. There was ~3 m distance between plots.
Daily mean soil temperature (Ts) during the study period of July–September 2011 inside the OTCs increased by 1.26°C, 0.98°C, and 1.37°C at the low, mid-, and high elevation, respectively, compared to control plots [20]. In contrast, experimental warming decreased daily mean soil moisture (SM) by 0.04 m3 m−3 at all sites [20]. Daily mean Ts decreased with increasing elevation from the low to the high elevation [20].
We collected topsoil samples (0–20 cm depth) inside each plot using a probe 3.0 cm in diameter on July 7, August 9, and September 10, 2011 [20]. Five soil subsamples were randomly sampled and composited into one soil sample for each plot [20]. Subsamples of the fresh soil were used to measure DOC, DON, NH4+-N, and NO3−-N, and other subsamples of the fresh soil were air-dried for the measurements of SOC and TN.
Soil Analysis.
A more detailed description of the measurements of soil inorganic N (Nmin, i.e., the sum of NH4+-N and NO3−-N), DON, and DOC can be found in Fu et al. [21]. Briefly, soil inorganic N in a 20 g fresh soil sample was extracted with 100 mL K2SO4 solution, filtered through a 0.45 μm membrane, and analyzed on a LACHAT Quikchem Automated Ion Analyzer. Dissolved organic C and dissolved total N (DTN) in another 20 g fresh soil sample were extracted with 100 mL ultrapure water and filtered through a 0.45 μm membrane. The extractable organic C and total N concentrations in the ultrapure water extracts were measured using a Liqui TOC II elementar analyzer (Elementar Liqui TOC, Elementar Co., Hanau, Germany) and a UV-1700 PharmaSpec visible spectrophotometer (220 nm and 275 nm), respectively. We also analyzed dissolved inorganic N (DIN) in the ultrapure water extracts on a LACHAT Quikchem Automated Ion Analyzer. Then DON was calculated as the difference between DTN and DIN. The potassium dichromate method was used to determine SOC [24]. Soil TN was measured on a CN analyzer (Elementar Variomax CN). Soil microbial biomass C (MBC) and N (MBN) data were obtained from Fu et al. [20].
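As a minimal illustration of the derived quantities defined above (the concentrations are made up and the variable names are ours):

# Derived N pools as defined in the text: Nmin = NH4+-N + NO3--N, DON = DTN - DIN.
nh4_n = 4.2   # NH4+-N, mg N kg^-1 dry soil (hypothetical value)
no3_n = 2.9   # NO3--N, mg N kg^-1 dry soil (hypothetical value)
dtn = 15.0    # dissolved total N in the water extract (hypothetical value)
din = 5.5     # dissolved inorganic N in the same extract (hypothetical value)

n_min = nh4_n + no3_n
don = dtn - din
print(f"Nmin = {n_min:.1f}, DON = {don:.1f} mg N kg^-1")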
Statistical Analysis.
In order to examine the elevation effect, repeated-measures ANOVA with experimental warming and elevation as the between-subject factors and with sampling date as the within-subject factor was performed for each soil property (i.e., SOC, TN, DOC, DON, the ratio of DOC to DON (DOC/DON), NH4+-N, NO3−-N, and Nmin). At each site, repeated-measures ANOVA with experimental warming (i.e., OTCs versus control) as the between-subject factor and with sampling date as the within-subject factor was conducted for each soil property. Single-factor linear regressions were performed between soil properties and Ts, SM, MBC, and MBN. In addition, multiple stepwise regression analyses were conducted for soil properties to examine the relative importance of Ts, SM, MBC, and MBN in affecting the variations of soil properties. All data were examined for normality and homogeneity before analysis, and natural logarithm transformations were made if necessary. The level of significance was p < 0.05.

Experimental warming had little effect on SOC, TN, DOC, and DON (Table 1). In contrast, the sensitivity of Nmin to experimental warming increased with increasing elevation (Table 1). In detail, experimental warming significantly decreased Nmin by 29.2% and 23.5% at the low and mid-elevation, NO3−-N by 36.4% and 29.5% at the low and mid-elevation, and NH4+-N by 16.7% at the mid-elevation across all three sampling dates, respectively. In contrast, experimental warming had little effect on NO3−-N and Nmin at the high elevation. The multiple stepwise regression analyses are listed in Table 3. Both SOC and TN were simultaneously affected by MBC and Ts, whereas MBC explained more of the variation of these two soil properties than Ts. Only MBC was included in the multiple regression equations for DOC, DON, and NH4+-N/NO3−-N, while only MBN was included in the regression equation for NO3−-N. Soil microbial biomass C explained the variation of NH4+-N more than SM. Both MBC and MBN were simultaneously and positively correlated with Nmin. In addition, all five concerned variables were excluded for DOC/DON.
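A sketch of how the repeated-measures setup described above could be implemented in Python with statsmodels, treating plot as the repeated-measures unit; the column names and data file are hypothetical, and the original analysis may well have used different software:

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format table: one row per plot x sampling date, with columns
# soc, warming (OTC vs. control), elevation (low/mid/high), date, and a plot id.
df = pd.read_csv("soil_pools.csv")

# Mixed model: warming and elevation as between-subject factors, sampling date as
# the within-subject factor, and a random intercept per plot.
model = smf.mixedlm("soc ~ warming * elevation * date", data=df, groups=df["plot"])
result = model.fit()
print(result.summary())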
Effects of Experimental Warming on SOC, TN, DOC, and DON.
Recently, some studies showed that short-term (<3 years) experimental warming had little effect on SOC, TN, DOC, and/or DON in a tallgrass prairie with a silt loam soil (~2°C increase of 5 cm soil temperature) in the USA [25], in a dragon spruce plantation with a mountain brown soil (~0.6°C increase of 5 cm soil temperature) on the Tibetan Plateau [8], in an alpine treeline with a sandy Ranker and Podzols soil (~4°C increase of 5 cm soil temperature) in Switzerland [13], and in a lichen-rich dwarf shrub tundra with Gleyic Cryosols soils (~0.9°C increase of 5 cm soil temperature) in Siberia [15]. However, other studies with long-term (>3 years) experimental warming indicated that warming significantly increased or decreased SOC, TN, DOC, and/or DON in a temperate steppe with a Calcic Kastanozems soil in Inner Mongolia (~1.4°C increase of 10 cm soil temperature) [2], in an alpine meadow (~3°C increase of 5 cm soil temperature) on the Tibetan Plateau [3], and in a temperate steppe with chestnut soil in Inner Mongolia (~1.2°C increase of 10 cm soil temperature) [1]. Therefore, the insignificant responses of SOC, TN, DOC, and DON to warming (Table 1) may be due to the short period of the warming treatment (14–16 months). A meta-analysis showed that the effects of experimental warming on Nmin, net N mineralization, and nitrification were significantly and positively correlated with the imposed increase in soil temperature (~0.6–6.7°C for Nmin, ~0.6–5.5°C for net mineralization, and ~1.3–5.5°C for net nitrification) across all biomes [14]. Similarly, we found that the experimental warming-induced change of soil temperature tended to be negatively correlated with that of TN (R2 = 0.43, p = 0.057) and positively correlated with that of MBN (R2 = 0.43, p = 0.056) [20]. In addition, MBN was significantly correlated with SOC, TN, DOC, and DON (Table 2). Therefore, the negligible responses of soil C and N pools to experimental warming (Table 1) may also be due to the low warming magnitude in this alpine meadow.
Microbial activity regulates the production of dissolved organic matter [5,8,26], and experimental warming-induced decline in soil moisture may suppress soil microbial activity [20,27]. Similarly, we also found that soil C and N pools increased with increasing soil microbial biomass and soil moisture (Figure 2, Table 2). Moreover, short-term experimental warming had little effect on soil microbial biomass in this system [20]. Therefore, the negligible responses of SOC, TN, DOC, and DON to short-term experimental warming may also be related to the negligible response of soil microbial biomass [8,20]. Moreover, experimental warming-induced soil drying may also suppress the production of DOC and DON [8,20].
Effects of Experimental Warming on Soil Inorganic N.
Bai et al. [14] demonstrated that experimental warming did not significantly increase net N nitrification in grasslands. Similarly, experimental warming did not increase net N mineralization in an alpine meadow on the Tibetan Plateau [28]. In the same alpine meadow as this study, the finding that experimental warming did not increase ecosystem photosynthesis and aboveground plant biomass [22] also indirectly supported that experimental warming may not increase soil N availability, because it has been observed that plant productivity is positively correlated with net N mineralization [29]. Therefore, the negligible or negative effect of experimental warming on soil inorganic N (Figure 1, Table 1) may result from the suppression of net N mineralization and nitrification under warming. The suppression of net N mineralization and nitrification may be owing to decreases in soil moisture and microbial activity, because Nmin, NH4+-N, and NO3−-N increased significantly with increasing soil moisture and microbial biomass (Figure 2, Table 2). Similarly, the experimental warming-induced significant reductions or insignificant changes of inorganic N (Figure 1, Table 1) were also partly attributed to the experimental warming-induced decline in soil microbial biomass [20] and soil drying [10,29,30]. This was in line with the finding that the effect of experimental warming on soil moisture was significantly correlated with that on soil nitrification [14]. On the other hand, microbial biomass was more closely related to soil inorganic N than soil moisture (Table 3). This implied that microbial biomass may dominate the variation of soil inorganic N in this study. However, our previous study showed that short-term experimental warming tended to reduce microbial biomass due to soil drying in the same alpine meadow as this study [20]. Therefore, the experimental warming-induced changes of soil inorganic N, net N mineralization, and nitrification may be directly related to changes in microbial activity and indirectly related to changes in soil moisture.
The different responses of Nmin to experimental warming among the three elevations across the sampling dates could be attributed to several probable underlying mechanisms. First, DON is a high-quality N source for N mineralization [8,31]. This was supported by the positive relationships between DON and Nmin, NH4+-N, and NO3−-N (Figure 3). DON under warmed plots tended to decrease by 10.3% at the low elevation and by 28.7% at the mid-elevation, but to increase by 4.4% at the high elevation across all three sampling dates, compared to control plots. Second, the experimental warming-induced different changes in soil microbial biomass N (MBN) among the three elevations [20] could partly explain this phenomenon, considering that the production of DON and the immobilization of soil inorganic N are regulated by MBN [3,32,33]. This viewpoint was confirmed by the positive correlations between MBN and DON, Nmin, NH4+-N, and NO3−-N (Table 2). Third, the response of soil N availability to warming could be strongly related to the initial conditions [8,34]. In our system, Nmin, DON, and microbial biomass at the high elevation were significantly larger compared to the low and mid-elevation, whilst there were insignificant differences between the latter two [20].
Conclusions
In summary, short-term experimental warming had no obvious effects on topsoil organic C, total N, and dissolved organic C and N pools for the alpine meadow in this study. The insignificant responses of these C and N pools to warming may be due to the short-term warming treatment, experimental warming-induced soil drying, and the low warming magnitude. In contrast, the response of soil inorganic N to experimental warming differed among the three elevations, which may be attributed to different response trends of dissolved organic N and microbial biomass and different initial soil inorganic N. | 2018-04-03T03:32:59.745Z | 2014-05-22T00:00:00.000 | {
"year": 2014,
"sha1": "b1dbd5ecc9393cf2a93da8cc4cce82a046f58ab4",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/tswj/2014/152576.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "29a123558eaaf556a7185805f944ced00b0d31ce",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Environmental Science",
"Medicine"
]
} |
119683901 | pes2o/s2orc | v3-fos-license | Whittaker functions on metaplectic covers of GL(r)
This paper establishes a combinatorial link between different approaches to constructing Whittaker functions on a metaplectic group over a non-archimedean local field. We prove a metaplectic analogue of Tokuyama's Theorem and give a crystal description of polynomials related to Iwahori-Whittaker functions. The proof relies on formulas of metaplectic Demazure and Demazure-Lusztig operators, proved previously in joint work with Gautam Chinta and Paul E. Gunnells.
1. Introduction 1.1. Motivation. The study of metaplectic groups was initiated by Matsumoto [Mat69]. Analytic number theory, in particular questions about the mean values of L-functions, led to research on multiple Dirichlet series, which in turn motivated interest in Whittaker coefficients of metaplectic Eisenstein series. Whittaker functions are higher-dimensional generalizations of Bessel functions and are associated to principal series representations of a reductive group over a local field. Kubota [Kub71] was the first to closely examine Eisenstein series on higher covers of GL_2, and the theory of associated Whittaker functions was further developed by Kazhdan and Patterson [KP84]. In recent years, this development gained further impetus from unexpected connections to other areas, such as combinatorial representation theory, the geometry of Schubert varieties, and solvable lattice models.
While the theory of metaplectic Whittaker functions is familiar in the case of double covers of reductive groups, it is less well understood in the case of higher covers.
1.1.1. The Casselman-Shalika formula. The Casselman-Shalika formula is an explicit formula for values of a spherical Whittaker function over a local p-adic field in terms of a character of a reductive group. It is a central result in understanding the local and global theory of automorphic forms and their L-functions. A metaplectic analogue, describing Whittaker functions on n-fold covers of a reductive group, has similar significance in the study of Dirichlet series of several variables. Different approaches to generalize the Casselman-Shalika formula to the metaplectic setting have recently emerged.
1.1.2. Metaplectic analogues. Chinta-Offen [CO13] and McNamara [McN16] generalize the Casselman-Shalika formula by replacing the character with a metaplectic analogue: a sum over the Weyl group involving a modified action of the Weyl group that depends on the metaplectic cover. Brubaker-Bump-Friedberg [BBF11a] and McNamara [McN11] express a type A Whittaker function as a sum over a crystal base. Both constructions produce the Whittaker function as a polynomial determined by combinatorial data: the root datum of the group, a dominant weight, and the degree n of the metaplectic cover. The first one handles all types of root datum, while the second one makes it possible to compute the coefficients of the polynomial individually. The fact that the descriptions are purely combinatorial in nature, and rely heavily on Weyl group combinatorics and on the structure of the crystal graph, indicates that deeper properties of these constructions can be understood using methods of combinatorial representation theory.
1.1.3. Combinatorial link. In the present paper we develop a combinatorial understanding of the relationship between the two approaches described in section 1.1.2. This is one aspect of our main result (Theorem 1); another is giving a crystal description of polynomials related to Iwahori-Whittaker functions. Both of these aspects will be made explicit in section 9.
Furthermore, both approaches to constructing Whittaker functions, i.e. summing over the Weyl group, or, respectively, over a crystal graph, also make sense in the nonmetaplectic setting. In this special case, a theorem of Tokuyama provides a combinatorial link between them [Tok88]. In the metaplectic setting, the constructions of section 1.1.2 naturally extend the meaning of respective sides of Tokuyama's identity. Thus explicitly relating the two constructions is in essence proving a metaplectic analogue of Tokuyama's formula. Viewed purely as an identity about the combinatorial data, the special case of Theorem 1 stated as Theorem 2 is this metaplectic analogue.
1.2. Methods and tools. The connection between Tokuyama's theorem and the constructions of metaplectic Whittaker functions also gives a hint as to where the difficulty in this project lies. The classical proof of Tokuyama's formula is by induction on the rank using Pieri rules: in the metaplectic setting, these have no convenient analogue. We sidestep this obstacle by refining the metaplectic statement to allow for a finer induction. Theorem 2 (the metaplectic version of Tokuyama's theorem) is a statement about the long element in the Weyl group, while the more general Theorem 1 is the same statement for any "beginning section" of a particular long word.
Phrasing a statement that lends itself to finer induction requires us to exploit interesting properties of the respective constructions. On the one hand, our joint work with Gautam Chinta and Paul E. Gunnells [CGP14] interprets the formulas of Chinta-Offen and McNamara [CO13,McN16] in terms of metaplectic Demazure-Lusztig operators. On the other hand, one may exploit the branching structure of highest weight crystals of Dynkin type A to relate the formulas of Brubaker-Bump-Friedberg and McNamara [BBF11a,McN11] to smaller, similar expressions on Demazure crystals. The connection to Demazure-Lusztig operators is well motivated, and yields several avenues of possible applications; we mention these below.
1.2.1. Demazure operators. The relevance of classical Demazure and Demazure-Lusztig operators to the study of Whittaker functions was first indicated by work of Littelmann and Kashiwara [K+92] giving character formulas on a crystal, and of Brubaker, Bump and Licata [BBL14] relating them to Iwahori-fixed Whittaker functions. They have since been used by Patnaik [Pat14] to give a generalization of the Casselman-Shalika formula to Whittaker functions on the p-adic points of an affine Kac-Moody group.
The metaplectic versions of these operators were introduced by G. Chinta, P. E. Gunnells and the present author [CGP14]; the definitions involve the Chinta-Gunnells action of the Weyl group on rational functions over the weight lattice. This action was first used in [CG10] to construct p-parts of multiple Dirichlet series, and has since proved instrumental in metaplectic constructions, for example the ones mentioned above in 1.1.2.
1.3. Applications. We mention connections to the literature and avenues of further research utilizing the methods and results of this paper.
1.3.1. Iwahori-Whittaker functions. In [BBL14], the authors use Demazure and Demazure-Lusztig operators to compute values of Iwahori-Whittaker functions in terms of Hecke algebras, the geometry of Bott-Samelson varieties and the combinatorics of Macdonald polynomials. The analogies between these topics are intertwined with the combinatorics of the Bruhat order on the Weyl group, and identities satisfied by the Demazure and Demazure-Lusztig operators. Furthermore, the non-metaplectic version of the operator in Theorem 1 is related in [BBL14] to Iwahori-Whittaker functions. Recent work by Lee, Lenart, and Liu [LLL16] applies these results to compute coefficients of the transition matrix between natural bases of Iwahori-Whittaker functions. On the other hand, the work of Patnaik [Pat14] generalizing the Casselman-Shalika formula to the affine Kac-Moody setting also involves computing Iwahori-Whittaker functions, and their recursion in terms of Demazure-Lusztig operators. (The results of Brubaker-Bump-Licata and Patnaik are recalled in detail in section 9.) The metaplectic Demazure and Demazure-Lusztig operators satisfy analogous identities to the classical, nonmetaplectic ones. Thus some of the above results will generalize to the metaplectic setting. Recent joint work with Manish Patnaik [PP15] shows that the connection between Iwahori-Whittaker functions and Demazure-Lusztig operators extends to the metaplectic setting (see section 9.3.3). It is natural to ask if the explicit crystal description of Iwahori-Whittaker functions given by Theorem 1 leads to a better understanding of all these results. It is especially interesting to consider how the connection with Iwahori-Whittaker functions, perhaps together with a more type-independent combinatorial description as mentioned in section 1.3.2 below, would elucidate the situation in the affine setting (see 1.3.3). 1.3.2. The Alcove Path Model. The construction of Whittaker functions as a sum over the Weyl group [CO13,McN16] has the key feature that the Weyl group functional equations satisfied by the Whittaker function become very apparent. These functional equations play a key role in the analytic construction of global multiple Dirichlet series constructed from the Whittaker functions. Moreover, they have proven useful in studying certain affine analogues.
The functional equations are less explicit in the description by crystal graphs. However, the crystal construction gives explicit formulas for individual coefficients of the Whittaker function. Reasons for trying to understand these coefficients are mentioned in section 1.3.3.
Various authors have worked on generalizing the crystal approach to root systems of other types: Chinta and Gunnells for type D [CG12], Beineke, Brubaker, and Frechette [BBF12], Brubaker, Bump, Chinta, and Gunnells [BBCG12], and Friedberg and Zhang [FZ15] for types B and C, and McNamara [McN11] working, less explicitly, with crystal bases in general. The resulting formulas are all significantly more intricate than the type A construction in [BBF11b].
A possible application of this paper is to understand how the crystal approach extends to other types. Preliminary work by Beazley and Brubaker suggests that perhaps the alcove path model is better suited for creating a construction that generalizes the type A crystal approach. Demazure and Demazure-Lusztig operators promise to give a metaplectic Casselman-Shalika formula in terms of the alcove path model; we are currently investigating this avenue in joint work with Gautam Chinta, Cristian Lenart and Dan Orr. The resulting construction might better reflect the Weyl group symmetry of the individual coefficients of Whittaker functions. 1.3.3. The affine setting. [Whi14] attempts to extend the theory of multiple Dirichlet series to the affine setting. There the theory of Eisenstein series is not (yet) available. The multiple Dirichlet series constructed there satisfy functional equations corresponding to an affine Weyl group, and the coefficients of these power series can be explicitly related to character sums and coefficients of L-functions [Whi14]. Some of our methods may lead to a combinatorial understanding of these coefficients. Furthermore, it would be especially interesting to understand the connection between affine Weyl group multiple Dirichlet series and Whittaker functions on p-adic points of affine Kac-Moody groups introduced by Patnaik [Pat14]. The possibility of extending the affine construction to the metaplectic setting is currently being investigated by Patnaik and the author of this paper.
Preliminaries and Statement of the Main Theorem
This section is dedicated to the statement of the main result of the paper (Theorem 1), and a brief outline of the methods and structure of the proof.
2.1. Background. We start by introducing some notation and background. We restrict ourselves to what is necessary to state the main theorem, Theorem 1, and to give an outline of the methods of the paper. Much of the background will be covered in more detail in later sections.
2.1.1. Notation. Let Λ be the weight lattice corresponding to a root system Φ of type A_r. We identify C(Λ) with a ring of rational functions C(x), where x = (x_1, …, x_{r+1}) and x^{α_1} = x_1/x_2. The Weyl group W is generated by the simple reflections σ_i. Let w_0 ∈ W be the long element. We favour a particular reduced decomposition for w_0 (see (4.10) for this "favourite long word"); Theorem 1 is stated for elements w ∈ W whose reduced decomposition is a "beginning segment" of this favourite long word (Definition 5). The integer n denotes the degree of the metaplectic cover of a split reductive algebraic group corresponding to Φ. We also introduce the indeterminate t, and v = t^n; in applications, we set v = q^{−1}, where q is the order of the residue field of a nonarchimedean local field.
2.1.2. Crystals and Gelfand-Tsetlin coefficients. The highest weight crystal C_{λ+ρ} and its parameterizations will be introduced in Section 4. For now, it suffices to say that it is a graph whose vertices are in bijection with a basis of the irreducible representation of highest weight λ + ρ, where λ ∈ Λ is dominant and ρ is the Weyl vector. Vertices of a crystal can be parameterized by arrays of integers in various ways (using Gelfand-Tsetlin patterns, Γ-arrays, or Berenstein-Zelevinsky-Littelmann paths). To state Theorem 1 we need two functions on the vertices of a crystal C_{λ+ρ}: the weight function v ↦ wt(v) ∈ Z^{r+1} and the Gelfand-Tsetlin coefficient v ↦ G^{(n,λ+ρ)}(v). This is the usual Gelfand-Tsetlin coefficient, described for the nonmetaplectic and metaplectic cases in [BBF11b]. It depends on the degree n of the metaplectic cover via Gauss sums. Furthermore, for every w beginning segment of the long word, we shall define C^{(w)}_{λ+ρ}, the Demazure crystal corresponding to w. This is a subgraph of C_{λ+ρ} spanned by certain vertices depending on w (see Definition 6).
2.1.3. Demazure operators. Demazure operators D_w and Demazure-Lusztig operators T_w correspond to elements of the Weyl group, and act on C(Λ). The definitions of the nonmetaplectic operators involve the natural action of the Weyl group W on C(Λ), inherited from the action of W on the weight lattice. In the metaplectic setting, this plain permutation action can be replaced by the Chinta-Gunnells action, and one may define metaplectic Demazure and Demazure-Lusztig operators, whose meaning depends on n. The definitions and properties are recalled, in the notation specific to type A_r, in Section 3; these play a key role in the proof of Theorem 1.
2.1.4. Tokuyama's theorem. Strictly speaking, this section is not necessary to understand the statement of Theorem 1; however, it provides motivation, and crucial guidance to the shape of the statement. As mentioned in section 1.1.3, generalizing Tokuyama's theorem to the metaplectic setting and linking the constructions of metaplectic Whittaker functions are closely related tasks; in fact the constructions give rise to the statement of a metaplectic version.
We explain this briefly here; the theorem will be discussed in detail in Section 5. Tokuyama's theorem is a deformation of the Weyl character formula in type A:

(2.1) ∏_{1≤i<j≤r+1} (x_j − v·x_i) · s_λ(x) = ∑_{b∈C_{λ+ρ}} G(b) x^{wt(b)},

where s_λ is the Schur function. The left hand side essentially agrees with the Casselman-Shalika formula for Whittaker functions (with the deforming parameter v specialized to q^{−1}). The Schur function can be expressed by the Weyl character formula as

(2.2) s_λ(x) = ∑_{w∈W} w.( x^λ · ∏_{α∈Φ+} (1 − x^{−α})^{−1} ).

Chinta-Offen [CO13] show what a correct metaplectic analogue of the right hand side in (2.2) is, replacing the action of the Weyl group W on C(Λ) by the Chinta-Gunnells metaplectic action. One may use the results of [CGP14] to reformulate the "left-hand side" in terms of Demazure-Lusztig operators, i.e. as acting on a monomial. The necessary background will be covered in detail in Section 3. On the right hand side of (2.1), the Gelfand-Tsetlin coefficients G(b) = G^{(1,λ+ρ)}(b) appear. This reproduces the construction of the same Whittaker function as a sum over a crystal base (Brubaker-Bump-Friedberg [BBF11a]) in both the nonmetaplectic case (for n = 1) and the metaplectic setting (for higher n).
2.2. Statement of Main Theorem. The main result of the paper is a crystal description of sums of Demazure-Lusztig operators in type A. More precisely, we prove the following: for any w that is a beginning section of the favourite long word, the sum of operators ∑_{u≤w} T_u, applied to a monomial, is equal to a sum of the monomials x^{wt(v)} weighted by the Gelfand-Tsetlin coefficients G^{(n,λ+ρ)}(v), taken over the Demazure crystal C^{(w)}_{λ+ρ} (Theorem 1, equation (2.4)).
The statement (2.4) provides the combinatorial link between the approaches to constructing metaplectic Whittaker functions described in section 1.1.2. The special case of this statement for w = w_0 and n = 1 is exactly Tokuyama's theorem (see Section 5). The statement is formally stronger than Tokuyama's theorem even in the nonmetaplectic setting, and provides a metaplectic analogue for higher n. We state this analogue on its own. Theorem 2 (Tokuyama's Theorem, Metaplectic Version). Let λ = (λ_1, …, λ_{r+1}) be any dominant, effective weight and ρ = (r, …, 1, 0). Then the identity of Theorem 1 holds for w = w_0; that is, the crystal sum in (2.4) is taken over the entire crystal C_{λ+ρ}. This special case of the identity is present when work of Brubaker-Bump-Friedberg-Hoffstein [BBF11a], Chinta-Gunnells-Offen [CG10, CO13], and McNamara [McN11, McN16] are combined, but Theorem 1 provides a much more direct connection. In addition, as mentioned in 1.3.1, the operators

(2.6) ∑_{u≤w} T_u

are related to the construction of Iwahori-Whittaker functions; in this sense Theorem 1 may be interpreted as a crystal description of Iwahori-Whittaker functions. We shall make the connection between Theorem 1 and Whittaker and Iwahori-Whittaker functions more explicit in section 9.
2.3. Methods and Outline. We give an overview of the methods and structure of the proof of Theorem 1.
Tokuyama's proof of the identity (2.1) uses Pieri rules, i.e. is by induction on the rank r of the root system Φ of type A_r. Pieri rules are not available in the metaplectic setting, so instead we "refine" the induction. Theorem 1 interprets Tokuyama's formula in type A_r as a statement about the (favourite) long word w_0^{(r)}. This is the following reduced decomposition of the long element:

w_0^{(r)} = (σ_1)(σ_2 σ_1)(σ_3 σ_2 σ_1) ⋯ (σ_r σ_{r−1} ⋯ σ_1).
This word has the property that it begins with w_0^{(r−1)}, the favourite long word in type A_{r−1}. Since Theorem 1 formulates an identity for every beginning section of the word w_0^{(r)}, if we want to prove this statement by induction, the step from A_{r−1} to A_r is now not one step, but r "smaller" steps. This is the main idea; the proof is in fact by induction, and the main tools are branching properties of type A highest weight crystals, and identities of Demazure(-Lusztig) operators.
2.3.1. Structure of the induction. The edges of a highest weight crystal C_{λ+ρ} of type A_r are labelled by indices 1, 2, …, r. Removing the edges labelled with r breaks the crystal into the disjoint union of highest weight crystals of type A_{r−1}. The same is true for a Demazure crystal C^{(w)}_{λ+ρ} as long as w is a beginning section of the favourite long word w_0^{(r)}. This fact is crucial to the proof. Let w be a beginning section of the long word w_0^{(r)} that is not a beginning section of w_0^{(r−1)}, i.e. one of the form

w = w_0^{(r−1)} · σ_r σ_{r−1} ⋯ σ_{r−k} (0 ≤ k ≤ r − 1).

Call the statement of Theorem 1 for this particular w and fixed n (but for any λ) IW^{(n)}_{r,k}. For k = r − 1, i.e. w = w_0^{(r)}, Theorem 1 specializes to Theorem 2, and thus we use the notation Tok^{(n)}_r for IW^{(n)}_{r,r−1}. The statement IW^{(n)}_{r,k} can be reduced to Tok^{(n)}_{r−1} and statements describing the action of simpler operators on a monomial. The full reduction argument will be explained later; for now we only say that in addition to IW^{(n)}_{r,k} we introduce auxiliary statements N^{(n)}_{r,k}; the statement N^{(n)}_{r,k}, for example, concerns the action of the operator T_r T_{r−1} ⋯ T_{r−k} on a monomial. We show in Section 7 that to prove IW^{(n)}_{r,k} for any r and any 0 ≤ k < r it suffices to prove the statement N^{(n)}_{r,r−1} for any r. This reduction of IW^{(n)}_{r,k} to N^{(n)}_{r,r−1} essentially follows from the branching of Demazure crystals and Gelfand-Tsetlin coefficients, and some properties of the metaplectic Demazure-Lusztig operators.
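For concreteness, here is a small script (ours, purely illustrative) generating the reduced decomposition of the favourite long word as written above, together with its beginning sections:

def favourite_long_word(r):
    # w_0^(r) = w_0^(r-1) . (sigma_r sigma_{r-1} ... sigma_1); we store the indices.
    word = []
    for row in range(1, r + 1):
        word.extend(range(row, 0, -1))
    return word

def beginning_sections(r):
    w = favourite_long_word(r)
    return [w[:m] for m in range(len(w) + 1)]

print(favourite_long_word(3))   # [1, 2, 1, 3, 2, 1]
print(beginning_sections(2))    # [[], [1], [1, 2], [1, 2, 1]]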
By the end of Section 7, the only thing remaining from the proof of Theorem 1 is to prove N^{(n)}_{r,r−1}, a statement about the action of the operator T_r T_{r−1} ⋯ T_1 on a monomial. This statement is proved by a (somewhat technical) induction in Section 8, with a rank one auxiliary computation included in Appendix A.
2.3.2. Outline. The necessary background is summarized in three sections. Section 3 explains the Chinta-Gunnells action and metaplectic Demazure operators; Section 4 describes parameterizations and branching of type A highest weight crystals and contains the definition of Gelfand-Tsetlin coefficients; Section 5 rephrases Tokuyama's result in the language of Demazure-Lusztig operators and crystals.
The proof of Theorem 1 spans three sections. Section 6 is preparation: it defines Demazure crystals and examines the Gelfand-Tsetlin coefficients on these in terms of the branching properties discussed in Section 4. Some helpful conventions, designed to make the notation of the proof lighter, are also introduced here. Section 7 contains the proof of Theorem 1 through reduction to a sequence of simpler statements (from IW^{(n)}_{r,k} to N^{(n)}_{r,r−1}), as explained above. The final statement of the sequence, N^{(n)}_{r,r−1}, is then proved in Section 8 (and Appendix A).
Section 9 relates Theorem 1 to metaplectic Whittaker functions and Iwahori-Whittaker functions. The constructions mentioned in the Introduction are recalled in a little more detail to demonstrate how the formulas line up with the expressions in Theorem 1.
Metaplectic Demazure and Demazure-Lusztig operators
Theorem 1 describes the action of metaplectic Demazure-Lusztig operators on a monomial. As mentioned in 1.2.1 the metaplectic analogues of the classical Demazure and Demazure-Lusztig operators were introduced in [CGP14]. In this section, we briefly review the results of that paper, specializing to type A root systems. The definition, elementary properties, and the identities Theorem 6 and Theorem 7 will be necessary for the proof. The metaplectic operators are built on the Chinta-Gunnells action; we recall the definition in Section 3.2. We restrict our attention to type A, hence some of the machinery that is necessary in [CGP14] can be spared.
3.1. Notation. The following is standard notation for root systems and the Weyl group. The reader may refer to [Hum78] as a source.
Let Φ be an irreducible reduced root system of type A_r with Weyl group W. We may view Φ as embedded into R^{r+1}. Let e_1, …, e_{r+1} denote the standard basis of R^{r+1}, and take Φ = {e_i − e_j : 1 ≤ i ≠ j ≤ r+1}. Let Φ = Φ+ ∪ Φ− be the decomposition into positive and negative roots (e_i − e_j ∈ Φ+ if i < j). Let {α_1, α_2, …, α_r} be the set of simple roots, α_i = e_i − e_{i+1} (1 ≤ i ≤ r), and let σ_i be the Weyl group element corresponding to the reflection through the hyperplane perpendicular to α_i. Set Q to be the W-invariant quadratic form normalized so that Q(α) = 1 for every root α. Consider the weight lattice Λ = {λ = (λ_1, λ_2, …, λ_r, λ_{r+1}) ∈ Z^{r+1}}; then Λ ⊂ R^{r+1} contains Φ as a subset. Let A = C[Λ] be the ring of Laurent polynomials on Λ. Let K be the field of fractions of A. The action of W on the lattice Λ induces an action of W on K: we put w.x^λ = x^{wλ} and then extend linearly and multiplicatively to all of K. We denote this action using the lower dot (w, f) ↦ w.f (to distinguish it from the metaplectic W-action on K constructed below in (3.10)) and refer to this as the "nonmetaplectic" group action. Let x = (x_1, …, x_r, x_{r+1}). We may identify K with C(x_1, …, x_{r+1}) = C(x) by writing x_i = x^{e_i}. In general, for λ = ∑_i λ_i e_i ∈ Λ as above, we write x^λ = x_1^{λ_1} x_2^{λ_2} ⋯ x_{r+1}^{λ_{r+1}}. Note that the Weyl group W ≅ S_{r+1}, and the nonmetaplectic action (3.2) of σ_i on C(x) is by swapping x_i and x_{i+1}.
Since all roots are of the same length, m(α) = n/gcd(n, Q(α)) is the same for every root. In particular, with the choice of Q above, Q(α) = 1 and hence m(α) = n. 3.2. The Chinta-Gunnells action. The Chinta-Gunnells action is a "metaplectic" action of a Weyl group on a ring of rational functions; the action depends on the metaplectic degree. We use the same definition as in [CGP14], which in turn is the same as the one defined in Chinta-Gunnells [CG10] and specializes to the type A action in Chinta-Offen [CO13].
The action is defined in terms of the parameter v and the complex parameters g_0, …, g_{n−1}. For all other j we define g_j := g_{r_n(j)}, where 0 ≤ r_n(j) ≤ n − 1 denotes the remainder upon dividing j by n. The parameters g_0, …, g_{n−1} will be chosen in section 3.3 to be Gauss sums.
We may now recall the definition of the metaplectic action of the Weyl group W on K.
Definition 1 ([CGP14, Section 2, (7)]). For f ∈ K_λ̄ and the generator σ_α ∈ W corresponding to a simple root α, σ_α(f) is defined by an explicit rational expression involving v and the parameters g_j, where λ is any lift of λ̄ to Λ. Here the quantity in brackets in that expression depends only on λ̄. We extend the definition of σ_α to K by additivity. Then (1) extends to an action of the full Weyl group W on K, which we denote

(3.10) (w, f) ↦ w(f).

Using the notation specific to type A, Definition 1 can be rewritten for f = x^λ as in (3.11). The following Lemma is crucial in computations; it is used repeatedly, if implicitly, in the proof of Theorem 1. It relies on the fact that the quantity in brackets in (1) depends only on λ̄ and not λ.
Lemma 3 ([CGP14, Lemma 2]). Let f ∈ K and h ∈ K_0. Then for any w ∈ W,

w(h · f) = (w.h) · w(f).
The significance of Lemma 3 is due to the fact that the action of W on K defined by (1), though C-linear, is not by endomorphisms of that ring, i.e. it is not in general multiplicative. The point of Lemma 3 is that if we have a product of two terms hf , the first of which satisfies h ∈ K 0 (e.g. the exponents of h are divisible by n), then we can apply w to the product hf by performing the usual permutation action on h and then acting on f by the metaplectic W -action.
The following lemma shows a symmetric monomial with respect to σ i . It will be of use in computations. We omit the (straightforward) proof.
x a+n i+1 for every n.
3.3. Gauss sums. The complex parameters v, g 0 , . . . , g n−1 of the Chinta-Gunnells action are chosen to be Gauss sums in applications. Similar Gauss sums (g ♭ and h ♭ ) are used in [BBF11b] to define Gelfand-Tsetlin coefficients on a crystal graph (see section 4). We make the choice of parameters explicit here. We start by describing the functions g ♭ and h ♭ , following [BBF11b, Chapter 1] for notation and definitions. We will then choose the parameters v, g 0 , . . . , g n−1 to satisfy the conditions of (3.8). For facts about the power residue symbol we refer the reader to [BBF06].
3.3.1. Notation. Let F be an algebraic number field containing the group µ 2n of 2n-th roots of unity. Let S be a finite set of places of F , large enough that it contains all the places that are Archimedean or ramified over Q, and the ring of For any m, c ∈ o S , c = 0, consider the n-th power residue symbol m c n . Recall that m c n is zero unless m is prime to c. It is multiplicative, i. e. m c n · m b n = m bc n . If p is a prime and m is coprime to p, then m p n is the element of µ n satisfying m p n ≡ m Np−1 n mod p. With the notation above, define the Gauss sum Fix a p prime in o S , and let q be the cardinality of the residue field o S /po S . We assume q ≡ 1 modulo 2n. Define g(a) = g(p a−1 , p a ) and h(a) = g(p a , p a ) for any a > 0. In this case we have 3.3.2. Choice of parameters. We are ready to define the functions g ♭ and h ♭ . These appear in Section 4 in the definition of Gelfand-Tsetlin coefficients, and the proof of Theorem 1 depends on computations that use g ♭ and h ♭ . Let The following identities imply that the value of both g ♭ (a) and h ♭ (a) only depend on the residue of a modulo n.
If a is divisible by n then and if 0 < a < n then Recall the conditions (3.8) imposed on the parameters v, g 0 , . . . , g n−1 . The parameters must satisfy g 0 = −1 and g i g n−i = v −1 for 1 ≤ i ≤ n − 1. We can choose these parameters by modifying the functions g ♭ and h ♭ . Take v = q −1 and Then (3.15) implies g 0 = q · (−q −1 ) = −1 and(3.16) implies We summarize the choices of parameters in the following claim. The notation t n = v = q −1 is introduced for later convenience.
3.4. Metaplectic Demazure and Demazure-Lusztig operators. The definitions below follow [CGP14], making use of the identification of K and C(x) and the Chinta-Gunnells action introduced in section 3.2. Both the Demazure operators and the Demazure-Lusztig operators are divided difference operators on K.
Let 1 ≤ i ≤ r and f ∈ C(x). We define the Demazure operators by . When there is no danger of confusion, we write more simply that is, a rational function h in the above equations is interpreted to mean the "multiplication by h" operator. The rational functions here are in K 0 (see Remark 1). The operators D i and T i satisfy the same braid relations as the σ i [CGP14, Proposition 7.]. Consequently, one may define D w and T w for any w ∈ W : let w = σ i 1 · · · σ i l be a reduced expression for w in terms of simple reflections. Then We also introduce a metaplectic analogue of the Weyl denominator. Let If v = 1 we write simply ∆ v = ∆. Now we are ready to state the metaplectic Demazure formula and Demazure-Lusztig formula. (As before, the notation is specific to type A) Theorem 6. [CGP14, Theorem 3.] For the long element w 0 of the Weyl group W we have The following technical lemmas about polynomials annihilated by Demazure operators are of use in the proof of Theorem 1.
Lemma 8. We have the following.
Proof. The proof of (i) is obvious from the definition of D i and Lemma 3. For (ii), let u = w 0 w −1 , so that w 0 = u·w. Since w 0 is the longest element, we have ℓ(w 0 ) = ℓ(u)+ ℓ(w), and as a consequence The following is a trivial corollary of Lemmas 8 and 4, as the action of σ i only involves the exponents of x i and x i+1 .
Highest weight crystals and Gelfand-Tsetlin patterns
We turn our attention to the "crystal side" of Theorem 1: a sum whose terms involve Gelfand-Tsetlin coefficients, and are summed over a crystal. In this section, we present a primer on objects in this picture. Crystals can be parametrized in more than one way, we shall see that moving back and forth between parameterizations is not particularly difficult, hence one may choose the language that is most convenient in any given context. Gelfand-Tsetlin coefficients are defined in terms of these parameterizations.
We describe, in turn, highest weight crystals (4.1), Gelfand-Tsetlin patterns with "Γarrays" (4.2), and Berenstein-Zelevinsky-Littelmann paths (4.3). Gelfand-Tsetlin patterns are arrays of integers; the ones with a fixed top row are in bijection with vertices of a highest weight crystal. The bijection is via Berenstein-Zelevinsky-Littelmann paths, and the Γ-array corresponding to a pattern. The precise statement of this bijection is the content of Proposition 10. Gelfand-Tsetlin coefficients are defined in section 4.4. Finally, in section 4.5, we recall the branching property of type A highest-weight crystals. This will be revisited for Demazure crystals in Section 6, and is a crucial ingredient in the proof of Theorem 1.
Throughout the section, we follow the presentation of Chapter 2 of Brubaker-Bump-Friedberg [BBF11b], in less detail. We (implicitly) rely on other sources as well. In particular, for the combinatorial definition of a crystal graph, we use Hong-Kang [HK02] and Kashiwara [Kas95]. For the correspondence between Gelfand-Tsetlin patterns and highest weight crystals, Berenstein-Zelevinsky [BZ93,BZ + 96], Littelmann [Lit98], or Lusztig [Lus90] are further references. 4.1. Highest weight crystals. The general definition of a crystal can be found in Kashiwara [Kas95]. Here we only consider type A highest weight crystals.
Recall the notation introduced in section 3.1 for root systems of type A r . In particular, recall that the weight lattice Λ is identified with Z r+1 ; α i are the simple roots for 1 ≤ i ≤ r.
. . , e * r+1 denotes the standard dual basis of R r+1 . (We use e * i here to distinguish the basis vectors e i from the Kashiwara operators e i below.) We have (·, ·) : Λ × Λ → Q a bilinear symmetric form, and let ·, · : Elements of B are called elements or vertices of the crystal. A crystal satisfies the following axioms.
Recall that the weight λ = (λ 1 , λ 2 , . . . , λ r , λ r+1 ) is called dominant if λ 1 ≥ λ 2 ≥ · · · ≥ λ r+1 ; strongly dominant if λ 1 > λ 2 > · · · > λ r+1 ; λ is effective if λ r+1 ≥ 0. There is a partial ordering on Z r+1 where µ λ if and only if λ − µ lies in the cone generated by simple roots. For every dominant weight λ there is a corresponding crystal graph C λ with highest weight λ. The function wt maps the vertices of C λ to weights of the representation V λ of gl r+1 (C) of highest weight λ. The Kashiwara operators determine a directed graph structure on C λ : there is an edge v i − → w if and only if f i (v) = w = 0. We say this edge is labeled with i. The number of vertices in C λ with weight µ is equal to the multiplicity of the weight µ in the representation V λ . In particular, C λ has exactly one element v highest with weight λ. If w 0 denotes the longest element of the type Weyl group W ∼ = S r+1 , then w 0 λ = (λ r+1 , λ r , . . . , λ 2 , λ 1 ), and C λ has exactly one element v lowest with weight w 0 λ (this is the "lowest" element).
The edges labelled with the same index i (for 1 ≤ i ≤ r) determine disjoint "i-strings" in the crystal. These are themselves isomorphic to type A 1 highest weight crystals. The functions ε i and ϕ i determine where a vertex is within an i-string: . We conclude this section by an example.
4.2.
Gelfand-Tsetlin patterns. We recall the definition of Gelfand-Tsetlin patterns, the Γ-array and the weight associated to a pattern from [BBF11b, Chapter 2].
For every 1 ≤ i ≤ r, let This gives the Γ-array of T Remark 2. Note that given the top row, the entries of the Gelfand-Tsetlin pattern T can be recovered from the entries of Γ(T). That is, given a 0,i and Γ i,j for 1 ≤ i ≤ j ≤ r, one can compute each a i,j .
Since the entries of the Gelfand-Tsetlin pattern so the rows in Γ(T) are nonnegative, non-increasing and there is an upper bound on the difference of consecutive entries in a row. The Gelfand-Tsetlin coefficient assigned to T depends on the decoration of T, i.e. whether these inequalities are strict or not. We recall the relevant terminology here.
Definition 3. (Decorations of the entries of Γ(T) and T.) An entry of Γ(T) may be undecorated, circled, boxed, or both. The table below shows the ("right-leaning") rules for decorating Γ(T). (If j = r, take Γ i,r+1 = 0.) (4.5) We may phrase this as decorating the entries (below the top row) of the Gelfand-Tsetlin pattern T itself. The decoration of a i,j is the same as that of Γ i,j . (4.6)
circled and boxed
Let d i denote the sum of the entries in the i-th row of T, that is, Then we may define the weight of a Gelfand-Tsetlin pattern T.
We conclude by an example.
The sums of elements in the rows of the pattern T are d 0 = 4, d 1 = 4 and d 2 = 2, hence
4.3.
Berenstein-Zelevinsky-Littelmann paths. To a vertex v in the crystal C λ and a choice of reduced decomposition for the long element w 0 ∈ W corresponds a Berenstein-Zelevinsky-Littelmann path. This is a path in the graph theoretic sense. It starts from v, steps along the directed edges of the crystal, and ends in the lowest element, v lowest . The steps correspond to applying successive Kashiwara operators f i to v. The direction of steps is dictated by the choice of a long word w 0 . The notation follows [BBF11b]; an explicit type A 2 example is included after the definition.
(iv) Elements of the crystal C λ are in bijection with Gelfand-Tsetlin patterns with top row λ. The correspondence is given by assigning Proof. Parts of this proposition are proved throughout Chapter 2 of [BBF11b]. In particular, [ 4.4. Gelfand-Tsetlin coefficients. In this section, we define the coefficients appearing on the right-hand side of Theorem 1. The definitions depend on a positive integer n (the degree of the metaplectic cover), the corresponding Gauss sums g ♭ (a) and h ♭ (a) defined in section 3.3, and the decorations of arrays introduced in Definition 3.
By remark 2, a pattern T can be recovered from Γ(T) and the top row λ. Since many computations in the sequel involve a fixed n and λ, we often suppress these from the notation. We write G (n,λ) (T) = G (n) (T) = G(T) when T is understood to be a pattern with top row λ. We write G(T) = G (n,λ) (Γ) = G (λ) (Γ) = G(Γ) when Γ = Γ(T), and when v ∈ C λ corresponds to T by Proposition 10.
Definition 4. Let T be a Gelfand-Tsetlin pattern with top row λ, Γ(T) = (Γ ij ) 1≤i≤j≤r its Γ-array as in (4.3). Then the degree n Gelfand-Tsetlin coefficient corresponding to T is
a ij is circled and boxed
The coefficient depends strongly on n. To elucidate this, we give the examples of the nonmeatplectic case (n = 1) and the simplest metaplectic case (n = 2) explicitly below. Recall from section 3.3 that t n = v = q −1 , where q is the cardinality of a residue field o S /po S . Example 4. When n = 1, the factors g (n) ij (T) of the Gelfand-Tsetlin coefficient G (n) (T) are as follows.
Proposition 11. When all the edges of a highest weight crystal C λ+ρ labelled by r are removed, the connected components of the result are all isomorphic to highest weight crystals C µ of type A r−1 . Omitting the last component of wt : C λ+ρ → Z r+1 , and restricting it to a connected component gives the weight function on that component: The highest weights µ that appear in this decomposition are dominant and interleave with λ + ρ. We identify the highest weight crystal C µ with the appropriate subcrystal of C λ+ρ .
Proposition 12. Let C µ be one of the components in in the decomposition (4.20) of C λ+ρ , i.e. suppose µ and λ + ρ interleave. Let v be any element of C µ . Then we have the following. (a) If wt µ : C µ → Z r denotes the weight function on C µ , then .
(b) Let v * denote the lowest element of C µ (as a type A r−1 crystal). (4.23) Proof. Let T(v) = T λ+ρ (v) be the Gelfand-Tsetlin pattern corresponding to v ∈ C λ+ρ as in Proposition 10. By Proposition 11, as an element of C µ , v corresponds to the pattern T µ (v), and T µ (v) is the same as T λ+ρ (v) with its first row omitted. In particular, the second row of T(v) is µ. Thus by (4.7), d 0 (T(v)) = d(λ) and d 1 (T(v)) = d(µ). By (4.8), the last coordinate of wt(T(v)) is d(λ) − d(µ). This implies (a). For (b), we restrict our attention to weights µ that are strongly dominant, i.e. we would like to assume µ 1 > µ 2 > · · · > µ r . We can do this because of the following remark.
Remark 3. The statement (4.23) is trivial if µ is not strongly dominant. By Remark 5, the Gelfand-Tsetlin coefficient of a non-strict pattern is zero. The second row of T(v) is µ, hence if µ is not strongly dominant, then T(v) is non-strict for any v ∈ C µ , and G (n,λ+ρ) (v) = G (n,λ+ρ) (v * ) = 0.
Assume that µ is strongly dominant, hence the first two rows of T(v) are strict for every v ∈ C λ+ρ . The next remark describes the BZL path of elements in C µ .
Remark 4. Recall from section 4.3 that the BZL path of v is a path in C λ+ρ from v to v lowest ∈ C λ+ρ . The jth segment of the path is along an edge of C λ+ρ labelled by Ω j , where Ω j is defined in (4.10). The chosen long word w (r) 0 starts with w (r−1) 0 ; in particular the first $\binom{r}{2}$ out of the $\binom{r+1}{2}$ segments are along edges not labelled by r. This implies that these segments are contained in C µ , and in fact form the BZL path corresponding to v as an element of C µ , a crystal of type A r−1 . Hence the end of the first $\binom{r}{2}$ segments is the lowest element of that crystal, v * . Consequently, the first $\binom{r}{2}$ segments of the BZL path of v * are trivial. Let b j (v) = b j (T(v)) denote the length of the jth segment of the BZL path. Thus BZL(v * ) = Γ(T(v * )) has zeros everywhere below the first row. By (4.2), this means that for any 1 < i ≤ j ≤ r + 1, the entry a i,j of T(v * ) satisfies a i,j = a 1,j = µ j . It follows that we have a i−1,j−1 > a i,j = a i−1,j . According to Definition 3, these entries are all circled, but not boxed. By Definition 4, each such entry contributes a factor of 1.
Tokuyama's Theorem
Tokuyama's theorem, in its original form, relates a Schur function to a generating function of strict Gelfand patterns. This is easily rephrased to relate a sum over a Weyl group to a sum over a highest weight crystal. This second form is more convenient for the purposes of generalizing the theorem to the metaplectic setting.
In the previous section, we followed notation from Brubaker-Bump-Friedberg [BBF11b], because that is most convenient to use for metaplectic definitions of Gelfand-Tsetlin coefficients. The notation and approach in Tokuyama's paper [Tok88] is slightly different.
As in Tokuyama [Tok88], we say a pattern T is strict if a i−1,j−1 > a i−1,j holds for every 1 ≤ i ≤ j ≤ r. Following notation there, let G(λ) denote the set of Gelfand-Tsetlin patterns with top row λ, and let SG(λ) be the set of strict Gelfand-Tsetlin patterns with top row λ.
Remark 5. Note that by Definition 3, a Gelfand-Tsetlin pattern T is strict if and only if it has no entries that are both circled and boxed. In every version of Gelfand-Tsetlin coefficients, such an entry corresponds to a factor of zero. Hence as long as each term of the sum involves the Gelfand-Tsetlin coefficients, summing over G(λ) is the same as summing over SG(λ).
Recall that if d i is the sum of the entries in the i-th row (4.7), then by (4.8) the weight of a pattern T is wt(T) = (d r , d r−1 − d r , . . . , d 0 − d 1 ). In Tokuyama [Tok88], we have

(5.1) $M(T) = (d_0 - d_1,\ d_1 - d_2,\ \ldots,\ d_{r-1} - d_r,\ d_r).$

For a weight µ = (µ 1 , µ 2 , . . . , µ r+1 ) write $z^\mu = z_1^{\mu_1} z_2^{\mu_2} \cdots z_{r+1}^{\mu_{r+1}}$. Recall the definition of the (nonmetaplectic) Gelfand-Tsetlin coefficient G(T) = G (1) (T) as a product of g ij (T) from (4.15) and (4.17). Let us treat t as an indeterminate for the time being. Then the factor g ij (T) corresponding to an entry a ij is as follows.
$$g_{ij}(T) = \begin{cases} 1-t & \text{if } a_{i-1,j} < a_{ij} < a_{i-1,j-1} \quad (a_{ij} \text{ undecorated}), \\ -t & \text{if } a_{i-1,j} < a_{ij} = a_{i-1,j-1} \quad (a_{ij} \text{ boxed}), \\ 1 & \text{if } a_{i-1,j} = a_{ij} < a_{i-1,j-1} \quad (a_{ij} \text{ circled}), \\ 0 & \text{if } a_{i-1,j} = a_{ij} = a_{i-1,j-1} \quad (a_{ij} \text{ circled and boxed}). \end{cases}$$
In Tokuyama [Tok88], the entry a ij is called "special" if a i−1,j < a ij < a i−1,j−1 and "lefty" if a ij = a i−1,j−1 . By Definition 3, "special" entries are undecorated, and "lefty" entries are boxed. (For strict patterns, "lefty" entries are boxed and not circled by Remark 5.) Let s(T) and l(T) denote the number of special and lefty entries of T, respectively. We are now ready to state Tokuyama's theorem in both the notation of [Tok88] and in ours.

Theorem 13 (Tokuyama [Tok88]). Let λ be a dominant weight. Then

(5.2) $\prod_{1 \le i < j \le r+1} (z_i - t\, z_j) \cdot s_\lambda(z) = \sum_{T \in SG(\lambda+\rho)} (1-t)^{s(T)} (-t)^{l(T)}\, z^{M(T)}.$
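For concreteness, here is a small worked example of the summand (our illustration, using the decoration conventions recalled above: an entry is circled when it equals its upper-right neighbour and boxed when it equals its upper-left neighbour). Take r = 2 and the strict pattern T with rows (3, 1, 0), (2, 1), and (1). The entry a 11 = 2 satisfies 1 < 2 < 3, so it is special (undecorated) and contributes 1 − t; the entry a 12 = 1 equals its upper-left neighbour a 02 = 1, so it is lefty (boxed) and contributes −t; the entry a 22 = 1 equals its upper-right neighbour a 12 = 1, so it is circled and contributes 1. Hence

$$G(T) = (1-t)\cdot(-t)\cdot 1 = (1-t)^{s(T)}(-t)^{l(T)}, \qquad s(T) = l(T) = 1,$$

in agreement with the formula above.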
In the notation introduced in previous sections of this chapter, this can be re-written as

(5.3) $\prod_{1 \le i < j \le r+1} (x_j - t\, x_i) \cdot s_\lambda(x) = \sum_{T \in G(\lambda+\rho)} G(T)\, x^{\mathrm{wt}(T)}.$

The first form of the equation, (5.2), is Theorem 2.1 of [Tok88], substituting −t for t. We explain why (5.3) is equivalent. Note that (thinking of t as an indeterminate) for any strict Gelfand-Tsetlin pattern we have $G(T) = (1-t)^{s(T)} \cdot (-t)^{l(T)}$. Furthermore, by Remark 5 we have G(T) = 0 if T ∈ G(λ + ρ) \ SG(λ + ρ). From (4.8) and (5.1) we see that the components of wt(T) are exactly the components of M (T) in reverse order. So if we write x 1 = z r+1 , x 2 = z r , . . . , x r = z 2 , x r+1 = z 1 , we have $x^{\mathrm{wt}(T)} = z^{M(T)}$. Hence the right-hand sides of (5.2) and (5.3) agree. It remains to check that the left-hand sides agree as well.
Note that with the choice x i = z r+2−i , we have s λ (x) = s λ (z) and $\prod_{1 \le i < j \le r+1} (z_i - t\, z_j) = \prod_{1 \le i < j \le r+1} (x_j - t\, x_i)$. In the remainder of this section, we reformulate Theorem 13 in terms of Demazure-Lusztig operators and a sum over a highest-weight crystal. This is done separately for the two sides.
5.1. The right hand side of Tokuyama's theorem as a sum over a crystal. The correspondence between elements of a crystal of highest weight λ + ρ and Gelfand-Tsetlin patterns of top row λ + ρ was established by Proposition 10. Following that parametrization and notation, we may write G(v) = G (1) (v) := G (1) (T(v)); we have wt(T(v)) = wt(v). Thus we may write the right hand side of (5.3) as $\sum_{v \in C_{\lambda+\rho}} G(v)\, x^{\mathrm{wt}(v)}$.
5.2. The left hand side of Tokuyama's theorem in terms of Demazure-Lusztig operators. Recall notation for type A root systems in Section 3.1. This notation and the Weyl Character Formula for Schur functions allow us to rewrite the left hand side of (5.3) first as a sum over the Weyl group. Then we use Theorems 6 and 7 to write it in terms of Demazure-Lusztig operators.
The Weyl group W ≅ S r+1 acts on Λ = Z r+1 by permuting the coordinates. Thus for the long element w 0 we have $w_0 \cdot (\mu_1, \ldots, \mu_{r+1}) = (\mu_{r+1}, \ldots, \mu_1)$.
Now by the Weyl Character Formula we have

$$s_\lambda(x) = \frac{\sum_{w \in W} \operatorname{sgn}(w)\, x^{w(\lambda+\rho)}}{\sum_{w \in W} \operatorname{sgn}(w)\, x^{w(\rho)}}.$$
Lemma 14. Let λ be a dominant, effective weight. Then we have

$$\Big(\sum_{w \in W} T_w\Big)\, x^{w_0(\lambda)} = \sum_{v \in C_{\lambda+\rho}} G(v)\, x^{\mathrm{wt}(v)}.$$

Lemma 14 is the form of Tokuyama's theorem that is convenient to generalize, as we will see in Section 7.
Demazure crystals and branching properties
The statement of Theorem 1 involves, on one side, a sum over a Demazure crystal. In this section we give the definition of Demazure subcrystals C (w) λ+ρ within a type A highest weight crystal C λ+ρ , for certain elements w of the Weyl group. In preparation for the proof of Theorem 1, we discuss how the branching properties of section 4.5 restrict to Demazure crystals (Proposition 15 and Proposition 16). Finally, we introduce some terminology that will allow for lighter notation in the proof of Theorem 1 in Section 7 and Section 8.
We start by specifying the set of Weyl group elements w that appear in the statement of Theorem 1.
Sometimes it is convenient to assume that w is a beginning section of w (r) 0 , but not of w (r−1) 0 . In this case w is of the form

(6.2) $w = w_0^{(r-1)} \cdot \sigma_r \sigma_{r-1} \cdots \sigma_{r-k}.$

For such elements w ∈ W, we write k := ℓ(w) − ℓ(w (r−1) 0 ) − 1. Now we are ready to define Demazure crystals corresponding to a beginning section (6.1) of our favourite long word.

Definition 6. Let C λ+ρ be a crystal of highest weight λ + ρ, and let w be a beginning section of the long word w (r) 0 , as in (6.1). Then the Demazure crystal corresponding to w is the crystal C (w) λ+ρ with vertices

$$\{\, v \in C_{\lambda+\rho} : b_i(v) = 0 \text{ for every } \ell(w) < i \le \tbinom{r+1}{2} \,\},$$

where b i (v), for 1 ≤ i ≤ $\binom{r+1}{2}$, denotes the i-th entry of the Berenstein-Zelevinsky-Littelmann array BZL(v) of an element v ∈ C λ+ρ .
To define a crystal structure on C (w) λ+ρ , we contend that as a directed graph it is a full subgraph of C λ+ρ ; that is, the edges of C (w) λ+ρ are exactly the edges of C λ+ρ joining two of its vertices.

Remark 7. Definition 6 means that an element v ∈ C λ+ρ belongs to C (w) λ+ρ if and only if the last ℓ(w (r) 0 ) − ℓ(w) entries of BZL(v) are all zero.

The definition is illustrated by the following example, in type A 2 .

Example 7. Recall the crystal C 3,1,0 of highest weight (3, 1, 0) from Example 1. The Demazure subcrystal corresponding to w = σ 1 σ 2 is the highlighted part of the crystal in Figure 5.

For the remainder of this section, we assume that w is a beginning section of w (r) 0 , but not of w (r−1) 0 , i.e. it is as in (6.2). Then the Demazure crystal C (w) λ+ρ inherits some of the branching properties discussed in section 4.5, in particular Proposition 12. Specifically, the type A r−1 subcrystals C µ from the decomposition (4.20) are either disjoint from, or contained in, C (w) λ+ρ . The set of µ such that C µ is contained in C (w) λ+ρ is easy to characterize from λ and w.
We make this precise in the proposition below.
Proposition 15. Let w = w (r−1) 0 σ r · · · σ r−k and C (w) λ+ρ the corresponding Demazure crystal. Let C µ ⊂ C λ+ρ be a subcrystal of type A r−1 from the decomposition (4.20). Then C µ has either no vertices in C (w) λ+ρ , or it is contained in it. Furthermore, for a µ that interleaves with λ + ρ, we have C µ ⊆ C (w) λ+ρ if and only if µ j = λ j+1 + r − j for j > k + 1. Taking the union over such µ, we obtain a decomposition of C (w) λ+ρ into components C µ .

Proof. Most of the work for the proof has been done in Section 4. Note that by Proposition 11, C µ is a component of C λ+ρ in the decomposition (4.20) if and only if µ and λ + ρ interleave. Further, it was noted in Remark 4 that the last r segments of the BZL path agree for every element v ∈ C µ . Thus, by Remark 7, either every vertex of C µ is contained in C (w) λ+ρ , or none of them are. To characterize the weights µ such that C µ ⊆ C (w) λ+ρ , recall that by Proposition 11, v ∈ C λ+ρ belongs to C µ if and only if the top two rows of T(v) are λ + ρ and µ. Further, by Proposition 10, Γ(T(v)) = BZL(v), so the top row of BZL(v) is given by $b_{\binom{r}{2}+j}(v) = \lambda_{j+1} + r - j - \mu_j$ for every 1 ≤ j ≤ r. In particular, we have b i (v) = 0 for every i > ℓ(w) = $\binom{r}{2} + k + 1$ if λ j+1 + r − j = µ j holds for every j > k + 1.
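To illustrate (our computation, in the setting of Example 7): let r = 2, λ + ρ = (3, 1, 0), and w = σ 1 σ 2 = w (1) 0 σ 2 , so that k = 0. The condition of Proposition 15 requires µ j = λ j+1 + r − j for j > 1, i.e. µ 2 = 0. Combined with the interleaving condition 3 ≥ µ 1 ≥ 1 ≥ µ 2 ≥ 0, the components C µ contained in C (w) 3,1,0 are exactly those with µ ∈ {(1, 0), (2, 0), (3, 0)}.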
The following proposition relates the right-hand side of Theorem 1 to similar sums over complete highest-weight crystals of a lower rank. It will be key in the proof of that theorem, but is at this point a straightforward consequence of parts (b) and (c) of Proposition 11, and Proposition 15.
The following notation and terminology serve to facilitate these computations.
Recall that the first row of the Γ-array, (Γ 11 , . . . , Γ 1r ) is the same for every element of a component C µ ⊆ C (w) λ+ρ . With λ fixed, we phrase our notation in terms of this r-tuple. Lemma 17 justifies the choices made in the following definition.
Parts (i) − (v) of the following lemma justify the choices in Definition 7. Part (vi) will be convenient in later computations.
(i) The tuple Γ is λ-admissible if and only if the weights λ + ρ r and µ interleave.
(vi) With the notation as above, we have (6.14).

Proof. Note first that the condition (6.11) is satisfied exactly if Γ = (Γ 11 , . . . , Γ 1r ) is the first row of the array Γ(T), when T is a pattern with top rows λ + ρ and µ. With this observation, the proof is straightforward from Propositions 12, 15 and 16. For (i), the statement follows directly from (6.11). Again by (6.11), we have that Γ is (λ, k)-admissible if and only if for any k + 1 < j we have µ j = λ j+1 + r − j. By Proposition 15, this is equivalent to C µ ⊆ C (w) λ+ρ . This proves (ii). Part (iii) is true because µ j−1 = µ j ⇐⇒ µ j−1 = λ j + r − j + 1 = µ j , together with (6.11). For part (iv), recall that by part (c) of Proposition 10, if v lowest is the lowest element of C λ+ρ , then wt(v) − wt(v lowest ) can be expressed from the entries b i (v) of BZL(v). We have wt(v lowest ) = w 0 (λ + ρ). Now (4.14) implies the claim. To prove (v), recall that by Remark 3, if µ is not strongly dominant (i.e. Γ is non-strict), then G (n,λ+ρ) (v * ) = 0. Furthermore, if µ is strongly dominant, then by (4.25), the Gelfand-Tsetlin coefficient corresponding to v * only depends on the first row of the BZL-array. Now since the first two rows of T(v * ) are λ + ρ and µ, and Γ is the first row of Γ(T(v * )), the statement follows immediately from comparing (6.10) and (4.16).
Reduction and proof of the Main Theorem
We are ready to begin the proof of Theorem 1. The expression on the left-hand side involves a large sum of Demazure-Lusztig operators,

(7.1) $\sum_{u \le w} T_u.$

The idea behind the proof is that one may replace this expression by progressively simpler ones, eventually reducing the statement of Theorem 1 to the description of the polynomial

(7.2) $(T_r T_{r-1} \cdots T_1)(x^{w_0(\lambda)}).$
The statement describing the polynomial (7.2) is then proved by induction in Section 8. In the proof we restrict our attention to the case where the Weyl group element w (a beginning section of w (r) 0 ) has length at least $\binom{r}{2} + 1$. This leads to no loss of generality by the following remark.
Remark 8. The statement of Theorem 1 in type A r but for ℓ(w) ≤ $\binom{r}{2}$ is equivalent to an instance of the theorem in type A r−1 . Let λ be as in the statement of the theorem, λ ′ = (λ 2 , λ 3 , . . . , λ r+1 ) and suppose ℓ(w) ≤ $\binom{r}{2}$. Then in fact w is a beginning section of w (r−1) 0 . The statement (2.4) for type A r , λ and w is the analogous statement for type A r−1 , λ ′ and w, except both sides are multiplied by $x_{r+1}^{\lambda_1}$. On the left-hand side, T 1 , . . . , T r−1 all commute with multiplication by x r+1 . As for the right-hand side, in the decomposition (4.20), C (w) λ+ρ is contained in the component $C_{\lambda'+\rho_{r-1}}$ of the lowest element, v lowest ∈ C λ+ρ . We have G λ+ρ (v lowest ) = 1. The statement now follows from Proposition 11.
Hence from now on, we shall assume that w is as in (6.2). Recall that w is fixed by a choice of the pair r, k where 0 ≤ k < r. Call the statement of Theorem 1 for such a fixed w and fixed n (but for any dominant, effective weight λ) IW (n) r,k . Proving IW (n) r,k for any pair 0 ≤ k < r proves Theorem 1.
Theorem 2 is the special case of Theorem 1 where w = w (r) 0 , i.e. k = r − 1. We will sometimes use the notation T ok (n) r = IW (n) r,r−1 .

Remark 9. Much, but not all, of the notation introduced above, in previous chapters, and in what follows depends on the value of n. In particular the meaning of D i , T i , (the action of) σ i , G (λ,w) (v), T ok (n) r and IW (n) r,k depend on n, but w 0 , W, C (w) λ+ρ and wt(v) do not. We will usually suppress n from the notation. When reading the statements and proofs below, one should keep in mind that the meaning varies with n. The entire argument of the proof concerns a fixed (but arbitrary) n.
The reduction of IW r,k to the simpler statement is itself an induction on r. We will phrase two more statements, M r,k (Proposition 20) and N r,k (Proposition 21). These involve smaller expressions of Demazure-Lusztig operators on the left-hand side, and make use of the notation of Definition 7. N r,r−1 describes the polynomial in (7.2).
The technical ingredients of the reduction are stated as lemmas or auxiliary propositions along the way. The proof, using these auxiliary statements, is in section 7.2. The last ingredient is the proof of Proposition 24, which is a rather technical induction, and forms the contents of Section 8.

7.1. Auxiliary statements. First we rewrite both sides of IW r,k in terms of the operator appearing in T ok r−1 , i.e. the operator of the form (7.1) with w = w (r−1) 0 . For the left-hand side, this is accomplished by the following lemma. It is really just a statement about the Bruhat order; we omit the proof.
The first factor on the right-hand side of (7.3) is the operator on the left-hand side of T ok r−1 . By Theorem 6, it is equal to the Demazure operator ∆ w (r−1) 0 . The second factor will appear as the operator in M r,k (Proposition 20).
The following Proposition reproduces the right-hand side of IW r,k in terms of the operator (7.4). It is a consequence of Proposition 16 and Lemma 17. It is proved in section 7.3.
Proposition 19. Assume IW r−1,r−2 (= T ok r−1 ) holds. Then (7.5) holds.

Lemma 18 and Proposition 19 together produce both sides of IW r,k as the operator in (7.4) applied to a polynomial. The fact that the "inputs" are the same up to annihilation by this operator is the statement that we will call M r,k . The next proposition phrases the statement M r,k explicitly for any 0 ≤ k < r.
Call this statement (that (7.6) holds for any dominant weight λ) M r,k .
The statement M r,k lends itself to an obvious simplification. On the left hand side, there is a sum of k + 1 strings of Demazure-Lusztig operators. The statement N r,k involves only one of them.
Call this statement (that (7.7) holds for any dominant weight λ) N r,k .
Remark 10. Note that in both M r,k and N r,k , λ is not required to be effective, i.e. it may have negative components. We may however always assume that it is effective, replacing λ by κ = (λ 1 + K, . . . , λ r + K, λ r+1 + K). This can be done because, as an operator, multiplication by (x 1 · x 2 · · · x r+1 ) K commutes with T i and D i for any 1 ≤ i ≤ r, and $(x_1 x_2 \cdots x_{r+1})^K \cdot x^{w_0(\lambda)} = x^{w_0(\kappa)}$.

The following lemma is straightforward.
Lemma 22. Proposition 21 implies Proposition 20; that is, N r,k implies M r,k for every pair 0 ≤ k < r.

As a last step in the sequence of replacing Theorem 1 with simpler statements, we note that in the statement N r,k , the parameter k is the interesting one. This is the content of Lemma 23 below. The proof is straightforward by renaming variables, and keeping in mind that multiplication by x i commutes with T j and D j if i ∉ {j, j + 1}.
Lemma 23. If N k+1,k is true, then N r,k is true for every r > k. In fact, N k+1,k implies a slightly stronger statement than N r,k : the difference of the left-hand side and the right-hand side is annihilated not just by D w (r−1) 0 , but by the Demazure operator corresponding to the long word in the group generated by σ r−k , σ r−k+1 , . . . , σ r−1 .
The statement N k+1,k will be proved in Section 8 as Proposition 24. We are now ready to give the proof of Theorem 1.

7.2. The proof of Theorem 1. By Proposition 24 (proved in Section 8), we have that N k+1,k holds for any nonnegative k. By Lemma 23, this implies that N r,k holds for any pair of integers 0 ≤ k < r, i.e. Proposition 21 is true. By Lemma 22, this proves Proposition 20, i.e. M r,k for any pair of integers 0 ≤ k < r.
We prove IW r,k for any pair of integers 0 ≤ k < r by induction on r.
To start, notice that both M 1,0 and IW 1,0 assert the same identity when λ 1 ≥ λ 2 , with the sum taken over the same set of rank-one Gelfand-Tsetlin patterns T. Thus IW 1,0 is the same as M 1,0 , and in particular, IW r,k is true if r = 1.

Now let r > 1, 0 ≤ k < r and assume that IW r−1,r−2 = T ok r−1 is true. We know M r,k holds, hence (7.10) holds up to annihilation, i.e. the difference of the two sides of (7.10) is annihilated by D w (r−1) 0 . By Theorem 6, the difference is then also annihilated by the corresponding Demazure operator; that is, we have (7.11). Rewriting the left-hand side of (7.11) by Lemma 18, and the right-hand side by Proposition 19, we arrive at exactly the statement IW r,k . Thus IW r,k is true for any pair of integers 0 ≤ k < r. By Remark 8, this completes the proof of Theorem 1.

7.3. The proof of Proposition 19. We prove that if T ok r−1 (equivalently, IW r−1,r−2 ) holds, then so does the identity of Proposition 19. By Proposition 16, we have (7.14), where the sum is over all µ = (µ 1 , µ 2 , . . . , µ r ) that interleave with λ + ρ and satisfy µ j = λ j+1 + r − j for j > k + 1. We claim that (7.15) holds. Since µ interleaves with λ + ρ, it is dominant and effective. We distinguish between two cases according to whether µ is strongly dominant or not.
If µ is strongly dominant, then µ − ρ r−1 is dominant and effective. In this case (7.15) is the statement T ok r−1 (IW r−1,r−2 ) for the weight µ−ρ r−1 , hence it is true by the assumption that T ok r−1 holds.
8. Proof of the statement N r,r−1

In Section 7, the proof of Theorem 1 and Theorem 2 was reduced to describing the action of the string of Demazure-Lusztig operators T r · · · T 1 on a monomial, i.e. the statement N r,r−1 . This section consists of the proof of the statement N r,r−1 . We recall the statement in Proposition 24 below.
Proposition 24. Let λ = (λ 1 , . . . , λ r , λ r+1 ) be any dominant weight. Then (8.1) holds, where ≡ means that the difference of the left and right hand sides is annihilated by D w (r−1) 0 . Recall that the relevant notation has been introduced in section 6.2. In this section, we use v to denote v = t^n = q^{−1}.
Let us abbreviate both sides of the equation (8.1). The proof is by induction on r. The base case is fairly straightforward using the definitions and Claim 5; we omit this rank-one computation and contend that both sides turn out to be equal to the same expression.

For the induction step, we assume that N k+1,k holds for k < r − 1.

Claim 25. It suffices to show that if N k+1,k holds for k < r − 1, then (8.4) holds.

Proof. The proof is straightforward using the fact that multiplication by $x_{r+1}^{\lambda_1}$ commutes with the operators T 1 , . . . , T r−1 , that multiplication by $x_{r+1}^{\lambda_1}$ and T r both commute with D w (r−2) 0 , and Lemma 8.
The remainder of this section will consist of proving (8.4) from the assumption that N k+1,k holds for k < r − 1. The argument will proceed as follows. After introducing some convenient notation (in 8.1), we shall simplify the induction step (in 8.2). Computing the corresponding rank r − 1 expression directly, and comparing the result with R (λ) r (x), we find that there is a polynomial "left over". The fact that this polynomial is annihilated by D w (r−1) 0 is called F r (Proposition 26). In fact, the computation shows that assuming N r−1,r−2 , the statements F r and N r,r−1 are equivalent (Lemma 27). Thus it remains to prove Proposition 26 by showing the statement F r ; this is done in section 8.3. This will also (partially) be a proof by induction: by Lemma 27, the assumption of N k+1,k for k < r − 1 implies in particular that F j holds for j < r.
We will make repeated use of the following function on pairs of (positive) integers.

We are now ready to tackle the induction step.
The argument in the present section amounts to the following lemma.
Lemma 27. If N r−1,r−2 holds, then N r,r−1 (for λ, x as above) is equivalent to the statement ∀a F µ,a (y) ≡ 0, or, equivalently, ∀a D w (r−1) 0 F µ,a (y) = 0.

Now to complete the induction step, it remains to prove Proposition 26, i.e. that F µ,a (y) is annihilated by D w (r−1) 0 . This is the content of section 8.3.

8.3. Proof of Proposition 26. We distinguish between the cases where a is divisible by n or not. The case when it is not is significantly easier to handle.

8.3.1. The non-divisible case. The goal is to prove that if n ∤ a, then D w (r−1) 0 F µ,Γ 11 (y) = 0.
The strategy to prove (8.18) is the following. Using the conventions introduced in Section 8.1, in particular (8.7) and (8.8), we will rewrite the sum defining F µ,a (y) into smaller pieces according to Γ 0 . Then we write F µ,a (y) as a difference of two pieces. One piece is annihilated by D w (r−2) 0 as a consequence of F r−1 , the other is annihilated by D r−1 . By Lemma 8, this implies that F µ,a (y) is indeed annihilated by D w (r−1) 0 .
The proof of this lemma is a rank one computation using the definition of the group action. For completeness, it is included in Appendix A.
To summarize, we have written F µ,a (y) as a difference of two pieces: the first term is annihilated by D r−1 ; the second is annihilated by D w (r−2) 0 . This completes the proof of the statement F r (in the divisible case), and hence the proof of Proposition 24.
Whittaker functions
We mentioned in the Introduction that Theorem 2 establishes a combinatorial link between metaplectic analogues of the Casselman-Shalika formula. Furthermore, the more general Theorem 1 gives a crystal description of certain Iwahori-Whittaker functions. In this section, we make these statements more explicit.
We recall results of [CGP14] to compare the "Demazure-Lusztig side" of Theorem 2 to constructions in [McN16] and, more specifically in type A, to [CO13] (section 9.1). We relate the "crystal side" of Theorem 2 to Whittaker functions via comparison to [McN11] (section 9.2). In section 9.3 we compare the Demazure-Lusztig expression from Theorem 1 to constructions of Iwahori-Whittaker functions. We recall a relevant statement in three different contexts: for finite dimensional and affine Kac-Moody groups in the nonmetaplectic setting, through comparison with [BBL14] and [Pat14] respectively, and in the metaplectic setting by recalling results of [PP15].

9.1. Metaplectic Whittaker functions via Demazure-Lusztig operators. Formulae about Demazure operators of the long word, in particular Theorem 6 [CGP14, Theorem 3] and Theorem 7 [CGP14, Theorem 4], allow us to interpret the left-hand side of Theorem 1 as the value of a Whittaker function W, constructed as a sum over the Weyl group in terms of the Chinta-Gunnells action. Such a construction can be found in [CO13] for type A, and in [McN16] in greater generality. The connection with results of [McN11] is made explicit in [CGP14, Section 6]. We recall that result here, and give the translation to results of [CO13] in the type A case.

9.1.1. Metaplectic Whittaker functions. We only sketch the definition of the Whittaker function W and refer the reader to [CGP14, Section 6] for details and precise conditions in our notation. Let F be a non-archimedean local field containing the 2n-th roots of unity, ̟ the uniformizer of its ring of integers O. Let G be a split, connected reductive group over F that arises as a special fibre of a group scheme G defined over Z. Let K = G(O) be the maximal compact subgroup of G, T a maximal split torus, B a Borel containing T, B − its opposite, U the unipotent radical of B and U − that of the opposite Borel B − . If Λ is the group of cocharacters of T, then we may define a sublattice Λ 0 of Λ as in (3.6) (for the definition in general type, see [CGP14, Section 2, (3)]). Let G̃ be an n-fold metaplectic cover of G. This in particular means that there is a short exact sequence

$$1 \to \mu_n \to \widetilde{G} \to G \to 1,$$

where µ n is the group of n-th roots of unity. We think of µ n as being identified with a subgroup of C * and let T̃ , the metaplectic torus (respectively, B̃ ) be the preimage of T (respectively, of B) in G̃ . We shall give W as a complex-valued Whittaker function corresponding to an unramified principal series representation of G̃ . Let χ be a character of Λ 0 and ψ : U − → C an unramified character. Then χ determines an extension χ̃ to T̃ as well as a representation ι(χ) of T̃ (induced from its centralizer). In turn, ι(χ) determines an unramified principal series representation of G̃ . Let φ K be a spherical vector; then there is a complex-valued linear functional ξ χ ∈ (ι(χ)) * corresponding to χ and φ K . From these we arrive at the Whittaker function.

9.1.2. Comparison. It is a consequence of the construction (see [CGP14, Section 6]) that W satisfies W(ζugk) = ζψ(u)W(g) for ζ ∈ µ n , u ∈ U, g ∈ G̃ , k ∈ K. This fact and the Iwasawa decomposition G = U T K together imply that it suffices to compute W on T̃ . The identification χ(̟ λ ) = x λ interprets the action of the Weyl group on Λ as W acting on χ. Setting v = t^n = q^{−1}, where q is the order of the residue field O/̟O, it makes sense to talk about a value (δ −1/2 W χ )(̟ λ ) in terms of the expressions produced by metaplectic Demazure and Demazure-Lusztig operators acting on monomials.
(Here δ is the modular quasicharacter of B.) Note that the equality of the two lines on the right-hand side is a consequence of [CGP14, Theorem 4] (i.e. Theorem 7); the statement is valid in any type.
We finish this comparison by arriving at the same statement in type A using results of [CO13]. As in [CO13, Section 9], define the factor j(w, x).
We have the following formula of Chinta-Offen.

Theorem 30 ([CO13, Theorem 4]). Let λ be a dominant coweight. Then (9.4) holds, where w acts on x λ as in Definition 1.
We may rewrite j(w, x) in a more familiar form. A simple computation shows that for any w ∈ W we have

(9.5) $j(w, x) = \operatorname{sgn}(w) \cdot \prod_{\alpha \in \Phi(w^{-1})} x^{m(\alpha)\alpha}.$

Here λ is a dominant weight and I −w 0 λ is the same as (δ −1/2 W χ )(̟ λ ) above, up to a relatively trivial constant factor. (It follows from our argument below that the value of this factor is in fact one.) The crystal C λ+ρ is parametrized in [McN11] in terms of Lusztig data; to compare results we sketch the translation of McNamara's result into the notation of Gelfand-Tsetlin arrays.
9.2.1. Lusztig data and McNamara's result. We recall notation from [McN11, Section 8]. Note first that the long word chosen in loc. cit. agrees with our choice of w 0 from (4.10). Let λ = (λ 1 , . . . , λ r+1 ) and let us use the notation for a root system of type A r as before (see section 3.1). In particular, recall that we have Φ + = {α i,j = e i − e j | 1 ≤ i < j ≤ r + 1}. We write m ∈ C λ+ρ for tuples as above. For an α = α i,j ∈ Φ + we say m α := m i,j is circled if m i,j = 0, and boxed if equality holds in (9.6). Furthermore, define

(9.7) $r_{i,j} = r_\alpha = \sum_{k \le i} m_{k,j}.$
Now we may use the functions h ♭ and g ♭ defined in Section 3.3.2 to define, in (9.8), a coefficient w(m, α) corresponding to m ∈ C λ+ρ . With the notation as above, one has the following.

Theorem 32 ([McN16, Theorem 8.6]). The value of the integral I λ which calculates the metaplectic Whittaker function is zero unless λ is dominant; and for dominant λ it is given by

(9.9) $I_\lambda = \sum_{m \in C_{\lambda+\rho}} \prod_{\alpha \in \Phi^+} w(m, \alpha)\, x^{m_\alpha \alpha}.$
As before, we write x such that χ(̟ λ ) = x λ for the unramified χ used to define the principal series representation.

9.2.2. Translation into Gelfand-Tsetlin language. It is convenient to compare the Lusztig data of C −w 0 λ+ρ to the Gelfand-Tsetlin arrays for C λ+ρ . For C −w 0 λ+ρ , the condition (9.6) takes the form (9.10), and m i,j is boxed if (9.10) is satisfied with an equality.
Consider the following bijection between m ∈ C −w 0 λ+ρ and Γ(T v ) for v ∈ C λ+ρ . Note that (h, k) satisfy 1 ≤ h ≤ k ≤ r if and only if i = r + 1 − k and j = r + 2 − h satisfy 1 ≤ i < j ≤ r + 1. Further, m i,j = r i,j − r i−1,j . The bijection may be expressed in terms of the corresponding Γ-array as

(9.12) $m_{i,j} = \Gamma_{h,k} - \Gamma_{h,k+1}.$
Thus m i,j is circled if and only if Γ h,k = Γ h,k+1 , i.e. Γ h,k is circled by Definition 3. Similarly, one may check that m i,j is boxed if and only if $\sum_{t=j}^{r+1} m_{i,t} = a_{h-1,k-1} - a_{h,k}$, i.e. if and only if Γ h,k is boxed. Comparing the definition of w(m, α) in (9.8) with the definition of the Gelfand-Tsetlin coefficients in Definition 4, we find that for any m ∈ C −w 0 λ+ρ and corresponding v ∈ C λ+ρ , we have

(9.13) $\prod_{\alpha \in \Phi^+} w(m, \alpha) = G^{(n,\lambda+\rho)}(v).$
The content of Theorem 33 identifies the "crystal side" of Theorem 2 as a metaplectic Whittaker function.

9.3. Constructions of Iwahori-Whittaker functions. The operators T u lend themselves to the study of Whittaker functions not only through expressions of the long word, as seen in section 9.1, but also through their relationship to Iwahori-Whittaker functions. This was mentioned in 1.3.1; in this section, we elaborate on this connection by recalling results of [BBL14], [Pat14] and [PP15]. These sources express Iwahori-Whittaker functions in terms reminiscent of the left-hand side of Theorem 1,

(9.16) $\sum_{u \le w} T_u,$

in the nonmetaplectic finite-dimensional, loop group, and metaplectic finite-dimensional settings, respectively.

9.3.1. The Whittaker functional and Iwahori fixed vectors. Brubaker, Bump and Licata [BBL14] consider the values of a Whittaker functional on Casselman's basis of functions in a principal series representation fixed by the Iwahori subgroup. They work with the classical, nonmetaplectic Demazure-Lusztig operators T w (w ∈ W ). (Their definition [BBL14, (2), (3)] essentially agrees with our Definition 3.19 when n = 1.) We recall some additional notation; see [BBL14] for the precise definitions of the objects involved. Let T̂ be a split maximal torus of the Langlands dual Ĝ ; as explained in [BBL14, Section 2], z ∈ T̂ (C) corresponds to an unramified character τ z of T. An element λ ∈ Λ corresponds to a coset T (F )/T (O); let a λ be a coset representative. Consider the principal series representation π = Ind G B (τ ). Let Ω τ : Ind G B (τ ) → C denote the Whittaker functional. Let J be the Iwahori subgroup (i.e. the preimage of B − (O/̟O) in K); the space Ind G B (τ ) J of Iwahori fixed vectors has a standard basis {Φ z w } w∈W , whose elements are supported on Iwahori double-cosets.
We may also consider [BBL14, Section 5] the modification $\widetilde{W}_{\lambda,w}(z) = \sum_{y \le w} W_{\lambda,y}(z)$. The connection between Iwahori-Whittaker functions and Demazure-Lusztig operators is expressed by the following theorem.
Theorem 34 ([BBL14, Theorem 1]). For any dominant weight λ, we have W λ,1 (z) = z λ . Furthermore, if w ∈ W and σ i is a simple reflection such that σ i w > w in the Bruhat order, then W λ,σ i w (z) = T i W λ,w (z).
The following straightforward corollary illustrates the relevance of operators (9.16).
Corollary 35. For any dominant weight λ and w ∈ W, we have $\widetilde{W}_{\lambda,w}(z) = \big(\sum_{u \le w} T_u\big)\, z^\lambda$.

9.3.2. Iwahori-Whittaker sums on loop groups. In this section we shift our perspective slightly. We recall results of Patnaik [Pat14] that demonstrate the use of Demazure-Lusztig operators in the study of Whittaker functions in yet another setting: on p-adic points of an affine Kac-Moody group. Let G be an affine Kac-Moody group over a non-archimedean field; ̟, q, K, U, U − as before. Let W now denote the (affine) Weyl group of G; and Π 0 the basis of the corresponding finite root system. Let I and I − denote the Iwahori subgroups. In [Pat14, Section 4] the Whittaker function W is defined on G. Furthermore, determining W is reduced to the computation of values W(̟ λ ∨ ), for any λ ∨ ∈ Λ ∨ affine coweight. The main theorem [Pat14, Theorem 7.1], a generalization of the Casselman-Shalika formula for the computation of W(̟ λ ∨ ), is proved through the introduction of Iwahori-Whittaker sums W w,λ ∨ and a recursion result on W w,λ ∨ in terms of Demazure-Lusztig operators. The recursion result [Pat14, Proposition 5.5] is recalled in Proposition 36 below.
The definition [Pat14, (2.29)] of Demazure-Lusztig operators T a (a ∈ Π 0 ) essentially agrees with Definition 3.19 when n = 1 and the root system is of type A. The Iwahori-Whittaker sums W w,λ ∨ are defined [Pat14, Definition 4.5] by summing an unramified principal character ψ of U − along fibres of the multiplication map $m_{w,\lambda^\vee} : UwI^- \times_{I^-} I^- \varpi^{\lambda^\vee} U^- \to G$. (For details on how to interpret the unramified character ψ on elements of the fibre, see loc. cit.) The Whittaker function may then be written as a sum of these Iwahori-Whittaker sums [Pat14, (4.21)]: $\mathcal{W}(\varpi^{\lambda^\vee}) = \sum_{w \in W} \mathcal{W}_{w,\lambda^\vee}$. The following proposition phrases the recursion of the Iwahori-Whittaker sums in terms of Demazure-Lusztig operators.
(Here the simple reflection in T a acts on W w ′ ,λ ∨ termwise; the ring C σ a [Λ ∨ ] is an extension of C[Λ ∨ ] containing T a (W w ′ ,λ ∨ ).) This recursion is used in [Pat14, Section 7.2] to conclude that $\mathcal{W}_{w,\lambda^\vee} = q^{\langle \rho, \lambda^\vee \rangle}\, T_w(e^{\lambda^\vee})$ [Pat14, (7.3)] and to compute the value of W(̟ λ ∨ ), proving the generalization of the Casselman-Shalika formula.

9.3.3. The metaplectic setting. Recent results of Manish Patnaik and the present author [PP15] indicate that Demazure-Lusztig operators can be used to express Iwahori-Whittaker functions directly in the metaplectic setting as well. In particular, the techniques of [Pat14] seen above are applicable in the finite dimensional metaplectic setting. The Iwahori-Whittaker functions W w,λ ∨ can again be defined as a formal generating series using fibres of the map m w,λ ∨ ; an argument similar to that in [Pat14] proves that the value of the metaplectic Whittaker function W(̟ λ ∨ ) can be expressed as a sum of these: $\mathcal{W}(\varpi^{\lambda^\vee}) = q^{-\langle 2\rho, \lambda^\vee \rangle} \sum_{w \in W} \mathcal{W}_{w,\lambda^\vee}$ [PP15, Section 5.2]. It turns out that the W w,λ ∨ then satisfy a similar recursion.

Appendix A. Since n | Γ 12 − a, we have g ♭ (Γ 12 − a) = v · g Γ 12 −a . Substituting into (A.6), we get (A.7). To rewrite this in the form of (A.5), note that by the definition of the Chinta-Gunnells action in type A (3.11), we have (A.8). Comparing (A.8) to (A.7), we see that the two expressions agree. This completes the proof of (A.5), and thus of Lemma 28.
"year": 2016,
"sha1": "1f632579a3b4318401a79eb6ed51390ade74682c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "1f632579a3b4318401a79eb6ed51390ade74682c",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
259039324 | pes2o/s2orc | v3-fos-license | Prediction of 3D Velocity Field of Reticulated Foams Using Deep Learning for Transport Analysis
Data-driven deep learning models are emerging as a new method to predict the flow and transport through porous media with very little computational power required. Previous deep learning models, however, experience difficulty or require additional computations to predict the 3D velocity field which is essential to characterize porous media at the pore scale. We design a deep learning model and incorporate a physics-informed loss function that enforces the mass conservation for incompressible flows to relate the spatial information of the 3D binary image to the 3D velocity field of porous media. We demonstrate that our model, trained only with synthetic porous media as binary data without additional image processing, can predict the 3D velocity field of real reticulated foams which have microstructures different from porous media that were studied in previous works. Our study provides deep learning framework for predicting the velocity field of porous media and conducting subsequent transport analysis for various engineering applications. As an example, we conduct heat transfer analysis using the predicted velocity fields and demonstrate the accuracy and advantage of our deep learning model.
Introduction
Porous media are typically characterized experimentally to obtain important volume-averaged properties, such as the permeability and the Nusselt number. The permeability provides a measure of the ease for fluid to flow through a porous medium. The Nusselt number provides a measure of ease for energy to exchange between fluid and porous media. For various engineering applications, understanding the volume-averaged behavior guides the design and optimization in engineering applications. However, these properties do not 1 3 capture the complete distribution of the flow and energy at the pore scale, which is crucial in making inferences at larger scales (Santos et al. 2020).
With recent advances in imaging and digital reconstruction of porous media, one can obtain local, pore-scale details of flow through porous media using numerical simulations. The physical structures can be digitally reconstructed from micro-computed tomography ( -CT) images. Direct numerical simulations can then be performed to obtain the velocity and scalar fields by solving the Navier-Stokes equations using various numerical methods such as the finite volume method (FVM) (LeVeque et al. 2002) or the lattice Boltzmann method (LBM) (White et al. 2006). However, the amount of computational resources and time required to obtain accurate solutions are substantial due to the very fine grid sizes necessary to capture the complex microstructures and tortuous flow paths of porous media. This in turn hinders systematic design and optimization of porous media.
Data-driven deep learning models have emerged as an alternative to CFD methods to circumvent these challenges. The physics-informed neural network (PINN), for example, obtains the velocity and pressure fields as a function of position and time by incorporating the governing equations, boundary conditions, and initial conditions directly into the model (Mao et al. 2020;He and Tartakovsky 2021;Jin et al. 2021;Cai et al. 2021Cai et al. , 2022. However, they are often computationally expensive to train, and the autoencoder (AE) network utilized in PINN can experience difficulty in learning high-frequency functions that are present in complex, multiscale problems (Tancik et al. 2020;Karniadakis et al. 2021). Similarly, the PointNet architecture (Qi et al. 2017;Kashefi et al. 2021a) performs an endto-end mapping between the vertices of a computational grid and the corresponding velocity and pressure values using the AE. It has been applied to predict the permeability of sandstones (Kashefi et al. 2021b).
Convolutional neural network (CNN) is another architecture that has been examined for fluid dynamics. CNNs are well suited for image-based learning such as image-to-image translation (Isola et al. 2017;Choi et al. 2018), medical image segmentation (Shen et al. 2017;Kayalibay et al. 2017;Wang et al. 2018), and flow field reconstruction using superresolution (Fukami et al. 2019(Fukami et al. , 2021. They have also been examined for fluid flow in porous media. Wu et al. (2018) utilized CNN and AE to predict the permeability of 2D synthetic porous media that they generated using Voronoi tessellation of randomly scattered points within 10% error. Sudakov et al. (2019) also used a combination of CNN and AE to predict the permeability of certain sandstones within an average error of 4%. Similarly, Kamrava et al. (2020) used a model consisting of CNN and AE to predict the permeability of sandstones with roughly 20% porosity and obtained an R2 score above 0.9. Zhang et al. (2022) used a mix of CNN and AE to predict the permeability from lowresolution images 2D synthetic porous media generated using bicubic interpolation (Keys 1981). They obtained an R2 score above 0.9 when predicting the permeability of porous media between 40% and 80% porosity. Marcato et al. (2022) also used a mix of CNN and AE to predict the permeability and the filtration rate of porous media consisting of disordered spheres. They achieved errors in the permeability and filtration rate as low as 2% and 5%, respectively.
Unlike the above models, which only provided an integrated property, other recent CNN models were designed to predict the pore-scale 3D velocity fields. Santos et al. (2020) presented a CNN model to predict the velocity field, although only in the stream-wise direction, of various sandstones. Their model was trained using disordered packs of spherical grains with porosity between 10% and 30%. Their model showed errors as low as 1% in permeability, but it required four precomputed inputs: the Euclidean distance, the maximum inscribed sphere, and the time of flight in two directions, and was not trained to satisfy the law of mass conservation. Wang et al. (2021d) used CNN to predict the 2D and 3D velocity field of synthetic porous media that they generated using an algorithm described by Liu and Mostaghimi (2017) where they segment a field of uniformly distributed random numbers after applying a Gaussian blurring kernel with different sizes. Their model related the binary image and the Euclidean distance to the velocity fields. The model predicted the permeability within 1% error for 2D synthetic porous media with permeability ranging from 1 × 10 −19 m 2 to 1 × 10 −12 m 2 . However, the error increased by more than 10% for 3D synthetic porous media with permeability ranging from 1 × 10 −14 m 2 to 1 × 10 −12 m 2 . They similarly observed larger voxel-wise errors in the 3D velocity fields, which they did not quantify. Wang et al. (2021a) utilized CNN to predict the permeability and velocity field with errors as low as 0.03% and 14%, respectively. Their model enforced the connectivity among subsampled porous media and directly employed the Navier-Stokes equation, but it was trained only using 2D porous media. Santos et al. (2021) further modified their model to extract the spatial-velocity relationship in the stream-wise direction at different scales to achieve errors in the permeability as low as 1% for input size as large as 512 3 . Zhou et al. (2021) developed a CNN model that showed an L 2 error of 10% for synthetic porous media consisting of disordered spheres with porosity from 25 to 35%. However, the model required precomputation of the Euclidean distance and the low-resolution velocity field.
In this study, we utilize CNN and implement a physics-informed loss function that enforces the mass conservation for incompressible flows to predict the 3D velocity field of real reticulated foams from only their binary images. To our knowledge, a reasonable prediction of the full 3D velocity field at the pore scale using only the binary image has not yet been reported. An ideal model should not require precomputation, which can be cumbersome. Moreover, the prediction of the 3D velocity field in both stream-wise and span-wise directions is essential for accurate transport analysis. The result of heat transfer analysis without the span-wise velocity is erroneous, as shown in Fig. 1.
We also study complex, inhomogeneous reticulated foams which have microstructures significantly different from typical porous media considered in previous studies. Reticulated foams have high porosity, large specific surface area, mechanical robustness, and light weight, which are advantageous in many engineering applications including filtration (True et al. 2004; Plesch et al. 2012), catalytic reactions (Chen et al. 2012; Lepage et al. 2012; Furler et al. 2014), latent energy storage (Zhao et al. 2014; Huang et al. 2019; Zheng et al. 2021), and heat exchangers (Boomsma et al. 2003; Huisseune et al. 2015). Their unique pore-scale characteristics can increase the learning difficulty for deep learning models. We design our model to have separately trained submodels that learn the 3D spatial relationship of each velocity component and a main model that enforces the law of mass conservation. We demonstrate that our model provides comparable accuracy to the traditional CFD methods both at the macro-scale and pore scale, and our physics-informed loss function significantly reduces the divergence of the velocity fields. The prediction using our deep learning model only takes a few seconds on modern workstations while traditional CFD methods require a few hours. Our design is also memory efficient, so that the training process can be handled by a single Graphical Processing Unit (GPU). We further utilize our model in heat transfer analysis to demonstrate its accuracy and advantage in consecutive transport analysis for various engineering applications. The overall workflow of the present study is presented in Fig. 2.

Fig. 1 Example of the results from heat transfer analysis. Using only the stream-wise velocity field shows significant error in the temperature field as the upstream information cannot propagate through the tortuous paths of the porous media
We describe our deep learning model, physics-informed loss function, and training methodology in Sect. 2. In Sect. 3, we highlight the unique pore characteristics of reticulated foams and demonstrate the accuracy of our model both at the macro-scale and pore scale. We then assess the accuracy and advantage of our deep learning model in heat transfer analysis. Concluding remarks are presented in Sect. 4.
Neural Network Architecture
Our neural network (Fig. 3) uses the convolutional U-Net structure (Ronneberger et al. 2015). It utilizes a series of stacked convolutional layers to extract both high- and low-level features and reconstruct the output. Our model decomposes the velocity field into component-wise submodels. It is similar to a previous work by Ribeiro et al. (2021) where the velocity components were trained separately (Wang et al. 2021c). However, we incorporate the submodels at the encoding branch of the U-Net to extract component-wise hierarchical features. Each submodel processes the input binary data S representing a porous medium and outputs a velocity field V n , where subscript n denotes the velocity component in either x, y, or z direction such that:

(1) V n = F n (S; w n , b n ).

Here, F n is the trained submodel with optimized weights w n and biases b n for each velocity component. The outputs of the submodels are combined within the trained model G such that:

(2) V = G(V x , V y , V z ; w, b),

where weights w and biases b are optimized to enforce the physics-informed loss function as described in Sect. 2.2. We use the batch normalization layer and the dropout layer to help generalize the model and prevent over-fitting. We use the scaled exponential linear unit (SeLu) (Klambauer et al. 2017) as the nonlinear activation function instead of the rectified linear unit because the velocity can be negative in the span-wise direction.

Fig. 2 Workflow of our study to predict the velocity field directly from the binary image of porous media using the deep learning model. Prediction of the velocity field takes only a few seconds using our deep learning model. Using our deep learning model, we perform heat transfer analysis to assess the accuracy and advantages of our deep learning model
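A minimal Keras sketch of this component-wise design is given below. It is illustrative only: the layer counts, channel widths, and names are our placeholder choices rather than the architecture of Fig. 3, which uses additional convolutional blocks, batch normalization, and dropout as described above.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def submodel(shape=(120, 120, 120, 1), dropout=0.2):
    """Minimal 3D U-Net-style encoder-decoder for one velocity component."""
    inp = tf.keras.Input(shape)
    c1 = layers.Conv3D(8, 3, padding="same", activation="selu")(inp)
    c1 = layers.BatchNormalization()(c1)
    p1 = layers.MaxPool3D(2)(c1)                   # 120 -> 60
    c2 = layers.Conv3D(16, 3, padding="same", activation="selu")(p1)
    c2 = layers.Dropout(dropout)(c2)
    u1 = layers.UpSampling3D(2)(c2)                # 60 -> 120
    u1 = layers.Concatenate()([u1, c1])            # skip connection
    out = layers.Conv3D(1, 3, padding="same")(u1)  # V_n = F_n(S)
    return Model(inp, out)

# Component-wise submodels F_x, F_y, F_z, trained separately with plain MSE
f_x, f_y, f_z = submodel(), submodel(), submodel(dropout=0.1)

# Main model G: combine component predictions into a 3-channel velocity field
inp = tf.keras.Input((120, 120, 120, 1))
v = layers.Concatenate()([f_x(inp), f_y(inp), f_z(inp)])
v = layers.Conv3D(3, 3, padding="same")(v)  # trained with the PIMSE loss
main_model = Model(inp, v)
```

In this arrangement, each submodel can be trained on single-component targets first, after which the main model is trained with the physics-informed loss of Sect. 2.2.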
We tuned the hyperparameters using a method of discrete grid search. We used the default parameters for all batch normalization layers. We used a dropout rate of 0.2 for the x and y directions and 0.1 for the z direction. A lower dropout rate of 0.001 was used for the main model. To train, we used the Adam optimizer for all models with different learning rates of 0.0008 for the x and y directions and 0.0006 for the z direction. We trained the main model with a default learning rate of 0.001. We used the TensorFlow library (Abadi et al. 2015) to construct and train our model. We used a mini-batch size of four. We also applied an early stopping criterion with 50 learning iterations of patience to prevent preemptive stoppage and over-fitting. Each training run did not exceed 8 h. We trained using a single NVIDIA V100 GPU with 16GB of memory.
A major challenge in deep learning is the amount of required memory for training (Santos et al. 2021). All model parameters, including weights and biases, the inputs, and the outputs, must be locally stored during training. Even a single batch consisting of a 120 × 120 × 120 tensor for the input and the output can consume a considerable amount of the memory. Previous studies had to compensate with either a reduction in the resolution of the velocity field or in the number of training data (Santos et al. 2020; Wang et al. 2021d). We believe that our approach of decomposing the velocity into its components helps reduce the memory required to train our model. However, the batch size remains constrained by the limitations imposed by GPU memory. The total number of trainable parameters of our model is approximately 5 million, which is an order of magnitude less than a previously reported model (Wang et al. 2021d).
Physics-Informed Loss Function
We implement a physics-informed loss function (PIMSE) that incorporates the differential form of the law of mass conservation for incompressible flows as part of the general mean squared error (MSE) loss function, Eq. (3). Here, N is the total number of voxels (L 3 ), the superscript * denotes the predicted value, and the divergence term is weighted by a penalty coefficient. The divergence of the velocity field is calculated using the second-order central difference scheme.
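Writing β for the penalty coefficient (β is our notation; the original symbol is not reproduced here), the loss described above takes the form

$$\mathrm{PIMSE} = \frac{1}{N} \sum_{i=1}^{N} \left\lVert \mathbf{V}_i - \mathbf{V}_i^{*} \right\rVert_2^{2} + \frac{\beta}{N} \sum_{i=1}^{N} \left\lvert \nabla \cdot \mathbf{V}_i^{*} \right\rvert,$$

a rendering assembled from the description (voxel-wise MSE plus an L1 penalty on the divergence) rather than the paper's exact typography.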
Our PIMSE expands upon the previous work by Wang et al. (2021d), which only balanced the 2D planar mass flux. Mohan et al. (2020) have implemented the divergence-free condition directly into the neural network. However, we implement the divergence-free condition into the loss function to simplify the model. We choose to impose the divergence-free condition for all voxels with an equal weight and hence define the divergence loss with L1 penalization. The weighting factor is chosen to be 3 such that the values of the MSE and the divergence error are comparable at the start of the training and that the learning process is not dominated by one metric.
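A minimal TensorFlow sketch of this loss is shown below. It assumes a channel-last layout (batch, L, L, L, 3) and unit voxel spacing, and it evaluates the central-difference divergence on interior voxels only; BETA reflects the weighting factor of 3 mentioned above, and all names are ours.

```python
import tensorflow as tf

BETA = 3.0  # weighting factor of 3, balancing the MSE and divergence terms

def divergence(v):
    """Second-order central-difference divergence on interior voxels.

    v: tensor of shape (batch, L, L, L, 3) holding (Vx, Vy, Vz);
    unit voxel spacing is assumed.
    """
    dvx = (v[:, 2:, 1:-1, 1:-1, 0] - v[:, :-2, 1:-1, 1:-1, 0]) / 2.0
    dvy = (v[:, 1:-1, 2:, 1:-1, 1] - v[:, 1:-1, :-2, 1:-1, 1]) / 2.0
    dvz = (v[:, 1:-1, 1:-1, 2:, 2] - v[:, 1:-1, 1:-1, :-2, 2]) / 2.0
    return dvx + dvy + dvz

def pimse_loss(v_true, v_pred):
    """Voxel-wise MSE plus an L1 penalty on the predicted divergence."""
    mse = tf.reduce_mean(tf.square(v_true - v_pred))
    div = tf.reduce_mean(tf.abs(divergence(v_pred)))
    return mse + BETA * div
```

Setting BETA to zero recovers the plain MSE used to train the submodels.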
We choose the MSE loss function over the mean absolute error or the mean absolute percent error, which can be problematic when the velocity distribution spans several orders of magnitude with a large number of voxels with near-zero velocity. The MSE also puts more emphasis on the voxels with relatively higher velocity that govern the dominant flow paths and transport.
For training the sub-models, we use the general MSE loss function (Eq. (3) with the penalty coefficient set to zero) since the divergence-free condition cannot be imposed on a single velocity component. We also use the MSE loss function to train our model for comparison purposes.
Data Set Generation
For training and validation purposes, we generated synthetic porous media with randomly dispersed spherical pores to emulate the stochastic nature of the reticulated foams (Fig. 4). We used the overlapping spheres function in the open-source library PoreSpy (Gostick et al. 2019) to distribute spherical pores with radii of a specified mean value and standard deviation. The final porosity values varied from 70 to 80%. We chose the parameters of the synthetic porous media such that the range of normalized permeability (see Eq. (7)) is similar to that of our reticulated foams.
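A sketch of this generation step with PoreSpy is given below. The radius and porosity values are illustrative placeholders (the study drew radii from a specified mean and standard deviation, not reproduced here), and the phase convention of the returned boolean image may need inverting depending on whether the spheres represent pores or grains.

```python
import numpy as np
import porespy as ps

# Illustrative parameters only; radii were drawn from a distribution in the study
im = ps.generators.overlapping_spheres(shape=[120, 120, 120], r=12,
                                       porosity=0.75)
# PoreSpy images are boolean; invert if the spheres should be the pore phase
solid = (~im).astype(np.uint8)
print("fraction of sphere phase:", im.sum() / im.size)
```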
To assess our model, we manufactured and characterized reticulated silicon carbide foams with three different target pore densities of 45, 65, and 80 pores per inch (PPI). We imaged their microstructures using an X-ray µ-CT scanner (SkyScan 1172, Bruker) with a resolution of 13.35 μm per voxel. We downsampled the images of the 45 and 65 PPI foams using average pooling such that the final resolution is 26.7 μm per voxel. At least five pores were located within a 120 × 120 × 120 tensor, consistent with the representative elementary volume sizes reported in previous works (Zhang et al. 2000; Diani et al. 2014, 2015; Santos et al. 2020; Wang et al. 2021b). Downsampling led to negligible changes in the microstructures of the reticulated foams since the length scale is much larger than the image spatial resolution. Relevant pore characteristics of the synthetic porous media and the reticulated foams are summarized in Table 1.
We next performed CFD simulations to obtain the ground-truth velocity fields. We numerically solved the steady-state Stokes equations using ANSYS Fluent 18.2:

$$\nabla \cdot \mathbf{V} = 0, \qquad \nabla P = \mu \nabla^2 \mathbf{V}.$$

Here, P is the pressure and µ is the dynamic viscosity. We neglected gravitational and body forces and assumed constant fluid transport properties. Periodic boundary conditions were imposed at the inlet and the outlet. The pressure gradient was specified along the stream-wise (z) direction. The computational domain was mirrored to impose the periodic boundary condition. We only considered the original cubic domain for our training data. When sub-sampling the computational domain, the loss of the global connectivity and the mismatch of the boundary conditions of the sub-samples can introduce errors. However, we expect this effect to be negligible here because the original domain serves as the representative elementary volume and has consistent boundary conditions. No-slip conditions were applied on the solid surfaces. Symmetry boundary conditions were applied to the side surfaces parallel to the stream-wise direction. The computational domain and boundary conditions are summarized in Fig. 5. We performed a grid independence study such that increasing the number of elements resulted in less than 1% error in the permeability. A typical computational grid consisted of 5 million unstructured elements.

Fig. 4 Example of (a) digitally reconstructed reticulated foam and (b) generated synthetic porous medium with randomly dispersed spheres. Insert images show the planar binary image of each porous medium where the black and the white represent the solid and the void, respectively
To validate our numerical simulation results, we experimentally determined the permeability of the reticulated foams using a setup shown in Fig. 6. We constructed the setup using polyvinyl chloride (PVC) pipes. The inlet was connected to a pressurized air supply. The main section containing the reticulated foam was connected to the inlet and the outlet by threaded pipe fittings. We used a rotameter to measure and control the air flow rate and a differential pressure transducer to measure the pressure difference across the specimen. Reticulated foam specimens are shown in the insert of Fig. 6a, and the measured pressure differences are shown in Fig. 6b. The values of the pressure difference from our numerical simulations are within 10% of those from our experiments. The nonlinear relationship is due to the fluid properties and the limit of our experimental setup that results in a Reynolds number of approximately 1. We note, however, that different Reynolds numbers in the laminar regime do not affect the accuracy of our numerical simulation. We normalize the velocity fields before they are provided as inputs to our deep learning model. Darcy's law (Whitaker 1986) states that

$$\langle \mathbf{V} \rangle = -\frac{K}{\mu} \nabla P,$$

where K is the permeability of the porous media, assumed to be a constant. According to the Carman-Kozeny relation, which is a good approximation for a wide variety of porous media (Zick and Homsy 1982; Larson and Higdon 1989; Kaviany et al. 2012), the permeability scales as the square of the pore length scale. Based on these relations, we normalize the 3D velocity fields using Eq. (7), where the normalized velocity vector is denoted as Ṽ and the image spatial resolution as R. Our normalization scheme allows us to have a consistent physical length scale across all data, which is crucial in image-based learning. Since Ṽ is simply a multiple of V for a given image spatial resolution and flow conditions, we interchangeably refer to the normalized velocity as V. The normalized velocity is then scaled to have a maximum value of 10 for training.
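These relations translate into a short post-processing step, sketched below. The Darcy estimate follows the stated law directly; the exact prefactor of the normalization in Eq. (7) is not recoverable from the text, so the R² scaling shown is our dimensionally consistent assumption.

```python
import numpy as np

def darcy_permeability(vz, dpdz, mu):
    """Estimate K from Darcy's law, K = mu * <Vz> / |dP/dz|.

    vz   : 3D stream-wise velocity field (m/s), zero inside the solid
    dpdz : magnitude of the applied pressure gradient (Pa/m)
    mu   : dynamic viscosity (Pa*s)
    """
    u_superficial = vz.mean()  # average over the full domain, solids included
    return mu * u_superficial / dpdz

def normalize_velocity(v, dpdz, mu, R):
    """Dimensionless velocity; the R**2 scaling follows the Carman-Kozeny
    argument in the text, but the exact prefactor is our assumption."""
    return mu * v / (dpdz * R**2)
```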
To create our training data, we performed 3D image augmentation on the original solid data and the full 3D velocity fields. We obtained 270 training samples for the submodels (270 × 120 × 120 × 120 × 1). We obtained a second set of training data consisting of 135 samples for the main model (135 × 120 × 120 × 120 × 3). The total number of training samples is reduced to accommodate three velocity components. We applied random shifts along the x and y directions. We also applied random flips about the x and y axes to ensure that the network sees different orientations of the data during training. We changed the velocity direction for the x and the y components accordingly to ensure the symmetric boundary condition. We randomly assigned 10% of the training data for validation and another 10% for testing.
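A NumPy sketch of this augmentation is given below. Flips about the x and y axes negate the matching velocity component, as described above; the wrap-around shifts via np.roll are our simplification and assume the sample tolerates periodic shifting.

```python
import numpy as np

def augment(solid, vel, rng):
    """Random flips and shifts along x and y.

    solid: (L, L, L) binary array; vel: (L, L, L, 3) holding (Vx, Vy, Vz).
    Flipping an axis reverses the sign of the matching velocity component,
    preserving the symmetric side boundary conditions.
    """
    if rng.random() < 0.5:  # flip about the plane normal to x
        solid = np.flip(solid, axis=0)
        vel = np.flip(vel, axis=0) * np.array([-1.0, 1.0, 1.0])
    if rng.random() < 0.5:  # flip about the plane normal to y
        solid = np.flip(solid, axis=1)
        vel = np.flip(vel, axis=1) * np.array([1.0, -1.0, 1.0])
    sx, sy = rng.integers(0, solid.shape[0], size=2)  # wrap-around shifts
    solid = np.roll(solid, (sx, sy), axis=(0, 1))
    vel = np.roll(vel, (sx, sy), axis=(0, 1))
    return solid, vel

rng = np.random.default_rng(0)
```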
Microstructure of Reticulated Foams
To our knowledge, no previous studies have considered high-porosity reticulated foams. Due to their unique pore characteristics, the reticulated foams are very different from typical porous media such as sandstones and bead packs, which can make the training process more difficult. As an example, we compare four pore characteristics of the reticulated foams and of Fontainebleau sandstone (Santos et al. 2020) to highlight the differences in Fig. 7. The four pore characteristics are the Euclidean distance, the diameter of the maximum inscribed sphere (MIS), and the detrended time of flight in directions along and against the flow. The Euclidean distance provides a compact representation of the space available for fluid flow. The maximum inscribed sphere provides information about the local pores. The detrended time of flight provides information on the tortuous flow paths within the porous media.
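The first two geometric characteristics can be computed from a binary image with standard tools; the sketch below uses SciPy's Euclidean distance transform and PoreSpy's local-thickness filter for the maximum inscribed sphere. The choice of PoreSpy here is our assumption, not necessarily the tooling used in the study.

```python
import numpy as np
from scipy import ndimage
import porespy as ps  # provides a maximum-inscribed-sphere ("local thickness") filter

def pore_characteristics(solid):
    """solid: 3D boolean array, True where solid. Returns per-void-voxel values."""
    void = ~solid
    # Distance from each void voxel to the nearest solid voxel.
    edt = ndimage.distance_transform_edt(void)
    # Maximum inscribed sphere: for each void voxel, the size of the largest
    # sphere that covers it while fitting entirely within the void space
    # (units are voxels; see the PoreSpy documentation for the exact convention).
    mis = ps.filters.local_thickness(void)
    return edt[void], mis[void]
```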
The histograms of the Euclidean distance and the maximum inscribed sphere are much wider than those of the sandstones. The wider distributions are due to the high porosity and large pore sizes. The reticulated foams have a porosity between 75% and 85%, as summarized in Table 1, while the Fontainebleau sandstones have a porosity between 5% and 40%. The pore sizes of the reticulated foams are on the order of 100 μm, but the pore sizes of Fontainebleau sandstones are on the order of 1 μm. The high porosity and large pore sizes of reticulated foams cause a considerable amount of the volume to serve as flow paths, as shown in Fig. 8. Figure 9 shows the distribution of the absolute velocity in each direction for the reticulated foams. The wide distribution of the magnitude of the velocity components can complicate the learning process. The oscillation in the probability at very low velocity is due to the limited accuracy of the numerical solver. In contrast, the distributions of detrended time of flight are significantly skewed toward the minimum because of the low tortuosity of the reticulated foams. Fontainebleau sandstones have a tortuosity above 1.5, while our reticulated foams are around 1.2, as summarized in Table 1. However, the heterogeneity of the reticulated foams results in high velocities in the span-wise direction, as shown in Figs. 8 and 9. The velocity distributions in the positive and negative directions are also identical, as shown in Fig. 10. The two identical distributions aggravate the non-unique spatial velocity relationship, which increases the learning difficulty.
We believe that incorporating component-wise submodels helps the model learn the spatial velocity relationship considerably. We illustrate the averaged convolutional features at the cross-section orthogonal to the z direction after the first and last convolutional blocks of the encoding branch for each velocity component in Fig. 11. The corresponding solid image and velocity field of the reticulated foam are also shown. The features extracted at the end of the first convolutional block mostly retain the binary image of the reticulated foam. We expect this behavior since the first convolutional block needs to guide the reconstruction of the velocity field in the decoding branch through the skip-connection. We note, however, that they still show slightly different weights and biases. Conversely, the low-level features illustrate a clear difference, suggesting that the weights and biases of a submodel are optimized to fit the spatial velocity relationship for the corresponding direction. Although the distributions of the x and y velocity components are nearly identical, the extracted 3D features show a difference. Because the two velocity components are rotated 90° from each other, a different spatial dependency is expected. It may be possible to merge the two submodels, but it would require further modification and is beyond the scope of this study. We also mention that the low-level features do not completely resemble the velocity fields since the majority of the reconstruction is performed within the decoding branch. By incorporating component-wise submodels, we extract unique spatial velocity relationships for each component and ensure efficient training of the model.
Permeability
We first quantify the accuracy of our model at the macro-scale. Figure 12 shows the comparison between the ground-truth and the predicted permeability for the test data of synthetic porous media and reticulated foams. The model shows excellent agreement in predicting the permeability for both porous media. We obtain an average error of 1% and a maximum error of 2% for the synthetic porous media. For the reticulated foams, we obtain an average error of 3% and a maximum error of 6%.
We also compare the scaled total absolute flow error (STAFE) in Fig. 13. The STAFE (Wang et al. 2021d) is built from the planar flow rates q_n, where q_n is the velocity component in the n direction summed over each plane orthogonal to n; it totals the absolute error in the predicted planar flow rates in each direction and scales the result by the true stream-wise flow. The STAFE accounts for errors in the predicted mass flow rate in each direction and reflects more details of the flow at the pore scale than the permeability does (Wang et al. 2021d). The average STAFE for the synthetic porous media is 0.04, and the average STAFE for the reticulated foams is 0.14. The average STAFE of our model is orders of magnitude smaller than that reported by Wang et al. (2021d), which signifies its accuracy. We note that the STAFE is slightly higher for the reticulated foams, likely due to the intrinsic difference in the microstructures between the two porous media. The cellular structure of the reticulated foam is governed by energy minimization (Gibson and Ashby 1997), and the pores typically take a tetrakaidecahedron (14-sided polyhedron) shape (Richardson et al. 2000), which differs from randomly dispersed spherical pores. Porosity and tortuosity are slightly different even though the normalized permeability is similar, as summarized in Table 1. The velocity distributions also indicate a slight difference at moderate and high velocity, as shown in Fig. 14. Our results are consistent with previous studies (Santos et al. 2020, 2021) where larger errors were reported for porous media that are different from the training data. Our training data can be expanded to cover various microstructures and permeability ranges; however, computational power is often the limitation.

Fig. 12 Comparison of the permeability of the synthetic porous media and reticulated foams. The predicted permeability is in excellent agreement, with an average error of 1% and 3% for the synthetic porous media and reticulated foams, respectively

Fig. 13 STAFE for the synthetic porous media and reticulated foams. The average STAFEs are 0.04 and 0.14, respectively. The STAFEs for the reticulated foams are slightly larger than those for the synthetic porous media due to the difference in the microstructures
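Since the exact STAFE expression appears only in Wang et al. (2021d) and is garbled in this copy, the sketch below encodes one plausible reading of the verbal definition: component-wise planar flow rates compared plane by plane and scaled by the mean true stream-wise flow rate. The scaling choice in particular is an assumption.

```python
import numpy as np

def planar_flow_rates(v, axis):
    """Sum a single velocity component over each plane orthogonal to `axis`.
    v: ndarray (nx, ny, nz) holding one velocity component."""
    other = tuple(a for a in range(3) if a != axis)
    return v.sum(axis=other)  # one flow rate per plane along `axis`

def stafe(v_pred, v_true):
    """Scaled total absolute flow error, sketched under the stated assumptions.
    v_pred, v_true: ndarrays (nx, ny, nz, 3) with components (vx, vy, vz)."""
    q_ref = np.abs(planar_flow_rates(v_true[..., 2], axis=2)).mean()
    err = 0.0
    for n in range(3):  # accumulate error in each flow direction
        qp = planar_flow_rates(v_pred[..., n], axis=n)
        qt = planar_flow_rates(v_true[..., n], axis=n)
        err += np.abs(qp - qt).mean()
    return err / q_ref
```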
Pore-Level Velocity Fields
We now consider the pore-scale accuracy of our model. Here, we focus on the reticulated foams. Figure 15 visualizes the pore-scale velocity fields. A good agreement is shown between the ground-truth and the predicted velocity fields. Figure 16 illustrates the 2D planar velocity fields to visualize the details. The planar velocity fields also show good qualitative agreement. We observe that no voxels exhibit a difference in the velocity direction, and only a few voxels show a difference in the velocity magnitude.

Fig. 14 Histogram of the ground-truth velocity fields for the synthetic porous media and the reticulated foams. The two distributions show a slight difference at moderate and high velocity, suggesting a subtle difference in the microstructures

Fig. 15 Visualization of the ground-truth and predicted velocity field for the reticulated foams. The velocity fields show good qualitative agreement
A comparison of the velocity distribution for each component is illustrated in Fig. 17. The distributions show excellent agreement at high velocity. However, the accuracy degrades at low velocity, below approximately two orders of magnitude under the maximum. We believe this trend is due to the use of the MSE loss function, which focuses on the regions with larger velocity and thereby allows very accurate predictions of the permeability and the STAFE. We note that most of the errors at very low velocity reside in the solid region, where the velocity is zero. The model predicts small finite velocities in the solid region since the difference has a negligible impact on the learning process. Hence, we apply a binary mask corresponding to the solid region when assessing the voxel-wise accuracy.
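A sketch of the masking step follows, assuming a boolean solid array and (nx, ny, nz, 3) velocity fields; the names are illustrative.

```python
import numpy as np

def masked_velocity_errors(v_pred, v_true, solid):
    """Zero out spurious predictions inside the solid before a voxel-wise
    comparison. solid: boolean (nx, ny, nz); velocities: (nx, ny, nz, 3)."""
    mask = (~solid)[..., None].astype(v_pred.dtype)  # 1 in pores, 0 in solid
    abs_err = np.abs(v_pred * mask - v_true * mask)
    return abs_err[~solid]  # per-pore-voxel absolute error, all components
```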
We also plot the velocity histogram in both the negative and positive directions for the x and y components in Fig. 18 to assess the prediction of the velocity direction. The velocity fields in both positive and negative directions show excellent agreement at high velocity.

Fig. 17 Distributions of the absolute ground-truth and predicted velocity. The comparison shows excellent agreement at high velocity and disagreement at low velocity. We apply a binary mask corresponding to the solid to eliminate errors at low velocity

Fig. 18 Distributions of the ground-truth and predicted velocity in the span-wise direction. An excellent agreement is shown at high velocity. The model shows larger deviation at low velocity

We emphasize that the required computational time and resources are significantly reduced by using our deep learning model. In this study, a complete CFD simulation required approximately 6 h of CPU time (Intel Xeon Gold 6128) and a maximum of 16 GB of memory. On the other hand, our deep learning model only required 11 s and a negligible amount of memory on the same CPU. With a GPU-equipped workstation, the inference time of our deep learning model is reduced further, taking less than a second, as compared to the order of ten minutes required by GPU-based LBM-RMT (Wang et al. 2021e). The deep learning model provides a significant advantage in conducting optimization and parametric studies of porous media where computational speed is crucial.
Effect of the PIMSE
We train an equivalent model using the MSE loss function to compare and analyze the effect of our PIMSE loss function (Eq. (3)). Figure 19 shows the divergence of the predicted velocity field for the reticulated foams. We find that it is significantly reduced, by almost a factor of 10, when using the PIMSE loss function. Figure 20 shows the STAFE of models with different loss functions. We observe a slight improvement of 6% on average when using the PIMSE. Figure 21 illustrates the distribution of the velocity field. The velocity distributions, however, do not show a significant difference between the two models. We hypothesize that this may be due to the lack of depth and complexity of our model. The loss curve throughout the learning process (Fig. 22a) indicates a slight under-fit. However, increasing the depth and complexity would exceed our available computational limit. Figure 22b shows that the final loss values for the two models are comparable, suggesting that the model with the MSE loss function may already predict close to the expectation for the given complexity and depth of the model. The values for PIMSE are slightly higher due to the addition of the divergence-free condition.
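The paper's PIMSE is defined in its Eq. (3), which is not reproduced in this copy; the sketch below shows the generic construction, an MSE term plus a central-finite-difference penalty on the divergence, written here in PyTorch. The framework choice and the `weight` hyperparameter are our assumptions.

```python
import torch

def divergence(v, h=1.0):
    """Central-difference divergence of v: tensor of shape (B, 3, D, H, W),
    with channels (vx, vy, vz) paired with spatial axes (D, H, W)."""
    dvx = (v[:, 0, 2:, 1:-1, 1:-1] - v[:, 0, :-2, 1:-1, 1:-1]) / (2 * h)
    dvy = (v[:, 1, 1:-1, 2:, 1:-1] - v[:, 1, 1:-1, :-2, 1:-1]) / (2 * h)
    dvz = (v[:, 2, 1:-1, 1:-1, 2:] - v[:, 2, 1:-1, 1:-1, :-2]) / (2 * h)
    return dvx + dvy + dvz

def pimse(v_pred, v_true, weight=1.0):
    """MSE plus a penalty on the squared divergence (mass conservation)."""
    mse = torch.mean((v_pred - v_true) ** 2)
    div = divergence(v_pred)
    return mse + weight * torch.mean(div ** 2)
```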
Another possible reason is the coarse uniform grid used in our deep learning model. In this study, a typical computational domain for the CFD simulations required approximately 5 million unstructured elements due to the complex geometry and tortuous paths. On the contrary, the computational domain used for our deep learning model consisted only of approximately 2 million uniformly structured grid cells. Figure 23 compares the number of elements as a function of velocity. The 120 × 120 × 120 computational domain size of the deep learning model is inadequate to fully represent the computational domain of the CFD simulations, especially at moderate velocity magnitudes. The pixelation (Kashefi et al. 2021a) caused by the transition to a uniform structured grid and the reduction in the total number of elements may hinder accurate enforcement of the divergence-free condition in complex geometries. While increasing the voxel resolution of the training data may enhance the benefits of PIMSE, it would exceed our computational capacity. We note that Santos et al. (2021) proposed a multi-scale approach that performs inferences on a larger domain size than the training data and thereby could circumvent this limitation. They demonstrated accurate predictions of permeability for domain sizes of up to 512³ from a training dataset of 256³. It is worth noting that this approach may impact the effectiveness of PIMSE and requires further detailed investigation.

Fig. 19 Divergence of the velocity field between models with different loss functions. We observe an order of magnitude decrease when the PIMSE is utilized as the loss function
Nevertheless, we demonstrate that incorporating PIMSE significantly reduces the divergence of the velocity fields and improves the STAFE by an average of 6%. Our results demonstrate that physical laws can be enforced in the loss function to guide the learning process of image-based CNN models.

Fig. 21 Distributions of the absolute ground-truth and the absolute predicted velocity fields using models with different loss functions. No significant difference is observed between the two models for all velocity components
Heat Transfer Analysis
Here, we capitalize on the advantages of our deep learning model to perform heat transfer analysis of the reticulated foams, as presented in Fig. 2. We first performed similar CFD simulations (ANSYS Fluent 18.2) to obtain the ground-truth temperature fields of the reticulated foams. We numerically solved the equation of conservation of energy under steady-state, negligible viscous dissipation, and no volumetric heat generation assumptions:

$$\rho C_p \left( \mathbf{V} \cdot \nabla T \right) = \nabla \cdot \left( k \nabla T \right), \tag{10}$$

where $\rho$ is the fluid density, $C_p$ is the fluid specific heat, $T$ is the temperature, and $k$ is the fluid thermal conductivity. We also assumed that the fluid properties are independent of temperature. We chose the fluid properties such that the Peclet number, defined as $\mathrm{Pe} = \rho C_p \bar{V} D_p / k$, where $\bar{V}$ is the average stream-wise velocity and $D_p$ is the average pore diameter, is approximately on the order of thousands to emphasize the effect of convection. Hot and cold constant temperature boundary conditions were applied at the solid surfaces and at the inlet, respectively. A symmetry boundary condition was applied to the side surfaces parallel to the stream-wise direction. The computational domain and boundary conditions for the numerical simulation are shown in Fig. 24. We considered a heat transfer problem where the velocity field is fully developed before the fluid exchanges energy with the porous media. This type of problem is relevant in many thermal applications, such as heat-exchanger systems.
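A small helper for the Peclet number defined above follows; the property values are illustrative (roughly water-like) and are merely chosen so that Pe lands on the order of a thousand, as targeted in the study.

```python
def peclet(rho, c_p, k, v_mean, d_pore):
    """Pe = rho * c_p * v * D_p / k: ratio of advective to conductive transport."""
    return rho * c_p * v_mean * d_pore / k

# Illustrative, water-like values with a 100-micron pore scale.
print(peclet(rho=1000.0, c_p=4180.0, k=0.6, v_mean=1.4, d_pore=1e-4))  # ~975
```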
We first compare the Nusselt number, Nu, for each of the reticulated foams, defined as

$$\mathrm{Nu} = \frac{h D_p}{k},$$

where $h$ is the overall heat transfer coefficient, defined as

$$h = \frac{\rho C_p \bar{V} A_c \left( T_o - T_i \right)}{A_s \left( T_s - T_\infty \right)},$$

where $\bar{V}$ is the average stream-wise velocity, $A_c$ is the cross-sectional area, $A_s$ is the solid surface area, $T_o$ and $T_i$ are the mean fluid temperatures at the outlet and inlet, respectively, $T_\infty$ is the cold inlet temperature, and $T_s$ is the hot solid surface temperature. The Nusselt number describes the volume-averaged thermal performance of the porous media. Figure 25 presents the comparison of the Nusselt number obtained using the ground-truth and predicted velocity. We find good agreement in the Nusselt number, with an average error of 11.6%. The Nusselt numbers show a slight over-prediction due to the over-prediction in the velocity, especially for the x and y components (Figs. 17 and 18).
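The sketch below evaluates the Nu and h definitions from simulation outputs. Since the original equations are garbled in this copy, the energy-balance form used here is an assumption reconstructed from the symbols defined in the text.

```python
def nusselt(v_bar, a_c, a_s, t_out, t_in, t_solid, t_inf, rho, c_p, k, d_p):
    """Overall heat transfer coefficient from an energy balance, then
    Nu = h * D_p / k. The balance form is an assumption (see lead-in)."""
    q = rho * c_p * v_bar * a_c * (t_out - t_in)  # heat picked up by the fluid
    h = q / (a_s * (t_solid - t_inf))             # spread over the solid surface
    return h * d_p / k
```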
Fig. 24 Computational domain for heat transfer analysis. Hot and cold constant temperature boundary conditions were applied at the solid surfaces and at the inlet, respectively. Symmetry boundary condition was applied to the side surface parallel to the stream-wise direction

We now consider the accuracy of the temperature fields at the pore scale. Figure 26 illustrates the temperature fields at the cross-section parallel to the z direction for reticulated foam 2, which had the largest STAFE error. The corresponding planar ground-truth and predicted velocity fields are also presented. The temperature fields show good qualitative agreement. However, a noticeable number of voxels show a difference due to the errors in the velocity field.

Fig. 25 Comparison of the Nusselt number obtained using the ground-truth and predicted velocity. It shows good agreement with an average error of 11.6%
Fig. 26 Temperature profile at the cross-section parallel to the z direction obtained using the ground-truth and predicted velocity fields. Corresponding planar ground-truth and predicted velocity fields are also shown. Temperature profiles show good qualitative agreement

To investigate in more detail, we compare the temperature profiles at the center line through the computational domain in the stream-wise direction for all reticulated foams in Fig. 27. The temperature fields obtained using the deep learning model adequately characterize the overall transport of energy for all reticulated foams. However, we observe voxel-wise errors where the temperature values are both under- and over-predicted. This behavior was also reported by Wang et al. (2021d), where the voxel-wise errors in the velocity field caused under- and over-accumulation of species concentration in solving the mass transport problem.
Fig. 27 Temperature distribution at the center line of the computational domain (x = 60 voxels and y = 60 voxels) in the stream-wise direction. Temperature profiles qualitatively agree well. However, the voxel-wise errors in the predicted velocity fields directly translate to voxel-wise errors in the temperature fields

Nonetheless, we drastically increase the computational speed of the heat transfer analysis of the reticulated foams while demonstrating good agreement in the Nusselt number and temperature fields. Our approach can be extended to other transport analyses where knowledge of the velocity field is crucial, such as filtration, solute transport, and mass transfer.
Conclusion
We designed a deep learning model to predict the velocity fields of reticulated foams from only their binary images and incorporated a physics-informed loss function to enforce the law of mass conservation for incompressible flows. We demonstrated that our model, trained only with synthetic porous media, showed excellent accuracy in permeability (error below 6%) and in STAFE (below 0.2), although the accuracy decreased at the pore scale. We also showed that our physics-informed loss function significantly reduced the divergence of the velocity fields, by a factor of 10, and improved the STAFE by 6%, although the improvement was minor at the pore scale. We further illustrated that our deep learning model provides accurate velocity fields as inputs to subsequent heat transfer analysis. We obtained an average error of 11.6% for the Nusselt number, but the voxel-wise errors in the velocity field directly translated into errors in the temperature fields.
We also demonstrated a significant reduction in the amount of computational resources and time required to characterize the flow and transport through complex porous media. We shortened the computation time from more than 6 h to 11 s. Our approach is advantageous in parametric and optimization studies for various engineering applications where hydrodynamic and transport behavior is essential, such as filtration, solute transport, multiphase flow, and mass transfer. | 2023-06-03T15:17:02.298Z | 2023-06-01T00:00:00.000 | {
"year": 2023,
"sha1": "53d3b83a465120b9aab4d7ba1801fa865cb32b20",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1007/s11242-023-01961-1",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "33395b3cdaad7aa4bc2d3ed558442224022f67c1",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
257737891 | pes2o/s2orc | v3-fos-license | Hybrid De Novo Whole-Genome Assembly, Annotation, and Identification of Secondary Metabolite Gene Clusters in the Ex-Type Strain of Chrysosporium keratinophilum
Chrysosporium is a polyphyletic genus belonging (mostly) to different families of the order Onygenales (Eurotiomycetes, Ascomycota). Certain species, such as Chrysosporium keratinophilum, are pathogenic for animals, including humans, but are also a source of proteolytic enzymes (mainly keratinases) potentially useful in bioremediation. However, only a few studies have been published regarding bioactive compounds, whose production is mostly unpredictable due to the absence of high-quality genomic sequences. During the development of our study, the genome of the ex-type strain of Chrysosporium keratinophilum, CBS 104.62, was sequenced and assembled using a hybrid method. The results showed a high-quality genome of 25.4 Mbp in size spread across 25 contigs, with an N50 of 2.0 Mb, 34,824 coding sequences, 8002 protein sequences, 166 tRNAs, and 24 rRNAs. The functional annotation of the predicted proteins was performed using InterProScan, and the KEGG pathway mapping using BlastKOALA. The results identified a total of 3529 protein families and 856 superfamilies, which were classified into six levels and 23 KEGG categories. Subsequently, using DIAMOND, we identified 83 pathogen–host interactions (PHI) and 421 carbohydrate-active enzymes (CAZymes). Finally, the analysis using AntiSMASH showed that this strain has a total of 27 biosynthesis gene clusters (BGCs), suggesting that it has a great potential to produce a wide variety of secondary metabolites. This genomic information provides new knowledge that allows for a deeper understanding of the biology of C. keratinophilum, and offers valuable new information for further investigations of the Chrysosporium species and the order Onygenales.
Introduction
The genus Chrysosporium was proposed by Corda to introduce a single species, Chrysosporium corii [1]. However, Saccardo [2] synonymized that genus with Sporotrichum and, consequently, the former fell into oblivion. More than fifty years later, Hughes [3] reintroduced Chrysosporium for C. corii and Chrysosporium pannorum (syn. Geomyces pannorum), restricting the generic concept of Sporotrichum to those species with wide hyphae, dark conidia and the absence of intercalary conidia. In a revision carried out by Carmichael [4], Blastomyces, Emmonsia, Geomyces, Myceliophthora, and Zymonema were synonymized with Chrysosporium, leaving that genus morphologically highly one-sided. Dominik [5] expanded Carmichael's concept of Chrysosporium a little more, including Sepedonium, a genus that, like Sporotrichum, was later demonstrated to have phylogenetic links with basidiomycetous fungi [6]. Van Oorschot [6], in her monograph on Chrysosporium and allied genera, restored the order to the genus, disaggregating Emmonsia, Geomyces, Myceliophthora, and Zymonema from it, and introducing the genus Trichosporiella, based on colony features, conidial morphology, temperature resistance, and keratin degradation, among other phenotypic characters. Van Oorschot [6] also remarked on the connection between the species.

Low-quality reads were removed by Trimmomatic v0.39 [24], using the ILLUMINACLIP, SLIDINGWINDOW and MINLEN options. Hybrid assemblies with short and long reads were performed using the SPAdes v3.13.0 [25] and MaSuRCA v4.0.5 [26] software, with default settings. All assemblies obtained were evaluated using QUAST v5.1.0rc.1 [27] and BUSCO v5.3.1 [28] to assess the completeness of the genome. Based on the QUAST and BUSCO results, only one assembly was considered for downstream analysis. The best draft assembly was polished using Illumina short-read data with POLCA (from MaSuRCA v.4.0.5).
Genome Information and Comparison with the Closest Species
We present the first hybrid de novo genome sequencing of the ex-type strain of Chrysosporium keratinophilum using short- and long-read technologies. The QUAST analysis showed that the best assembly was obtained with MaSuRCA. The resulting polished genome consisted of 25.4 Mbp, spread across 25 contigs with an N50 of 2.0 Mb and a BUSCO score of 96.0%. This last result is comparable to those of the C. immitis RS, C. posadasii C735 delta SOWgp and A. verrucosus IHEM 4434 genome assemblies (96.8%, 96.8% and 96.3%, respectively), indicating that our assembly was relatively contiguous (Figure 1). A total of 166 tRNAs, with lengths ranging from 67 bp to 129 bp, and 24 rRNAs were predicted in the genome.
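For readers unfamiliar with the N50 statistic reported above (2.0 Mb over 25 contigs), a minimal reference implementation follows; the example contig lengths are invented for illustration and are not the contigs of this assembly.

```python
def n50(contig_lengths):
    """Smallest contig length such that contigs of at least that length
    together cover >= 50% of the total assembly size."""
    lengths = sorted(contig_lengths, reverse=True)
    half, running = sum(lengths) / 2, 0
    for length in lengths:
        running += length
        if running >= half:
            return length

# Hypothetical contig lengths, for illustration only.
print(n50([2_000_000, 1_500_000, 900_000, 400_000]))  # -> 1500000
```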
Assembly statistics of Chrysosporium keratinophilum and its closest phylogenetically related species are reported in Table 2.
Average Nucleotide Identity
Based on the whole-genome alignment, the average nucleotide identity (ANI) between members of the Onygenales showed values ranging from 72.48% to 96.14%. These results confirmed that Chrysosporium keratinophilum belongs to the family Onygenaceae, showing a close relationship with Aphanoascus verrucosus IHEM 4434, with an ANI value of 81.19%, although it is only loosely related to other members of the Onygenales (Amauroascus niger UAMH 3544, Brunneospora queenslandica CBS 280.77, Coccidioides immitis RS, Coccidioides posadasii C735 delta SOWgp, Ophidiomyces ophiodiicola CBS 122913 and Uncinocarpus reesii UAMH 1704) (Figure 2). Based on the ANI results, we accept Chrysosporium keratinophilum CBS 104.62 as belonging to the genus Aphanoascus, as has been previously proposed [22].
In the present study, the highest ANI value obtained was between Coccidioides immitis RS and Coccidioides posadasii C735 delta SOWgp (ANI value = 96.1%), and the lowest values were shown by Ophidiomyces ophiodiicola CBS 122913 when compared with the other analyzed strains (ANI values ≤ 72.7%). Brunneospora queenslandica CBS 280.77 and Amauroascus niger UAMH 3544 showed an ANI value of 83.43%. Our results suggest this ANI value is too high for two strains belonging to different genera, because previous studies have obtained ANI values close to 79% for fungi of the same genus [38]. Therefore, an exhaustive taxonomic review of the Onygenales is recommended in order to look for possible errors in the taxonomic assignment or for limitations of ANI in discriminating between the genera of that order.
Prediction of Genes from the Assembled Genome
Gene annotation, using the BRAKER2 pipeline, resulted in 34,824 coding sequences (CDS) and 8002 protein sequences. Functional annotation, using InterProScan with the Pfam and SUPERFAMILY options, produced a total of 3529 protein families and 856 superfamilies (Supplementary Tables S1-S4). Annotations based on the Pfam and SUPERFAMILY databases assigned functions to 76.6% and 62.7% of the predicted proteins, respectively. The most prevalent Pfam families included the WD domain G-beta repeat, the protein kinase domain, reverse transcriptase (RNA-dependent DNA polymerase), ankyrin repeats (three copies), and the mitochondrial carrier protein. In the case of superfamilies, the analysis showed that the five most prevalent were: P-loop containing nucleoside triphosphate hydrolases, protein kinase-like (PK-like), ribonuclease H-like, NAD(P)-binding Rossmann-fold domains and DNA/RNA polymerases.
Previous studies have shown a fluctuating number of gene families in some members of the Onygenales [39][40][41]. The genome analysis of C. keratinophilum showed a reduction in the number or an absence of gene families related to the degradation of the plant cell wall, such as the cellulase (glycosyl hydrolase family 5), fungal cellulose-binding domain and glycosyl hydrolase family 61. At the same time, analysis showed a higher number of genes from families related to the degradation of animal material, such as the protein tyrosine kinase and subtilase family. Regarding other protein families, we would like to highlight the high frequency of the LysM domain, with a total of 36 genes, being the largest number of genes reported within the order Onygenales [41][42][43]. The LysM domain is linked to various functions, such as improving fungal-fungal union interactions and chitin and keratin degradation, the latter being fundamental in a keratinophilic fungus.
In recent years, various keratinases have been identified both in bacteria and fungi. In bacteria, these enzymes have been reported in some species of Bacillus, Pseudomonas and Stenotrophomonas, among others, and in fungi in genera such as Microsporum, Onygena and Trichophyton [44]. Keratinases are distributed across various families belonging to the serine proteases and metalloproteases [45]. In the current genome, various families of peptidases that were previously associated with keratin degradation [45][46][47] were identified, such as peptidase family S41, dipeptidyl peptidase IV (DPP IV), peptidase family M16, peptidase family M28, the fungalysin metallopeptidase (M36), peptidase family M3 and peptidase family M48, which could be linked to the fact that C. keratinophilum has been described as a keratinophilic species. In this way, keratin degradation by C. keratinophilum could proceed along the following pathway: first, a rupture of the keratin disulfide bonds by bisulfite reductases; then, the endoproteases of the M36 family would act, providing small peptides; next, exoproteases of the M28 family and dipeptidyl peptidase IV (DPP IV) would hydrolyze the peptides into oligopeptides; and, finally, enzymes of the peptidase M3 family can hydrolyze these oligopeptides.
The BlastKOALA tool is a KEGG web service that annotates genomes in order to understand the biological functions and interactions of genes [48]. KEGG route-mapping assigned the annotated genes into six levels and distributed them across 22 KEGG categories. Of the six levels, the most prevalent was metabolism (2921, 39.5%), followed by human diseases (1811, 24.5%) and genetic information processing (808, 10.9%). These enzymes were then categorized according to the functional category. The five most prevalent were: genetic information processing (1597, 43%), carbohydrate metabolism (317, 9%), cellular processes (226, 6%), protein families: signaling, cellular processes (178, 5%) and amino acid metabolism (176, 5%) (Figure 3).

The carbohydrate-active enzymes (CAZymes) are a broad class related to the breaking down of complex carbohydrates and polysaccharides into small molecules [49]. Analysis of CAZymes showed that the genome of C. keratinophilum encodes a large, varied set of CAZyme families that resulted in the identification of 421 genes (Table 3 and Supplementary Tables S5 and S6), a lower value compared to human pathogenic species of the same order, such as Blastomyces dermatitidis, C. immitis or C. posadasii [50]. Based on the results obtained with DIAMOND, glycoside hydrolases (GHs) were the most prevalent family, with 61 enzymes. The next most prevalent were glycosyltransferases (GTs) with 41, the third was the carbohydrate-binding module (CBM) group with 22, followed by the families of auxiliary activities (AAs), carbohydrate esterase (CE) and polysaccharide lyases (PLs) with 15, 7 and 2, respectively. The glycosyltransferase enzymes catalyze the formation of glycosidic bonds by the transfer of sugar moieties from activated donor molecules to specific acceptor molecules [51]. In the present study, the most prevalent glycosyltransferases were GT2, GT1 and GT22. The GT2 family was the group with the highest number of genes (with 18), and is one group of enzymes that synthesizes chitin [52]. Previous investigations have shown that the GT2 families are the most common component in most fungal species [51]. The GT1 enzyme encodes sterol glucosyltransferase, which catalyzes the synthesis of sterol glycosides and membrane-bound lipids, and is widespread in some algae, fungi, bacteria, and animals [53]. Finally, the GT22 family is involved in α-1,2-mannosyltransferase activity, which was previously found to contribute to virulence in fungi [54].
Glycoside hydrolases (GHs) hydrolyze the glycosidic bond between two or more carbohydrates, or between a carbohydrate and a non-carbohydrate [55]. In the present study, GH18, GH47 and GH125 were the most prevalent in this class. The chitinases from family GH18 have been reported previously in fungi and plants. In the case of fungi, these enzymes relate to nutrition, growth, mycoparasitism and virulence [49]. The enzymes GH47 and GH125 relate to the activity of α-mannosidase, although there has been no information on the function of these enzymes until now [56].
For the other families, the most prevalent were CBM50, CE8, AA3 and PL3_2. The CBM50 family is associated with chitinase catalytic domains and is implicated in binding chitin [57]. The CE8 family has pectin methylesterase activity, which is essential for the metabolism of pectin [58]. The AA3 family has FAD-dependent (GMC) oxidoreductase activity, related to the formation of metabolites such as hydroquinones or H2O2 required by other AA enzymes [59]. Finally, the PL3_2 family is a pectin lyase that catalyzes the scission of pectin [58].
Previous studies performed on different pathogenic fungal genera related to Chrysosporium, such as Blastomyces, Coccidioides, Histoplasma and Sporothrix, have shown the absence of CAZymes of the PL class [50,60]. In the genome of strain CBS 104.62, the identification of PL3_2 and PL1_7, both related to pectin degradation, was possible [58,61]. Moreover, it was also possible to identify another CAZyme related to pectin hydrolysis, GH28, which is also absent in the Coccidioides genome. The presence of these families in the analyzed genome could be due to the fact that C. keratinophilum is a saprophytic fungus with soil as its main ecological niche.
The PHI-base is a database that contains verified information on virulence-related genes that affect the outcome of pathogen-host interactions [36]. Based on the PHI analysis, we identified a total of 83 PHI putative genes in the C. keratinophilum genome (1.06% of total genes) (Figure 4 and Supplementary Table S7), Aspergillus fumigatus being the species with the highest number of homologous genes (30 genes), followed by Fusarium graminearum (20 genes), Magnaporthe oryzae (15 genes) and other fungal species (20 genes). Among the genes, the reduced virulence group showed the highest number of genes (35 genes), followed by unaffected pathogenicity with 21, and mixed with 21 genes. The high number of reduced virulence and unaffected pathogenicity genes can indicate that C. keratinophilum CBS 104.62 might be considered to have a weak pathogenic ability. However, various studies consider some strains of Chrysosporium spp. as opportunistic pathogens, causing skin and nail diseases, and deeper infections in immunocompromised patients [62]. The secondary metabolite analysis, using AntiSMASH, classified 27 BGCs into nine types, which, according to the genomic organization principle implicated in transcriptional regulation, could have a role in the production of secondary metabolites by this strain [63] (Figure 5): six non-ribosomal peptide synthetase (NRPS) clusters, six type 1 polyketide synthase (T1PKS) clusters, three terpene clusters, one indole cluster, two type 3 polyketide synthase (T3PKS) clusters, one lasso peptide, one NRPS-like cluster, one beta-lactone and six hybrid clusters. From this, one BGC can be identified as okaramine B, with 85% similarity, and the other three as UNII-YC2Q1O94PT (ACR toxin I), clavaric acid and dimethyl coprogen, with 100% similarity. The UNII-YC2Q1O94PT (ACR toxin I) is associated with the production of leaf spot disease on rough lemon by Alternaria alternata [64], clavaric acid is an antitumor isoprenoid compound that acts as an inhibitor of Ras farnesyl transferase, previously described in Hypholoma sublateritium [65], and finally, dimethyl coprogen is well known as a siderophore chelating iron during depleted conditions in Alternaria alternata [66].
Conclusions
In this study, we present the only genome of Chrysosporium keratinophilum that has been sequenced and published using a hybrid assembly strategy to date. The genome annotation and the genomic analysis provide new knowledge that will allow us to deepen our understanding of the biology of Chrysosporium keratinophilum, and gather new information for further investigations within the Onygenales. In addition, its genetic capability to produce secondary metabolites was successfully determined by the elucidation of the biosynthetic gene pathways, suggesting that the studied strain has a great biosynthetic potential to produce compounds of biotechnological interest. However, future analysis will be necessary to corroborate the in vitro production of such molecules. A previous study determined transcriptionally active genes, as well as their enzymatic products after classifying the biosynthetic genes, using the fungal genomes of anaerobic fungi from the class Neocallimastigomycetes under laboratory conditions [67]. Although our results suggest the probable production of secondary metabolites associated with C. keratinophilum, more studies are needed to prove the production of these compounds by this strain.
| 2023-03-25T15:10:40.861Z | 2023-03-23T00:00:00.000 | {
"year": 2023,
"sha1": "086002d888c549d884e8ebaee65d392f60bda867",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2309-608X/9/4/389/pdf?version=1679545023",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2047a58441c669aa784cfd39d60b4cb9abad608b",
"s2fieldsofstudy": [
"Biology",
"Environmental Science",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
255499792 | pes2o/s2orc | v3-fos-license | Immune Checkpoint Inhibitors and Opioids in Patients with Solid Tumours: Is Their Association Safe? A Systematic Literature Review
Background: Immune checkpoint inhibitors (ICIs) represent one of the most effective treatments for patients with cancer. As their activity relies on host immune system reactivity, the role of concomitant medications such as corticosteroids and antibiotics has been extensively evaluated. Preclinical data suggest that opioids may influence the immune system. Methods: a systematic literature review was performed using specific keywords on the major search engines. Two authors analysed all the studies and selected them according to the following inclusion criteria: 1. data collected from patients older than 18 years affected by solid tumours; 2. description of ICIs' efficacy in terms of PFS, OS, TTF, and ORR; 3. concomitant ICIs-opioids treatment; and the following exclusion criteria: 1. language other than English; 2. analyses not pertinent to the topic. Results: 523 studies were analysed, and 13 were selected and included in our series. A possible negative interaction between oral opioids and ICIs' efficacy was observed. Most evidence was retrospective, and the studies were heterogeneous. Conclusions: Even if oral opioids seem to impact negatively on ICIs' efficacy in cancer patients, to date there is not sufficient evidence to avoid their prescription in this population.
Introduction
Given the wide spread of immune checkpoint inhibitors (ICIs) as a treatment for several tumours, the definition of possible interactions between ICIs and concomitant drugs has recently gained importance. While the negative interactions of antibiotics and corticosteroids are now well known [1][2][3], other medications are still under investigation. Indeed, some in vitro and in vivo experiments have demonstrated the presence of morphine receptors on neoplastic cells [4]. The activation of such receptors may have an impact on both tumour growth and metastatic spread potential. However, a meta-analysis evaluating animal studies concluded that there is no evidence that analgesics, including opioids, increase the occurrence of metastases [5]. Moreover, prospective studies on patients with cancer failed to show any association between opioid use and the risk of recurrence in breast and colorectal cancer patients [6][7][8]. At the same time, the influence of opioids on the immune system has been widely studied, with controversial results, especially in patients with cancer.
The effects of opioids on the immune system and immune cells have been studied for over 40 years. Some opioids are associated with immunosuppressive effects, with developing knowledge that morphine, fentanyl, buprenorphine, and methadone suppress innate immunity while having different effects on adaptive immunity. It is similarly apparent that specific opioids have immunostimulatory effects, some exhibit dual effects, and others have no immunomodulatory effect [9]. Indeed, Wybran et al. showed that morphine can reduce T-cell rosette formation in vitro, an effect that could be reversed by naloxone administration [10]. Conversely, short-term morphine use has been shown to induce IL-2 and IL-6 expression, while chronic use enhances T-reg cell activity, reduces Th17 function and increases µ opioid receptor mRNA expression in T lymphocytes [11][12][13][14]. In vitro studies also showed that morphine reduces T-helper 1 activation while increasing T-helper 2 differentiation and IL-4 production, with the latter effect also being present upon fentanyl, buprenorphine, and methadone exposure [15,16]. Such differences may be ligand-dependent [16]. At the same time, a dose-dependent effect has also been demonstrated with different opioids [17]. Looking at T cell activation, morphine has also been shown to reduce major histocompatibility complex (MHC) class II expression, especially on B-cells, leading to the inhibition of CD4+ cell activation and proliferation [15].
In this systematic review, we aimed at describing the relationship between the immune system and opioids in patients affected by solid tumours.
Materials and Methods
A systematic literature review was performed by using the following search engines: PubMed, Google Scholar, Cochrane, and Cinahl. The following keywords were used: "opioids" OR "concomitant treatments" AND "neoplasm" OR "tumour" OR "cancer" AND "immunotherapy" OR "immune checkpoints inhibitors" OR "PD-1/PD-L1 inhibitors". We considered reports published from 1st January 2000 to 1st August 2022. It was decided to include only studies that analysed adult patients affected by solid tumours. Other inclusion criteria were as follows: (1) the description of ICIs' efficacy outcomes in terms of progression-free survival (PFS), overall survival (OS), overall response rate (ORR), and time to treatment failure (TTF); (2) concomitant ICIs-opioids treatment. Conversely, (1) language other than English and (2) works not pertinent to the topic were excluded. Two authors (MC and PB) reviewed all the studies and approved the selection following the inclusion and exclusion criteria. In case of disagreement, a third author (LB) was asked to make the final decision. The outcomes considered were: PFS, OS, ORR and TTF. Finally, the reviewing process followed the PRISMA guidelines [18], although the review was not registered on the PROSPERO website.
Results
Five hundred and twenty-three studies were analysed, including abstracts, posters, and oral presentations from international meetings [such as the European Society of Medical Oncology (ESMO) and the American Society of Clinical Oncology (ASCO) meetings]. Thirteen studies were finally selected (Figure 1). Of these, four studies were presented at international congresses, one is a preprint report, while the others are published. The studies were conducted from September 2014 to July 2021. The studies' characteristics are summarised in Table 1. All the studies in our series were designed as retrospective collections of clinical data. Among these, 5 out of 13 involved more than one centre. Collectively, data about patients with more than eight different tumour types were collected, with non-small-cell lung cancer (NSCLC) being the most frequent. Other tumours included melanoma, Merkel cell carcinoma, renal cell and urothelial cancers, head/neck tumours, colon, and gynaecological cancers. One study did not specify the type of tumours [19]. The studies included in our series were characterised by different sample sizes (range 64-1012) [3,20]. Although treatment with ICIs was an inclusion criterion for all the studies, only in 8 out of 13 studies was the type of ICI specified. While nivolumab was the most adopted ICI, others included pembrolizumab, atezolizumab, ipilimumab (combined with nivolumab), and avelumab. Kostine et al. only specified the main targets of the ICIs included in their series (i.e., PD-1/PD-L1 or CTLA-4) [21]. In almost all studies, ICIs were used both as first-line treatments and as subsequent therapies. Only one series included patients treated in the first-line setting only [20]. Oral opioids can be divided into strong opioids (morphine, hydrocodone, oxymorphone, oxycodone, fentanyl, buprenorphine, tapentadol, methadone, and hydromorphone) and weak opioids (tramadol and codeine). In this regard, 5 out of 13 studies reported the type of prescribed opioids. In particular, Botticelli et al. included only patients treated with strong opioids (oxycodone, morphine, fentanyl), while Kostine et al. included only patients who used morphine [21,22]. Taniguchi and colleagues reported data about the specific administered molecules (i.e., fentanyl, morphine, hydromorphone, tapentadol, and combined oxycodone-fentanyl), while Mock and colleagues divided the patients into low and high opioid users based on the morphine equivalent daily dose (MEDD; <50 and >50 mg, respectively) [23,24]. Similarly, Weinfeld divided patients into the following three groups: opioid-naïve, and those treated with low or high doses of opioids, considering 60 morphine milligram equivalents per day as the cut-off [19].
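A sketch of the MEDD computation used to stratify patients in Mock et al. and Weinfeld follows. The conversion factors shown are an illustrative subset of published oral morphine-equivalent tables (not reproduced from the reviewed studies), and the >50 mg cut-off follows Mock et al.

```python
# Oral morphine-equivalent conversion factors (illustrative subset of the
# published CDC MME table; methadone is deliberately omitted because its
# factor is dose-dependent and needs specialist conversion).
MME_FACTOR = {
    "morphine": 1.0,
    "oxycodone": 1.5,
    "hydromorphone": 4.0,
    "codeine": 0.15,
    "tramadol": 0.1,
}

def medd(daily_doses_mg):
    """Morphine Equivalent Daily Dose from {drug: total oral mg per day}."""
    return sum(MME_FACTOR[drug] * mg for drug, mg in daily_doses_mg.items())

dose = medd({"oxycodone": 30})                     # 45 mg morphine equivalents
print("high user" if dose > 50 else "low user")    # cut-off used by Mock et al.
```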
Studies were heterogeneous also when accounting for the main outcomes, including progression-free survival (PFS), overall survival (OS) and, in some cases, time to treatment failure (TTF) or response rate (RR). However, all reports showed a negative correlation in terms of OS, PFS or TTF between opioids and ICIs. Taniguchi et al., by analyzing 38 patients treated with opioids matched with 38 opioid-naïve patients, showed a decreased mOS for the first group (4.20, 95% CI 2.53 to 6.20 months, vs. 9.57, 95% CI 2.23 to not reached months; p = 0.018) [23]. Similarly, in 167 patients treated with opioids, the overall response rate was lower than in patients not treated with opioids (16.2% vs. 33.7%; p < 0.001) [25]. Another study analyzed the concomitant use of opioids in 64 cases of advanced NSCLC treated with single-agent ICIs in the first-line setting, showing a reduced median progression-free survival (PFS) for patients treated with concomitant opioids as compared to those not receiving opioids (1.7 months versus 12.7 months, HR 4.16, 95% CI 2.15-8.05, p < 0.001). These associations were maintained in a multivariate analysis that included performance status, clinical stage, and number of metastatic sites [20]. The following results were obtained by Miura et al.: in the multivariate analyses, opioid therapy was associated with a shorter OS (HR 1.54; 95% CI: 1.12-2.11, p = 0.007), together with subsequent lines of treatment and higher ECOG PS [26]. Other studies confirmed this association, especially with high doses of opioids. Indeed, a retrospective study of 212 patients showed a significantly shorter mOS in patients receiving high doses of opioids (at least 60 morphine milliequivalents daily) as compared to those who received low doses (less than 60 morphine milliequivalents daily) or were opioid-naïve (mOS 10 vs. 18 vs. 37 months; p = 0.0515) [19]. Similar results were observed by Mock et al. in NSCLC patients treated with high- versus low-dose opioid therapy (MEDD of >50 and <50 mg, respectively; mOS 3.8 vs. 14.5 months, p = 0.001) [24].
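For context, survival comparisons of this kind are typically Kaplan-Meier estimates with a log-rank test; the sketch below, using the lifelines library with hypothetical column names, shows the general shape of such an analysis. It does not reproduce any of the reviewed studies' code or data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def compare_os(df: pd.DataFrame) -> float:
    """df columns: 'months' (follow-up), 'death' (1/0), 'opioid' (bool).
    Column names are hypothetical; none of the reviewed datasets are public."""
    users, naive = df[df["opioid"]], df[~df["opioid"]]
    km = KaplanMeierFitter()
    km.fit(users["months"], users["death"], label="opioid users")
    print("median OS, opioid users:", km.median_survival_time_)
    result = logrank_test(users["months"], naive["months"],
                          event_observed_A=users["death"],
                          event_observed_B=naive["death"])
    return result.p_value  # two-group log-rank p-value
```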
Discussion
Up to 90% of cancer patients experience pain at some stage of their disease journey, with a third rating the intensity of their pain as moderate to severe, and up to half being undertreated [31]. In the management of cancer pain, the prescribed opioids are divided, as discussed above, into strong and weak. However, weak opioids hold a controversial role in the management of cancer pain and have been demonstrated to be inferior to low-dose morphine for treating moderate cancer pain [32]. The most common non-opioid drugs prescribed for the treatment of cancer pain are acetaminophen/paracetamol; corticosteroids; non-steroidal anti-inflammatory drugs (NSAIDs); anti-neuropathic agents, which include tricyclic antidepressants and anticonvulsants; and, finally, bisphosphonates. The choice of drug is often driven by the relevance of potential adverse effects (AEs) in the single patient. For example, the toxicity profile for NSAIDs includes gastrointestinal and cardiovascular AEs, nephrotoxicity, and hepatotoxicity. A Cochrane review found 10 studies that compared an NSAID with an opioid: 4 found the NSAID to be more effective, whereas 2 studies showed it was less beneficial. Meta-analyses of four of the studies found no significant difference in pain relief but more AEs with opioid use [33]. Similarly, corticosteroids have multiple potential short- and long-term AEs, including those on the immune response, behaviour, and carbohydrate/protein metabolism [34]. Despite corticosteroids being used widely to treat cancer pain, there is limited evidence for their efficacy [35]. Opioid analgesia alone is usually inadequate to obtain neuropathic pain relief, and additional medications are required, mainly antidepressant and anticonvulsant drugs. However, these agents are used mostly in combination with opioids [36]. The evidence for the chronic use of most of the non-opioid drugs in the treatment of cancer pain remains scarce. Most studies have methodological limitations and lack long-term follow-up, so data on the efficacy of these drugs remain limited. Some relevant issues persist, for example, whether NSAIDs and corticosteroids can be safely continued long-term in cancer patients, or which non-opioid drugs are best for specific types of pain and in which combinations [31]. A recent systematic literature review stated that evidence on the efficacy and safety of non-opioid drug combinations in the treatment of cancer pain is scant, as few RCTs have been published to date [37]. There is certainly a need to evaluate non-opioid drug combinations in the management of cancer pain. However, further research on this topic is needed before non-opioid drugs can be recommended to replace or reduce the prescription of opioids for the treatment of cancer pain. Moreover, reduced opioid access could worsen the problem of cancer pain undertreatment and threaten decades of progress in the care of patients with advanced cancer. As discussed above, despite the wide spread of ICIs, the potential pharmacological interactions of these drugs have been studied only recently. In our systematic review, we collect and summarize the existing evidence on the concomitant use of opioids and ICIs.
Regarding the possible influence of opioids on ICIs' efficacy, our systematic review suggests that opioid use may be associated with worse outcomes. None of the studies evaluated specific safety issues when dealing with the concomitant administration of opioids and ICIs. As already stated, our study has several limitations. One of these consists of the retrospective nature of all the included studies. Therefore, concomitant medications have been extracted from prescription files, and some authors did not report the type and/or dosage of opioids. Furthermore, 7 out of 13 studies were designed to include all concomitant drugs during ICIs treatment. Another major source of bias is the heterogeneity of the included populations. Indeed, 6 studies enrolled patients affected by different tumours, while 11 studies included patients treated with different ICIs in different treatment lines.
In almost all the studies, both the performance status and the tumour burden were assessed and evaluated in multivariate analyses, generally maintaining statistical significance, with the exception of the study by Gaucher et al. [25]. It should be emphasised in this regard, as pointed out by Cortellini et al. in their multicentre observational retrospective study, that opioid use at baseline could be associated with lower ECOG-PS and higher tumour burden and may therefore represent another confounding factor [3]. Opioids are usually prescribed to treat and relieve pain, which could be associated with advanced and/or progressive disease. This, as suggested by Miura et al., could explain the association with reduced TTF and OS observed in these patients [26].
Moreover, polypharmacy identifies patients characterised by several comorbidities and a higher tumour burden, often already treated with several lines of therapy, with a well-known reduced response to ICIs [22]. In recent years, a new hypothesis has emerged: perturbation of the gut microbiome composition leads to disruption of gut homeostasis and of the whole immune system. On the other side, we now have robust evidence that the microbial flora plays a crucial role in modulating ICIs' efficacy by influencing the tumour microenvironment. At least part of ICIs' failure may be attributed to a specific patient's microbiome, which shows great variability across individuals, and its composition can be influenced by many factors, such as the specific drugs used. While antibiotics have been demonstrated to negatively influence the gut microbial flora, even opioid use can be called into question. If we consider both statements together, namely that opioids can alter gut homeostasis [38] and that the gut microbiome is able to influence ICIs' efficacy and sensitivity [39], this interaction has clear relevance for everyday clinical practice. Today, with the entry of ICIs into the therapeutic algorithms of the main advanced solid tumours and the parallel development of simultaneous care, the opioids-microbiome-ICIs interaction assumes a non-negligible dimension. Everyday clinical practice has to face many healthcare needs, especially for cancer patients. In recent years, the wide adoption of immunotherapy, thanks to its promising results, has provided new possible treatments also for patients affected by NSCLC. The availability of these new therapeutic approaches made it possible to focus not only on patients' prognosis but also on their quality of life. In this regard, as stated by international guidelines, pain control should always be evaluated and achieved. In NSCLC patients, pain often depends predominantly on bone metastases, which affect almost 30-40% of patients [40]; owing to its characteristics and intensity, most patients are treated with oral opioids from the earliest phases of the disease. However, as described above, recent preclinical [4] and clinical evidence may raise doubts about the use of opioids in patients treated with ICIs due to possible interactions. The results of the different studies are based on retrospective and heterogeneous data and are therefore inadequate to describe the phenomenon effectively. Assuming an established influence of opioids on treatment with ICIs, it should be clarified whether it concerns every prescribed dose level or whether there is an equal level of safety for all patients. These doubts are, moreover, legitimate considering the results of the works of Weinfeld and Mock, who found differences in PFS according to various doses of opioids in NSCLC patients treated with ICIs [19,24]. Pain relief can also be achieved using palliative radiotherapy (RT), which is often considered due to its good results and generally high tolerability. Moreover, recent studies showed a synergistic action between RT and ICIs due, for example, to the depletion of regulatory T cells by RT in the tumour microenvironment [23,41].
Because several drugs, such as antibiotics, corticosteroids, PPIs and, in a less clear way, opioids [21], can influence ICI efficacy, a mindful use of concomitant therapies during ICI treatment should be encouraged in terms of timing and dosage, evaluating therapeutic appropriateness and real utility.
Conclusions
Oral opioids seem to impact ICI efficacy in cancer patients in different and not completely understood ways. Possible mechanisms rely on the presence of opioid (morphine) receptors on cancer cells and on the influence of opioids on the immune system; others involve the role of the microbiome. To better understand the relationship between ICIs and concomitant drugs such as opioids, prospective studies with large sample sizes should be encouraged. Concomitant drugs during chemotherapy and ICI treatment should be prescribed carefully, even if, as of today, there is not enough evidence to avoid the prescription of opioids in patients treated with ICIs.
Ferromagnetic transition in a double-exchange model
We calculate the temperature of the ferromagnetic transition in a double-exchange model with classical core spins, for an arbitrary relation between the Hund exchange coupling and the electron bandwidth, by solving the Dynamical Mean Field Approximation equations.
I. INTRODUCTION
The double-exchange (DE) model [1,2,3] is one of the basic models in the theory of magnetism. Magnetic ordering appears in this model due to the Hund exchange coupling between the core spins and the mobile carriers. The Hamiltonian of the model is

H = -\sum_{n n' \alpha} t_{n-n'} c^{\dagger}_{n\alpha} c_{n'\alpha} - J \sum_{n \alpha \beta} \mathbf{S}_n \cdot c^{\dagger}_{n\alpha} \hat{\boldsymbol{\sigma}}_{\alpha\beta} c_{n\beta},  (1)

where c and c^{\dagger} are the electron annihilation and creation operators, \mathbf{S}_n is the operator of the core spin, t_{n-n'} is the electron hopping, J is the Hund exchange coupling between a core spin and a conduction electron, \hat{\boldsymbol{\sigma}} is the vector of the Pauli matrices, and \alpha, \beta are spin indices. The model (the core spins being treated as classical vectors) was thoroughly studied already in the papers [1,2,3]. In recent years, because of the general interest in manganites, the model was brought into the focus of attention, and much more was achieved (see reviews [4,5,6] and references therein). However, some basic properties of the model are still known only partially. For example, most of the papers dealing with the DE model, starting from the classical paper by de Gennes [3], considered the DE Hamiltonian with infinite exchange (and with the addition of the antiferromagnetic superexchange, which is crucial for the explanation of the magnetic properties of manganites).
The other extreme that has been studied is the particular case of weak exchange (much smaller than the electron bandwidth), in which the DE Hamiltonian can be reduced to the Ruderman-Kittel-Kasuya-Yosida (RKKY) Hamiltonian (see review [7] and references therein).
In this paper we calculate the temperature T_c of the ferromagnet-paramagnet transition in a double-exchange model for an arbitrary relation between the Hund exchange coupling and the electron bandwidth by solving the Dynamical Mean Field Approximation equations. Note that we treat the core spins as classical vectors. (When the quantum nature of the core spins is taken into account, the Hamiltonian (1), which in this case is often called the periodic Kondo model, becomes much more complicated; only scanty results have been obtained for that model so far.) The problem with classical spins, as we shall see, combines tractability with rich and interesting physics.
II. HAMILTONIAN AND DMFA EQUATIONS
As was said above, we consider the spins as classical vectors, \mathbf{S}_n = \mathbf{m}_n, with the normalization

|\mathbf{m}|^2 = 1.  (2)

Thus the DE Hamiltonian in a single-electron representation can be presented as

H_{nn'} = t_{n-n'} \hat{1} - J\, \mathbf{m}_n \cdot \hat{\boldsymbol{\sigma}}\, \delta_{nn'}.  (3)

We have the problem of an electron scattered by the core spins, the probability of any given core-spin configuration depending upon the energy of the electron subsystem. To solve the problem we will use the Dynamical Mean Field Approximation (DMFA) (see [8,9] and references therein). In this approach, we first calculate the density of states of an electron in a random core-spin configuration, averaged with respect to the random orientations of the core spins, treating the electron scattering in a single-site approximation and considering the probability of any configuration as given. We introduce the Green's function

\hat{G}(E) = (E - H)^{-1}.  (4)

In this approximation the averaged locator is expressed through the local self-energy \hat{\Sigma} by the equation

\hat{G}_{\rm loc}(E) = \hat{g}_0(E - \hat{\Sigma}),  (5)

where

\hat{g}_0(E) = \int \frac{N_0(\epsilon)\, d\epsilon}{E - \epsilon}\, \hat{1}  (6)

is the bare (in the absence of the exchange interaction) locator. The self-energy satisfies the equation

\left\langle \left( -J\, \mathbf{m} \cdot \hat{\boldsymbol{\sigma}} - \hat{\Sigma} \right) \left[ \hat{1} - \hat{G}_{\rm loc} \left( -J\, \mathbf{m} \cdot \hat{\boldsymbol{\sigma}} - \hat{\Sigma} \right) \right]^{-1} \right\rangle = 0,  (7)

where \langle X(\mathbf{m}) \rangle \equiv \int X(\mathbf{m}) P(\mathbf{m})\, d\mathbf{m}, and P(\mathbf{m}) is the probability of a given spin orientation (the one-site probability). The quantities \hat{G} and \hat{\Sigma} are 2 \times 2 matrices in spin space.
In the PM phase P(\mathbf{m}) = {\rm const}, the averaging in Eq. (7) can be performed explicitly, \hat{\Sigma} = \Sigma \hat{1}, \hat{G} = g \hat{1} = g_0(E - \Sigma) \hat{1}, where \hat{1} is the unity matrix, and we obtain

\Sigma = \left( J^2 - \Sigma^2 \right) g_0(E - \Sigma).  (8)

The first approximation of the DMFA, leading to Eq. (7), has a simple physical meaning. We reduce the problem of an electron scattered by many spins, each with the scattering potential -J\, \mathbf{m} \cdot \hat{\boldsymbol{\sigma}}, to the problem of scattering by a single spin with the effective scattering potential -J\, \mathbf{m} \cdot \hat{\boldsymbol{\sigma}} - \hat{\Sigma}, embedded in an effective medium described by the Hamiltonian t_{\mathbf{k}} + \hat{\Sigma}, and, hence, by the locator \hat{G}_{\rm loc}.
The same MF approach leads to the second DMFA approximation, the approximation for the one-site probability P(\mathbf{m}), which allows us to perform the averaging in the FM phase. Consider again a single spin with the effective scattering potential in an effective medium. The change in the number of states of the electron gas due to such a spin is [10,11]

\Delta N(E, \mathbf{m}) = -\frac{1}{\pi}\, {\rm Im} \ln \det \left[ \hat{1} + \left( J\, \mathbf{m} \cdot \hat{\boldsymbol{\sigma}} + \hat{\Sigma} \right) \hat{G}^{+}_{\rm loc} \right],  (9)

where Y^{+} \equiv Y(E + i0). So the change in the thermodynamic potential is [12,13,14]

\Delta\Omega(\mathbf{m}) = -\int dE\, f(E)\, \Delta N(E, \mathbf{m}),  (10)

where f(E) is the Fermi function; the chemical potential is found from the equation

n = \int dE\, f(E)\, N(E),  (11)

where N(E) is the averaged density of states and n is the number of electrons per site. The result for the one-site probability reads

P(\mathbf{m}) \propto \exp \left[ -\beta\, \Delta\Omega(\mathbf{m}) \right].  (12)

Eqs. (7) and (12) are a system of two non-linear (integral) equations for \hat{\Sigma}(E) and P(\mathbf{m}), which one should solve to find the thermodynamic properties of the model. However, in the linear approximation with respect to the macroscopic magnetization M, we can reduce this complicated system to a traditional MF equation for M [15]:

M = L\!\left( \frac{3 T_c M}{T} \right), \qquad L(x) = \coth x - \frac{1}{x}.  (13)

The parameter T_c is formally introduced as the coefficient in the linear term of the expansion of \Delta\Omega(\mathbf{m}) with respect to M (the reason for the notation we have chosen and for the numerical coefficient 3 is clear from the form of Eq. (13)); it is determined by the properties of the system in the paramagnetic phase. A non-trivial solution of the MF equation can exist only for T < T_c; hence T_c is the Curie temperature.
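As a minimal numerical sketch of the reduction just described, the following Python snippet solves the mean-field equation (13) in the Langevin form reconstructed above, M = L(3T_cM/T) with L(x) = coth x - 1/x; the damping factor and iteration count are arbitrary illustrative choices, not taken from the paper.

```python
import numpy as np

def langevin(x):
    # L(x) = coth(x) - 1/x; the small-x branch uses L(x) ~ x/3 to avoid 0/0
    return np.where(np.abs(x) < 1e-4, x / 3.0, 1.0 / np.tanh(x) - 1.0 / x)

def magnetization(t_reduced, n_iter=5000):
    """Solve M = L(3 M / t) for t = T / T_c by damped fixed-point iteration."""
    m = 1.0  # start from the saturated state to land on the nontrivial branch
    for _ in range(n_iter):
        m = 0.5 * m + 0.5 * langevin(3.0 * m / t_reduced)
    return float(m)

for t in (0.2, 0.5, 0.9, 0.99):
    print(f"T/Tc = {t:.2f}  ->  M = {magnetization(t):.4f}")
```

Below T_c the iteration converges to the non-trivial branch M(T) > 0, which vanishes as T approaches T_c from below, in accordance with the statement after Eq. (13).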
III. TC FOR SEMI-CIRCULAR DOS
For simplicity, consider the semi-circular (SC) bare density of states (DOS) N_0(\epsilon), the bandwidth being 2W:

N_0(\epsilon) = \frac{2}{\pi W^2} \sqrt{W^2 - \epsilon^2}, \qquad |\epsilon| \le W.  (14)

For this case

g_0(E) = \frac{E - \sqrt{E^2 - W^2}}{4w},  (15)

where w = W^2/8, and Eqs. (7) and (8) take respectively the form

E = \Sigma + \frac{1}{g} + 2 w g,  (16)

\Sigma = \left( J^2 - \Sigma^2 \right) g.  (17)

Expanding Eq. (17) and then Eq. (9) with respect to M, after straightforward algebra we obtain for T_c an explicit integral expression, Eq. (19). Formally speaking, because the integral in Eq. (19) contains the Fermi function, the critical temperature also enters the r.h.s. of the equation. But in all cases T_c turns out to be much less than the chemical potential, so we can consider the electron gas as degenerate, and Eq. (19) is an explicit formula for the calculation of T_c. Eq. (19) is the main result of our paper. Let us start the analysis of this equation from the limiting case J = \infty. In this case the integral can be calculated explicitly [14], yielding a closed-form expression for T_c in terms of a parameter y, an implicit function of the concentration given by the equation

n = \frac{1}{2} - \frac{1}{\pi} \left( \sin^{-1} y + y \sqrt{1 - y^2} \right).  (20)

The result coincides with that known previously [9].
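A minimal numerical sketch of the paramagnetic-phase self-consistency for the SC DOS is given below, assuming the reconstructed relations Sigma = (J^2 - Sigma^2) g and g = g_0(E - Sigma) with g_0(z) = (z - sqrt(z^2 - W^2))/(4w); the parameter values, broadening eta, and the damped fixed-point scheme are illustrative choices, not the authors'.

```python
import numpy as np

W = 1.0             # half-bandwidth (full bandwidth 2W)
w = W**2 / 8.0
J = 0.5             # Hund exchange coupling

def g0(z):
    # Bare SC locator; the square-root branch is chosen so that g0(z) -> 1/z
    # at large |z| (retarded function for Im z > 0).
    s = np.sqrt(z - W) * np.sqrt(z + W)
    return (z - s) / (4.0 * w)

def sigma_pm(E, eta=1e-3, n_iter=500, mix=0.5):
    """Damped fixed point of Sigma = (J^2 - Sigma^2) * g0(E - Sigma)."""
    z = E + 1j * eta
    sig = 0.0 + 0.0j
    for _ in range(n_iter):
        sig = (1 - mix) * sig + mix * (J**2 - sig**2) * g0(z - sig)
    return sig

# Averaged paramagnetic DOS per spin: N(E) = -(1/pi) Im g, g = g0(E - Sigma)
for E in np.linspace(-1.5 * W, 1.5 * W, 7):
    sig = sigma_pm(E)
    dos = -np.imag(g0(E + 1e-3j - sig)) / np.pi
    print(f"E = {E:+.2f}   N(E) = {dos:.4f}")
```

For J = 0 the output reduces to the bare semicircle N_0(E); switching on J broadens and, for large enough J, splits the band, which is the single-site physics entering the T_c formula.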
For arbitrary exchange the integral can be calculated only numerically, but before we present the results of the calculations we should state that in part of the J/W - n plane Eq. (19) gives T_c < 0. In fact, as in any MF theory of a second-order phase transition, in our calculation of the critical temperature we started from the high-temperature paramagnetic phase and, decreasing the temperature, looked for an instability of the model with respect to the appearance of a small spontaneous magnetic moment, that is, for the appearance of a non-trivial solution of the MF equation. A negative T_c in some part of the J/W - n plane means that the paramagnetic phase there is stable with respect to the appearance of a small spontaneous magnetic moment at any temperature, including T = 0, and excludes ferromagnetism in that region of the plane. The part of the critical surface corresponding to the region of the parameter plane where T_c >= 0 is presented in FIG. 1.
IV. DISCUSSION
Let us finally discuss whether the PM-FM transition observed with decreasing temperature can be preceded by a transition from the PM phase to some magnetic phase other than FM (say, antiferromagnetic). We would like to present some heuristic arguments that this is not the case.
First, consider the case of weak exchange, J << W. In this case Eq. (19) reduces to the simpler form of Eq. (21). In fact, the latter equation is the MF approximation [15] for the RKKY Hamiltonian [7], to which the original Hamiltonian (1) can be reduced in the case J << W. So Eq. (21) involves neither the coherent potential approximation (Eq. (7)) nor the SC density of states. Anyhow, for the SC density of states we use, Eq. (21) gives a ferromagnetic ground state for n < 0.4, which qualitatively agrees with the result of numerical calculations, giving an FM ground state for n < 0.25 for the three principal cubic lattices [16].
To formulate the second argument, let us compare the energies of the PM state and of the saturated FM state, each evaluated at the appropriate Fermi energy E_F. In Fig. 2 we plot simultaneously the curve T_c = 0 and the curve given by the equation E_PM = E_FM. The close vicinity of these two curves supports the belief that the curve T_c = 0 is the quantum critical line (see [17] and references therein) which bounds the ferromagnetic phase. Also, the boundary of the ferromagnetic phase in Fig. 2 agrees with those obtained on the basis of numerical calculations [18] and from qualitative reasoning [13].
The destruction of the ferromagnetic ground state occurs because a finite double exchange between the itinerant electrons and the core spins, unlike an infinite one, by itself generates an effective antiferromagnetic exchange between the core spins (which was absent in the original Hamiltonian).
Because our main result (the equation for the transition temperature) indicates its own limits of validity, we can find the boundaries of the ferromagnetic phase without analyzing what phases lie beyond those boundaries.
Finally, we would like to mention that in the DMFA, as one can easily see from Eq. (7), the density of electron states in the paramagnetic phase does not depend upon the electron concentration. In this case, the derivative of the chemical potential with respect to the number of electrons is just the inverse density of states at the Fermi level (for the degenerate electron gas), and is always positive. Hence, there is no phase separation in the paramagnetic phase.
In conclusion, we explicitly formulated the Dynamical Mean Field Approximation equations for the double-exchange model with classical spins for an arbitrary relation between the Hund exchange and the electron bandwidth. Near the paramagnetic-ferromagnetic transition point, these equations were reduced to a MF equation describing a single spin in an effective field proportional to the macroscopic magnetization. The effective exchange interaction entering the MF equation was found for the semicircular electron density of states. We thus calculated the transition temperature T_c as a function of the Hund exchange interaction and the electron density in the whole parameter plane. The results obtained also allow us to plot the boundaries of the ferromagnetic region on the model phase diagram.
Urokinase-type Plasminogen Activator Receptor (uPAR) Ligation Induces a Raft-localized Integrin Signaling Switch That Mediates the Hypermotile Phenotype of Fibrotic Fibroblasts
Background: Fibroblasts from patients with idiopathic pulmonary fibrosis (IPF) overexpress the urokinase-type plasminogen activator receptor (uPAR) and are hypermotile. Results: uPAR ligation increases fibroblast motility by localizing α5β1 integrin-Fyn signaling complexes to lipid rafts. Conclusion: The hypermotile phenotype of IPF fibroblasts is due to lipid raft-localized uPAR-integrin-Fyn signaling complexes. Significance: These unique lipid raft signals may be therapeutic targets for IPF. The urokinase-type plasminogen activator receptor (uPAR) is a glycosylphosphatidylinositol-linked membrane protein with no cytosolic domain that localizes to lipid raft microdomains. Our laboratory and others have documented that lung fibroblasts from patients with idiopathic pulmonary fibrosis (IPF) exhibit a hypermotile phenotype. This study was undertaken to elucidate the molecular mechanism whereby uPAR ligation with its cognate ligand, urokinase, induces a motile phenotype in human lung fibroblasts. We found that uPAR ligation with the urokinase receptor binding domain (amino-terminal fragment) leads to enhanced migration of fibroblasts on fibronectin in a protease-independent, lipid raft-dependent manner. Ligation of uPAR with the amino-terminal fragment recruited α5β1 integrin and the acylated form of the Src family kinase, Fyn, to lipid rafts. The biological consequences of this translocation were an increase in fibroblast motility and a switch of the integrin-initiated signal pathway for migration away from the lipid raft-independent focal adhesion kinase pathway and toward a lipid raft-dependent caveolin-Fyn-Shc pathway. Furthermore, an integrin homologous peptide as well as an antibody that competes with β1 for uPAR binding have the ability to block this effect. In addition, its relative insensitivity to cholesterol depletion suggests that the interactions of α5β1 integrin and uPAR drive the translocation of α5β1 integrin-acylated Fyn signaling complexes into lipid rafts upon uPAR ligation through protein-protein interactions. This signal switch is a novel pathway leading to the hypermotile phenotype of IPF patient-derived fibroblasts, seen with uPAR ligation. This uPAR dependent, fibrotic matrix-selective, and profibrotic fibroblast phenotype may be amenable to targeted therapeutics designed to ameliorate IPF.
The urokinase-type plasminogen activator receptor (uPAR) is an external cell surface protein receptor that links to the plasma membrane by its glycosylphosphatidylinositol (GPI) side chain (1). GPI linkages, as well as protein acylation, target proteins to lipid rafts, including specific Src family kinases (SFKs) (2-7). Lipid rafts are microdomains of the plasma membrane that are rich in cholesterol and sphingolipids (4,6,7). They exhibit unique biophysical properties and unique lipid and protein organization and are now thought to act as platforms for cell signaling (2,4-8). However, the consequences of the raft localization of uPAR for cell physiology, cell signaling, and its interaction with integrins have yet to be fully elucidated.
uPAR binds its cognate ligand, urokinase-type plasminogen activator (uPA), with high affinity (K_d = 1 nM) and, upon doing so, activates several pathways (e.g. MAPK, JAK/STAT, and focal adhesion kinase (FAK)) with a host of biological responses, including adhesion, spreading, and migration, in a proteolytically independent manner (1,9-11). Because uPAR lacks a cytoplasmic domain, the intracellular signal transduction of uPAR is effected through its association with other cell surface receptors, including the epidermal growth factor receptor, G protein-coupled receptors, and integrins (1,12). However, the regulatory triggers for uPAR signaling are not fully understood.
Previous work from our laboratory and others has shown that uPAR interacts with multiple integrins to influence cell attachment, spreading, and migration, in part through MAPK (10,13-19). Importantly, a complete and detailed understanding of the intracellular signaling pathway that mediates these physiologic effects, the role of uPAR ligation in inducing these effects, the location mapping of individual components of the intracellular pathway, and the role of uPAR ligation in cells that express native endogenous levels of uPAR and integrins have yet to be reported. Our current work addresses these questions by describing a novel uPAR ligation-dependent signaling switch.
Fibroblasts contribute to the pathological tissue scarring of the skin, heart, kidneys, and lung through multiple actions. These include their capacity to migrate into the damaged area, synthesize extracellular matrix, and remodel the tissue (9,20,21). Several studies have reported that lung fibroblasts derived from patients with idiopathic pulmonary fibrosis (IPF), a fatal scarring disease of the lung, have enhanced motility compared with their normal counterparts and that pathologic collections of fibroblasts can determine prognosis in IPF (22)(23)(24)(25)(26)(27)(28). However, the mechanisms that drive this hypermigratory fibroblast phenotype have not been fully elucidated.
Prior work implicates uPAR in several important wound healing functions, such as proliferation, adhesion, differentiation, and migration (1,29,30). We and others have shown that fibroblasts derived from patients with fibrotic lesions exhibit up-regulation of uPAR, and we have reported that uPAR-integrin interactions mediate selective fibroblast adherence to fibrotic lung tissue (10,24). We therefore sought to determine the molecular mechanism whereby uPAR mediates the pathologically hypermigratory phenotype of fibrotic lung fibroblasts. Our novel signaling switch described herein drives the hypermigratory phenotype of fibrotic lung fibroblasts. These observations probably have implications for fibroproliferative diseases of the lung, skin, kidney, and heart as well as cancer cell invasion and metastasis (29-34).
EXPERIMENTAL PROCEDURES
Materials-Normal human lung fibroblasts (HLF, 19Lu) were purchased from ATCC (CCl-210). Primary isolates of HLF from IPF patients and normal controls were kindly provided by Dr. Patricia Sime, with the approval of the University of Rochester Institutional Review Board. Plasma from IPF (n = 25) and chronic obstructive pulmonary disease (n = 10) patients was provided by the Lung Tissue Research Consortium and supported by NHLBI, National Institutes of Health. Plasma from age- and gender-matched normal controls (n = 30) was generously provided by Dr. Stanley L. Hazen (Cleveland Clinic). Healthy control subjects gave written informed consent approved by the Cleveland Clinic Institutional Review Board. All heparinized plasma samples (both from the Lung Tissue Research Consortium and from Dr. Hazen) were prepared identically and frozen in aliquots at -80°C.
Human fibronectin (FN; from plasma) was from Roche Applied Science. HRP-conjugated secondary antibodies were from Jackson Immunoresearch. Fluorochrome-conjugated secondary antibodies as well as the mouse mAb anti-human transferrin receptor were purchased from Invitrogen. The amino-terminal fragment (ATF) of human urokinase was from Molecular Innovations, whereas single-chain human urokinase-type plasminogen activator (scuPA) was purchased from American Diagnostica. The SFK inhibitor, PP2, and its inactive analog, PP3, were from Calbiochem. All of the siRNAs were purchased from Dharmacon; the siLentFect lipid transfection reagent was from Bio-Rad; and the integrin homologous peptide, alpha-325 (PRHRHMGAVFLLSQEAG), and the scrambled control peptide, S-325 (HQLPGAHRGVEARFSML), were purchased from Anaspec (10,35). Antibodies 1A8 (non-competitive control), 3C6 (which competes with alpha5beta1 integrin for uPAR binding), and 2G10 (which competes with uPA for uPAR binding) were synthesized as described (36). Mouse mAb anti-human flotillin-1 and caveolin-1 were from BD Biosciences, rabbit polyclonal anti-human Fyn and Lyn were from Santa Cruz Biotechnology, Inc., rabbit polyclonal anti-human phospho-Fyn (Tyr-530, inactive) was from Novus Biologicals, rabbit mAb anti-human phospho-Fyn (Tyr-416, active) was from Cell Signaling Technology, and all other antibodies were purchased from Millipore, as reported previously (10). All other reagents were purchased from Sigma.
Preparation of Lipid Rafts-HLF were plated on FN, serum-starved, and treated with or without the indicated agonists or inhibitors in 1% BSA, serum-free medium. Lipid rafts were prepared by sucrose density centrifugation of cold detergent cell lysates as described (37). The buoyant fractions (lipid raft) and non-buoyant fractions (non-raft) were pooled and analyzed by Western blot or ELISA as described below. The separation method was validated as in Fig. 5A.
ELISA and Western Blot Analysis-Same volume equivalents of lipid raft or non-raft fractions were used in ELISA (R&D Systems) and Western blot analysis, and immunoprecipitates from caveolin-1 or a control antibody (granulocyte colony-stimulating factor) were Western blotted for the indicated proteins. Solubilized proteins were separated by SDS-PAGE and then immunoblotted on Immobilon-P membranes as described (10,38). Specific protein bands were detected using ECL Western blot reagents and directly quantified using UVP with VisionWorksLS software (Upland, CA).
Co-localization of uPAR and Lipid Rafts by Immunofluorescence-Lipid rafts were localized by staining for the lipid GM1 with cholera toxin as suggested by the manufacturer (Invitrogen). Staining for uPAR was performed as described previously (10). Co-localization of uPAR and lipid rafts at the cell perimeter was determined on a pixel-by-pixel basis using Compix software analysis of digital x40 photomicrographs (>50 cells/condition).
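The co-localization analysis above was done with Compix software on a pixel-by-pixel basis; as a generic illustration of that idea (not the vendor's algorithm), the sketch below computes the fraction of uPAR-positive pixels that also fall in the GM1 (raft) mask from two thresholded channel images, with the function name, thresholds, and toy data all hypothetical.

```python
import numpy as np

def colocalization_fraction(ch_gm1, ch_upar, thresh_gm1, thresh_upar):
    """Fraction of uPAR-positive pixels that are also GM1 (raft)-positive.

    ch_gm1 and ch_upar are 2-D intensity arrays for the two fluorescence
    channels; the thresholds separate signal from background.
    """
    raft_mask = ch_gm1 > thresh_gm1
    upar_mask = ch_upar > thresh_upar
    if upar_mask.sum() == 0:
        return 0.0
    return float(np.logical_and(raft_mask, upar_mask).sum() / upar_mask.sum())

# Toy example: a partially correlated pair of 256 x 256 "images"
rng = np.random.default_rng(0)
gm1 = rng.random((256, 256))
upar = 0.5 * gm1 + 0.5 * rng.random((256, 256))
print(f"co-localized fraction: {colocalization_fraction(gm1, upar, 0.5, 0.5):.3f}")
```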
Filipin Staining-HLF were plated on FN, serum-starved, and incubated in 1% BSA, serum-free medium with or without methyl-beta-cyclodextrin (CD; 10 mM) for 30 min, followed by treatment with or without CD and with or without ATF (10 nM) for 60 min. Cells were stained with filipin (50 ug/ml, 30 min, room temperature) as described (39). Fluorescence intensity pseudocolors were added with ImageJ. The intensity ratio between the cell edge and the perinuclear area was determined for >40 cells/condition.
Cell Morphology Assay-HLF were plated on FN and treated as described under "Filipin Staining." The percentage of cells with protrusive structures was determined by direct cell counting (>200 cells/condition) of fluorescently labeled cells (phalloidin or PKH-26) at x20 original magnification.
Fibroblast Bead Attachment Assay-FN-coated beads (5.0 um, Thermo Scientific) were allowed to bind to serum-starved (0.4% SCM, 24 h) adherent HLF monolayers for 20 min at 37°C in attachment assay buffer containing calcium, magnesium, and manganese, in triplicate, as described (40). Bead number per surface area of cells was determined by counting beads and manually tracing cell boundaries in low-power photomicrographs using ImageJ software.
In Vitro Migration Assay-In vitro migration assays on tissue culture plates were performed as described (10). Wounded cell monolayers were incubated in 1% BSA, serum-free medium with or without the indicated agonists or inhibitors. Digital pictures of the wounds were taken at 0.5 h (time 0) and 24 h later, and areas devoid of cells ("wounds") were measured using the ImageJ software.
In Vitro Migration Assay on Mouse Lung Tissue-All animal protocols were performed as approved by the Cleveland Clinic institutional animal care and use committee and using methods in the guidelines for the humane care of animals of the American Physiological Society. Lungs from female C57Bl/6 mice (18-20 g) that had received intratracheal instillation of 4 units/kg bleomycin, which is used to induce lung fibrosis in mice, 2 weeks prior were inflated with OCT (Sakura Finetek, Torrance, CA) as described (10,41,42). PKH-labeled normal HLF were allowed to attach to preblocked, 10-um sections containing areas of normal and fibrotic lung, as published (10). After washing off unattached cells, the sections/cells were observed in 1% BSA, serum-free medium (5% CO2 at 37°C), with or without the indicated additives, on an inverted (Leica DM IRBE) microscope. Time-lapse video microscopy was performed, and migration velocity was analyzed using HC Image software (Hamamatsu, Bridgewater, NJ).
Statistical Analysis-All data are means +/- S.E. unless otherwise indicated. Continuous variables from more than two groups were compared by means of an analysis of variance with a post hoc analysis (Dunnett's test or Student-Newman-Keuls). Significance was accepted at the p <= 0.05 level.
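As an illustration of this workflow (one-way ANOVA followed by Dunnett's post hoc comparison of each treatment against a shared control), the sketch below uses SciPy; it assumes SciPy >= 1.11 for scipy.stats.dunnett, and the group values are simulated, not study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(100, 15, 12)   # e.g. basal migration, arbitrary units
treat_a = rng.normal(175, 15, 12)   # e.g. +uPA ligation
treat_b = rng.normal(105, 15, 12)   # e.g. +uPA ligation + blocking antibody

# Omnibus one-way ANOVA across all groups
f_stat, p_anova = stats.f_oneway(control, treat_a, treat_b)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3g}")

# Dunnett's test: each treatment versus the shared control
res = stats.dunnett(treat_a, treat_b, control=control)
for name, p in zip(["treat_a", "treat_b"], res.pvalue):
    verdict = "significant" if p <= 0.05 else "n.s."
    print(f"{name} vs control: p = {p:.3g} ({verdict})")
```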
RESULTS
uPAR Plasma Levels Are Elevated in IPF Patients and Correlate with Disease Severity-In order to evaluate whether uPAR can be physiologically linked to IPF, we measured plasma uPAR levels in a small cohort of highly characterized patients with IPF from a national database. All plasma samples (controls and diseased) were heparinized and processed in an identical manner. IPF patients had higher plasma uPAR levels (2-fold; p < 0.01) compared with age- and gender-matched healthy controls or those with chronic obstructive pulmonary disease (COPD; Fig. 1A). Furthermore, among those with IPF, the extent of the plasma uPAR elevation correlated positively with disease severity, as reflected by a lower diffusion capacity (diffusion capacity for carbon monoxide (DLCO), a measure of the diffusion of gas across the capillary-alveolar tissue) (Fig. 1B; Pearson correlation coefficient r = -0.534 for uPAR versus DLCO). Taken together, these data, if validated prospectively in a larger cohort, suggest that plasma uPAR values may be a useful indicator of disease state.
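For readers who want to reproduce this kind of analysis, a minimal Pearson-correlation sketch follows; the paired plasma uPAR/DLCO values are invented for illustration and do not reproduce the reported r = -0.534.

```python
import numpy as np
from scipy import stats

# Hypothetical paired measurements (units assumed: ng/ml and % predicted)
plasma_upar = np.array([2.1, 2.8, 3.5, 4.0, 4.6, 5.2, 5.9])
dlco_pct    = np.array([78.0, 70.0, 61.0, 55.0, 48.0, 41.0, 35.0])

r, p = stats.pearsonr(plasma_upar, dlco_pct)
print(f"Pearson r = {r:.3f} (p = {p:.3g})")
# A negative r means higher plasma uPAR accompanies a lower diffusion
# capacity, i.e. more severe disease, as in the reported correlation.
```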
The Hypermigratory Phenotype in IPF Patient-derived Fibroblasts Is Mediated by uPAR Ligation-We and others have noted that primary isolates of lung fibroblasts from IPF patients overexpress uPAR (10,24). The physiological consequences of this observation for migration have not been elucidated. Herein, we demonstrate that primary isolates of IPF fibroblasts are hyperresponsive to the promigratory effects of uPAR ligation, relative to fibroblasts isolated from non-fibrotic control lungs (Fig. 1C). Throughout this work, uPAR ligation occurs either with its cognate ligand, proteolytically inactive uPA, or with the uPA ligand binding domain (ATF; 10 nM). We extend these observations to show that the hyperresponsive phenotype in IPF fibroblasts is concordant with the significantly higher uPAR expression levels in the fibrotic lung-derived cells (over 31% higher by ELISA; data not shown). Importantly, knockdown of uPAR using siRNA (more than 80% knockdown by ELISA; data not shown) demonstrates that both the hyperresponsive phenotype in IPF fibroblasts and the uPA-induced migration are overwhelmingly dependent on uPAR (Fig. 1C).
uPAR Ligation Induces a Hypermigratory Response in Fibroblasts Plated on Fibrotic Lung Sections-Given the emerging understanding of the significance of cell migration in relation to the biophysical and biochemical composition of the substrate, we have developed a model system to test fibroblast migration on a physiologically relevant substrate. The effect of uPAR ligation on fibroblast migration on unfixed, normal, and fibrotic lung tissue sections from bleomycin-injured mice was assessed. The migration response to uPAR ligation was greater in fibroblasts interacting with fibrotic areas of lung tissue, as compared with that on normal lung tissue (a 40% increase, p < 0.01; Fig. 1D). These data demonstrate that uPAR ligation plays a key role in fibroblast migration, selectively on fibrotic lesional matrix, adding additional physiological relevance to uPAR ligation in lung fibrosis.
Ligation of uPAR with uPA Induces a Motile Phenotype in Human Lung Fibroblasts-We have previously shown that uPAR interacts with multiple integrins on the surface of non-transformed human lung fibroblasts and, in doing so, up-regulates the integrin functions of adhesion and migration (10). We now show that ligation of uPAR induces a motile fibroblast phenotype. This was quantified by a greater than 3-fold increase in the percentage of cells with protrusions (3.6 +/- 0.12-fold; p < 0.05), a decrease in actin stress fiber density (by 58 +/- 8%; p < 0.05), an increase in integrin alpha5beta1-dependent (10) monolayer motility on FN (75 +/- 15%; p < 0.05), and a redistribution of uPAR to the leading edge of motile cells (80 +/- 22%; p < 0.05) (Fig. 2, A and B). In order to confirm the strict uPAR dependence of the motile phenotype, knockdown of uPAR expression (by 90%, as quantified via ELISA) with uPAR-directed siRNA (but not control siRNA) led to an almost complete abrogation of the uPAR ligation-dependent enhancing effect on monolayer motility and also reduced basal monolayer motility in the absence of uPAR ligation (by 40%, p < 0.05; Fig. 2C). In summary, uPAR ligation induces a motile phenotype in lung fibroblasts.
Ligation of uPAR Enhances uPAR Interactions with alpha5beta1 Integrin-initiated Activation and Migration Signaling through the Src Family Kinase Fyn-We have previously shown basal fibroblast motility to be largely dependent on alpha5beta1 (10). Binding of FN-coated beads to immobilized fibroblasts was used as a more direct and proximate measure of alpha5beta1 activation under conditions with and without uPAR ligation. Ligation of uPAR with ATF enhances alpha5beta1 activation (2-fold; p < 0.05) in a manner that is completely inhibitable by alpha5beta1 function-blocking antibodies or alpha5 siRNA (Fig. 3, A and B). (Integrin alpha5 pairs only with beta1, so alpha5 siRNA also blocks alpha5beta1 function.) Concordantly, siRNA to alpha5 completely abrogates the uPAR ligation-induced enhancement of migration on FN, as well as that seen under unligated uPAR conditions (Fig. 3, C and D). We and others have previously found that a peptide homologous to the extracellular domain of the alpha integrin chain (alpha325 peptide) disrupts uPAR-alpha integrin interactions either by altering the conformation of the integrin or by competing with a uPAR binding site in both solid-phase assays and on the cell surface (10,35). Disrupting uPAR-alpha5beta1 interactions with this promiscuously acting peptide (10) completely abrogates the uPAR ligation-enhancing effect on motility in a dose-dependent manner while having minimal effect on basal motility (Fig. 3E). In contrast, a control, scrambled peptide had no effect.
To further substantiate that the interaction between alpha5beta1 integrin and uPAR is crucial to the uPAR ligation enhancement of migration, we utilized a second complementary technique.
Namely, we compared the effect of a previously characterized antibody that specifically competes with beta1 integrin for uPAR binding with that of control antibodies that only block uPA binding to uPAR or a non-blocking control antibody that has no effect (36). We found that the increase in migration upon uPAR ligation was selectively inhibited by the uPAR-beta1-blocking antibody (3C6), whereas the non-blocking control (1A8) had no effect (Fig. 3F). The uPAR-beta1-blocking antibody had no effect on migration in the absence of uPAR ligation (basal migration). As expected, the uPAR-uPA interaction-blocking antibody (2G10) abrogated the uPAR ligation-induced migration. These data clearly demonstrate the significant contribution of uPAR interactions with alpha5beta1 to uPAR-induced migration.

FIGURE 3. A and B, FN-coated beads were applied to fibroblasts that were pretreated with or without ATF (10 nM, 30 min) and with or without alpha5 function-blocking antibody (alpha5 Ab, 10 ug/ml), and the number of beads/cell area was counted. A-D, *, p < 0.05 denotes loss of uPA-induced attachment or hypermotility versus increase under control conditions (IgG, scrambled siRNA). +, p < 0.05 denotes comparison among basal (-uPA) conditions. A, representative photomicrographs. B, quantification of bead attachment. C, fibroblast monolayer migration with or without alpha5 or scrambled siRNA with or without ATF (10 nM). D, validation of alpha5 siRNA knockdown by Western blot. E, fibroblast monolayer migration with or without alpha325 or scrambled peptide with or without 10 nM ATF. Conditions were as indicated. *, p < 0.05 denotes loss of uPA-induced hypermotility with alpha325 peptide compared with scrambled peptide under +uPA (10 nM ATF) conditions. F, fibroblast monolayer migration with or without antibodies (10 ug/ml each) 1A8 (control antibody), 3C6 (which blocks uPAR-beta1 interactions), or 2G10 (which blocks uPAR-uPA interactions) with or without 10 nM ATF. *, p < 0.05 denotes loss of uPA-induced hypermotility with 3C6 or 2G10 antibody versus 1A8 under +uPA (10 nM ATF) conditions. Error bars, S.E.
Because we and others have shown the importance of SFKs to migration signaling downstream of integrins in lung fibroblasts, we hypothesized that they may also be involved in the uPAR ligation enhancement of migration (25,41). The SFK inhibitor PP2 (100 nM) selectively blocked (by 85%, p < 0.05) the uPAR ligation-induced enhancement of migration, whereas the inactive analog (PP3; 100 nM) had no effect (Fig. 4A). Down-regulation of the three SFKs expressed in our cells (44) individually identified Fyn as the key SFK mediating the increase in migration upon uPAR ligation (Fig. 4, B and C). In contrast, c-Src was the key SFK mediating migration under basal conditions while still supporting uPAR ligation-induced migration (Fig. 4, B and C). In addition, the uPAR ligation-induced increase in migration was also abrogated in IPF fibroblasts that were treated with Fyn siRNA (data not shown). Together, these data identify Fyn as the key SFK that mediates uPAR ligation-induced migration.
uPAR Ligation Rescues the Migration Defect Due to Lipid Raft Disruption-As documented above, the uPAR ligation response in migration was largely dependent on Fyn, alpha5beta1, and uPAR. Because uPAR is a GPI-linked protein, we determined the effect of lipid raft microdomain disruption by depletion of cholesterol on the uPAR ligation-dependent signal. Lipid raft disruption by CD was validated by the loss of caveolin-1 in buoyant lipid raft fractions (fractions 2-6) (Fig. 5A), whereas validation of cholesterol depletion by CD is shown by the loss of filipin staining in the cell plasma membrane (Fig. 5, B and C). In the absence of uPAR ligation, disruption of lipid rafts inhibited alpha5beta1 integrin activation (FN-coated bead binding, by >90%) and monolayer migration (by 50%) (Fig. 5, D and E). A general cell-toxic effect of CD is unlikely to explain these findings because lactate dehydrogenase release into the medium was not detected. Surprisingly, uPAR ligation was able to completely rescue the inhibitory effect of lipid raft disruption on alpha5beta1 activation and partially rescue its effects on migration (Fig. 5, D and E). Cholesterol staining with filipin (Fig. 5, B and C) validates that the rescue effects of uPAR ligation are independent of cholesterol in the plasma membrane. These data demonstrate that uPAR ligation can restore alpha5beta1 function, even under conditions of lipid raft disruption induced by cholesterol depletion.
uPAR Ligation Induces the Translocation of a "Migration Signaling Complex" Consisting of uPAR, alpha5beta1 Integrin, and Fyn into Lipid Rafts-In order to understand the mechanism whereby uPAR ligation restores alpha5beta1 function and signaling in a cholesterol-independent manner, we first determined the biochemical and spatial relationships of the key signaling components before and after uPAR ligation and/or cholesterol depletion. Approximately 30-50% of the total cellular uPAR was found to be in lipid rafts under basal conditions by biochemical and spatial localization techniques (data not shown). The addition of CD disrupted the co-localization of uPAR with lipid rafts, as measured by the raft-localized lipid, GM1. However, the addition of CD followed by uPAR ligation restored the co-localization of uPAR with lipid rafts (Fig. 6, A and B). Furthermore, uPAR ligation increased the absolute
amount of uPAR in lipid rafts, even under conditions of reduced cholesterol (+CD, +uPA conditions; Fig. 6C), suggesting that protein-protein or protein-lipid interactions can supersede cholesterol depletion in recruiting GPI-linked proteins to lipid rafts.
In contrast to reports in other cells, integrin alpha5beta1 was all but excluded from rafts under basal conditions. Surprisingly, ligation of uPAR induces a specific translocation of alpha5beta1 integrin (but not alphaV or alpha3 integrins; data not shown) into the raft microdomains, and, as above, alpha5beta1 recruitment can also be partially restored in rafts by uPAR ligation, even after cholesterol depletion (Fig. 7, A and B). A similar recruitment effect of uPAR ligation was noted for Fyn, caveolin-1, and flotillin, but not Lyn (Fig. 7, A and B). In addition, only the active phosphorylated Fyn (p-Fyn, Tyr(P)-416) was recruited to lipid rafts upon uPAR ligation, whereas the inactive phosphorylated Fyn (Tyr(P)-530) was not (Fig. 7, C and D). Taken together, these data demonstrate that uPAR ligation recruits uPAR itself, Fyn, caveolin-1, and alpha5beta1 integrin into lipid rafts, thereby co-localizing the key migration signaling intermediates, even under cholesterol-depleted conditions.
alpha5beta1 Is Recruited to Lipid Rafts through Interactions with uPAR-Because uPAR is a GPI-linked protein and Fyn is a doubly acylated protein, these would be expected to favor lipid raft microdomains. However, because alpha5beta1 integrin is a non-acylated, transmembrane protein, we examined whether alpha5beta1 integrin is recruited to lipid rafts through specific and distinct protein-protein interactions. First, the disruptive effects of cholesterol depletion and the restorative effects of uPAR ligation on the lipid raft-associated proteins, flotillin and caveolin, paralleled those on the GPI-linked uPAR, acylated Fyn, and GM1. These findings suggest that uPAR ligation recruits general raft component proteins and lipids (Figs. 6 and 7 (A and B)). Second, only the caveolin-1-associated Fyn and alpha5beta1 integrin were enhanced in the lipid rafts upon uPAR ligation (Fig. 7, E and F). Third, the integrin homologous peptide (alpha325) selectively blocks the recruitment of alpha5beta1 and Fyn (but not flotillin, caveolin-1, or uPAR) into lipid rafts upon uPAR ligation (Fig. 8, A-C).

FIGURE 5. uPAR ligation rescues the integrin activation and migration defect upon lipid raft disruption by CD. Lipid rafts were disrupted by cholesterol depletion using treatment with or without CD (10 mM, 30 min) followed by treatment with or without CD and with or without ATF (10 nM, 60 min). A, validation of the lipid raft isolation procedure. Cells were treated with or without CD followed by Western blot for caveolin-1 (raft marker) or transferrin receptor (TsfR) (non-raft marker). B and C, filipin staining of cholesterol. B, pseudocolored photomicrographs of filipin staining (red/yellow, high; blue/green, low). C, quantification of the plasma membrane/perinuclear area ratio of filipin staining. *, p < 0.05 comparing conditions with or without CD. D, effect of CD on the integrin activation/FN-coated bead binding assay as above. +, p < 0.05 comparing values with or without CD under basal (-uPA) conditions. E, monolayer migration assay as above. +, p < 0.05 comparing values with or without CD under basal (-uPA) conditions. Error bars, S.E.
This suggests that raft-localized uPAR recruits alpha5beta1 integrin and Fyn in a ligation-dependent manner.
uPAR Ligation Induces a Raft-localized Migration Signaling Switch to an Acylated Fyn-dependent Pathway-Integrins are known to transmit intracellular signals through their beta-cytoplasmic tail via Src-FAK-MAPK or via their alpha subunit through interactions with caveolin-Fyn-Shc (45). We demonstrate that uPAR ligation-induced migration predominantly signals via the caveolin-Fyn-Shc pathway (Fig. 9A). uPAR ligation-induced migration was selectively abrogated upon knockdown of caveolin-1, Fyn, and Shc (Fig. 9, A and B). In contrast, there was no effect of knocking down c-Src (Fig. 4B) or FAK (Fig. 9A) on uPAR ligation-induced migration. As expected, down-regulation of FAK, in the absence of uPAR ligation, attenuates basal migration (41,42).
We next aimed to determine the importance of lipid raft-localized (i.e. doubly acylated) Fyn to uPAR ligation-induced migration. Depletion of the palmitoylated form of Fyn using 2-bromopalmitate (46) (2-BP; 50 uM) selectively blocks the recruitment of Fyn but not of uPAR, caveolin-1, or alpha5beta1 integrin (Fig. 10, A-C). At the functional level, both depalmitoylation (with 2-BP) and demyristoylation (with 2-hydroxymyristate (2-OHmyr); 50 uM) of Fyn block the uPAR ligation-induced migration enhancement, whereas the control treatment (palmitic acid; 50 uM) shows no effect (Fig. 10C). Taken together, these data indicate that the recruitment of Fyn to lipid rafts upon uPAR ligation requires acylation of Fyn and that raft-associated, acylated Fyn specifically initiates downstream migration signaling under these conditions. Furthermore, these data show that Fyn is not the primary driver of alpha5beta1, caveolin, or uPAR to the rafts upon uPAR ligation.
DISCUSSION
Our study, for the first time, demonstrates that uPAR ligation with uPA at physiological concentrations induces a motile phenotype in fibroblasts through a novel integrin signaling switch mechanism (see proposed model in Fig. 11). This was demonstrable using fibroblasts migrating on fibronectin and occurred in a protease-independent but lipid raft-dependent manner. This switching function is dependent on uPAR-integrin coupling and results in a unique migration-inducing signal that localizes to lipid raft microdomains. This raft-constrained migration signal is dependent on integrin translocation into lipid rafts and acylation of Fyn and switches to an alpha5beta1 integrin signaling pathway through caveolin-Fyn-Shc. Furthermore, the hypermotile phenotype, long noted in primary isolates of IPF patient-derived fibroblasts, is dependent on this uPAR ligation-generated Fyn signal, and uPAR ligation selectively enhances migration of fibroblasts on fibrotic lung areas. These data suggest that targeting uPAR or Fyn might lead to selective and novel therapies targeted to cell migration in the setting of uPAR ligation (i.e. cancer, IPF, angiogenesis, atherosclerosis) (1,47,48).
Prior work has demonstrated that alpha5beta1 and uPAR co-immunoprecipitate in a uPA-dependent manner in a cell-free system and that chemotaxis to uPA is dependent on integrins in uPAR-overexpressing CHO cells (19). Furthermore, uPAR, alpha5beta1, and caveolin have been shown to interact but, upon doing so, inhibit alpha5beta1 function (17,18). Our work builds on these concepts by demonstrating a uPAR ligation-induced, integrin signaling switching mechanism that enhances integrin function. Further, our data indicate that this signaling switch is a consequence of the lipid raft localization of uPAR-alpha5beta1 complexes. Furthermore, these effects were both present in untransformed human lung fibroblasts at their endogenous, low-level uPAR expression levels and were up-regulated in a functional manner in diseased fibroblasts. In addition, we observed physiological effects of uPAR ligation on migration in physiologically relevant matrices using physiologically relevant concentrations of non-proteolytically active uPA (ATF or the uPAR binding region). Our laboratory and others have shown that uPAR can interact with other integrins to influence cell behavior (10,13-16,18). This may extend the significance of our findings to the function of other integrins and potentially other diseases. However, to our knowledge, a uPAR ligation-dependent signaling pathway switch has not been described previously.

FIGURE 6. uPAR ligation restores uPAR localization to lipid rafts after disruption. Lipid rafts were disrupted with CD as above followed by treatment with or without scuPA (A and B) or ATF (C; 10 nM, 60 min). +, p < 0.05 comparing values with or without CD under basal (-uPA) conditions. A, immunofluorescence micrographs. Lipid rafts were stained with cholera toxin (GM1; green) and uPAR (red). The arrows point to areas of lipid raft/uPAR co-localization. B, quantification of the co-localization of lipid rafts and uPAR in the plasma membrane. C, uPAR ELISA from lipid raft fractions. Error bars, S.E.
Our data indicate that uPAR-alpha5beta1 interactions are the driving force for the translocation of the alternate caveolin-Fyn-Shc signaling pathway (see proposed model in Fig. 11). The alpha325 peptide, which disrupts uPAR-integrin interactions, not only abrogated the uPAR ligation-enhancing effect on migration but also selectively blocked the translocation of both alpha5beta1 and Fyn (but not flotillin, caveolin-1, or uPAR) into lipid rafts upon uPAR ligation. However, deacylation of Fyn blocked only the recruitment of Fyn to lipid rafts upon uPAR ligation, with no effects on alpha5beta1, caveolin-1, and uPAR. These data suggest that alpha5beta1 is upstream and drives Fyn translocation into lipid rafts upon uPAR ligation.
In our study, less than 1% of the alpha5beta1 was located in lipid rafts under basal conditions, and this value increased to just 2.4% upon uPAR ligation. It is surprising that such a small increase in lipid raft-associated alpha5beta1 has a dominant effect on uPAR ligation-induced signaling.
Our data demonstrate that uPAR ligation induces a raft with a unique molecular composition, enriched in alpha5beta1 integrin and caveolin-Fyn-Shc. Furthermore, we show that uPAR ligation trumps cholesterol depletion in supporting raft-localized signaling/assembly and downstream physiological effects. Prior work has shown similar cholesterol-independent effects through cross-linking of raft proteins by antibodies or of raft lipids with cholera toxin (50,51). We speculate that the binding of ATF influences protein-protein interactions (alpha5beta1 and uPAR), much like antibody cross-linking. The alpha integrin peptide results (Fig. 8) suggest that uPAR-alpha5beta1 interactions are critical protein-protein interactions for the recruitment of alpha5beta1 integrin and Fyn to rafts. Prior work suggests that uPAR directly binds alpha5beta1, thereby inducing a conformational change in the integrin extracellular domain, with distinct physiological consequences (49,52). We speculate that this conformational change might result in an integrin with increased avidity for rafts. However, recruitment of other raft-localized proteins (i.e. caveolin and flotillin), although increased by uPAR ligation, is less dependent on uPAR-integrin interactions, suggesting that a second distinct mechanism is operative.
Our data suggest that uPAR expression and function may be important to the pathogenesis of IPF. We and others have shown that primary fibroblasts from IPF patients overexpress uPAR as compared with those from normal controls (10,24). Also, bronchoalveolar lavage fluid from patients with rapidly progressing IPF has been reported to induce a higher fibroblast migration rate than that from controls or from slower progressors (26). Although several putative mediators/modulators of this hypermotile/invasive phenotype have been identified, including PGE2, LPA, Thy-1, and hyaluronan (20,28,53-56), we have shown, for the first time, that IPF fibroblasts are hyperresponsive to the promigratory effects of uPAR ligation, as compared with those from normal controls. The response of the primary fibroblast isolates from normal controls to uPAR ligation is identical to that of the human lung fibroblasts (19Lu) described throughout this work. Furthermore, this response is dependent on uPAR signaling through Fyn. This finding indicates a switch from the predominant non-raft, integrin-mediated migration signal (i.e. to PDGF) that we have characterized through FAK in the absence of uPAR ligation (25,41,42) to a raft-dominant, hypermigratory signal through caveolin-Fyn-Shc when uPAR is ligated with uPA. Because the down-regulation of Fyn with siRNA only significantly affected the migration of uPA-ligated cells and not those that were untreated, it is possible that selective targeting of these uPA-ligated cells with a Fyn inhibitor could be a potential therapeutic inhibitor of fibrosis.

FIGURE 11. Proposed integrin signaling switch model. Under unligated uPAR conditions, migration is mainly initiated by non-raft (pink plasma membrane) signals through FAK-Src (yellow) and requires both alpha5beta1 integrin and uPAR. In contrast, uPAR ligation enhances motility resulting from a recruitment of uPAR-alpha5beta1 integrin complexes (solid black lines), along with Cav-1 and Fyn, to lipid rafts (blue plasma membrane), and a corresponding increase in the dependence of migration on the lipid raft-localized Cav-1-Fyn-Shc (orange) pathway. The raft-localized Cav-1-Fyn-Shc pathway activation induces a hypermigratory phenotype.
Our observation regarding the selectivity of the hypermigratory effect of uPAR ligation to fibroblasts interacting with fibrotic lung matrix further supports the possibility of selective targeting of fibrotic lesional fibroblasts (Fig. 1D). In addition, it extends the uPAR-alpha5beta1 interaction to the tissue level, because we have shown that alpha5beta1 is the predominant integrin mediating migration in our cells (10).
Plasma soluble uPAR levels have been associated with fibrotic disease of the kidney glomeruli and other acute illnesses, although the precise cellular source of the uPAR in plasma has yet to be identified (31,(57)(58)(59)(60). Our data demonstrate the novel finding that plasma uPAR levels are elevated in IPF and correlate with disease severity, using the standard measure of the DLCO. DLCO is also an important prognosticator in IPF patients, opening up the possibility that plasma uPAR might act as a surrogate marker for prognosis in IPF (61)(62)(63)(64)(65).
uPAR ligation has been demonstrated to have protean effects on tissue repair and disease pathobiology. For example, ligated uPAR can mediate fibronectin matrix assembly by activating Src, epidermal growth factor receptor, and beta1 integrin; and ligated uPAR has also been shown to mediate cancer cell migration/transformation and angiogenesis (1,47,48,66). Similarly, Shc-initiated signals have been shown to be involved in multiple illnesses related to aging, including cardiovascular and neurological diseases and cancer (67). Therefore, it is possible that the uPAR ligation-initiated caveolin-Fyn-Shc signaling described herein could be important not only to IPF but to these other diseases of aging as well.
Although we have shown that lipid raft-localized, ligated uPAR signals through alpha5 and Fyn to increase migration and that uPAR may be related to IPF pathogenesis, there are limitations to our work. Although lipid raft isolation results can vary depending on methodology (4,68), we used an identical, established protocol under all conditions (37) and, importantly, verified the isolation methods and the effects of methyl-beta-cyclodextrin with several known lipid raft and non-raft markers. In addition, our described mechanism of uPAR ligation modulating migration is distinct from the integrin-independent mechanism described previously (69-71). In that prior work, direct binding of raft-localized, dimeric cell surface uPAR to the somatomedin B region of vitronectin mediates cell adhesion and lamellipodia formation through the MAPK pathway in epithelial cells (69-71). Also, other work supports the hypothesis that the ameliorative effects of exogenous uPA on lung fibrosis in animal models of human IPF are a consequence of the proteolytic effects of uPA (72-75).
In summary, uPAR ligation leads to the enhanced attachment and migration of human lung fibroblasts through recruitment of alpha5beta1 integrin and caveolin-Fyn-Shc signaling to lipid rafts. Upon doing so, the predominant integrin-initiated signal for migration switches from a FAK-based signal to a lipid raft-localized, caveolin-acylated Fyn-dependent signal (see proposed model in Fig. 11). Further, we have shown the importance of uPAR signaling to the hypermotile phenotype in IPF fibroblasts, whether using a system with matrix protein-coated plastic wells or physiologically relevant matrix from actual fibrotic lung. Taken together, these observations suggest that the unique lipid raft signaling platforms that are formed under conditions of uPAR ligation mediate the migratory processes in the fibrotic lung. In doing so, they may provide a novel selective target for strategies designed to ameliorate IPF and other devastating fibrotic diseases. In addition, the results of this study may be applicable to other diseases in which uPAR signaling has been reported to play a role, including severe sepsis, angiogenesis, and many cancers (1,30,32,43,57).
A Cost and Passenger Responsible Optimization Method for the Operation Plan of Additional High-Speed Trains in a Peak Period
In the peak period of a railway system, operators typically add additional trains to provide increased capacity to satisfy the increasing passenger demand. The paper proposes a new optimization framework for designing the operation plan, which includes the number of additional trains, train type, stop plan, and timetable, for additional trains in a peak period. A space-time network representation is used to obtain a feasible primary operation plan by finding a set of feasible space-time paths in the space-time network. Considering simultaneously the passenger demand and the trains' total travel times, we formulate a biobjective integer programming model for generating a cost and passenger responsible primary operation plan. A set of loading capacity constraints are formulated in the model to guarantee a suitable loading capacity for each station's passenger demand and better service for passengers. The CPLEX solver is used to solve the proposed model and to generate the optimal operation plan. Two sets of numerical experiments are conducted on a small-scale rail corridor and on the Wuhan-Guangzhou rail corridor to evaluate the performance of the proposed method. The results of the experiments show that the primary operation plan can be obtained within an acceptable computation time.
Introduction
With the development of science, technology, and economics, numerous kilometers of high-speed railway have been constructed in some countries to meet increasing passenger demand. The efficient operation of high-speed trains has become an attractive issue in recent years. A plan that is related to the operation of high-speed trains is called an operation plan, for example, a plan for the number of trains, a timetable, a stopping plan, a plan for rolling stock, or a crew plan. A well-designed operation plan can reduce traveling times and provide better service for passengers. From the perspective of the railway company, a satisfactory operation plan results in lower operational cost and lower energy consumption. This paper aims to solve the problem of the operation plan for additional high-speed trains in peak periods.
Motivation.
The primary motivation of this study is the insufficient railway transport capacity during peak periods.
Passenger demand for rail transport fluctuates. The demand remains at a high level in some periods, which are called peak periods, while the demand is at a normal level in other periods, which are called off-peak periods. Consider rail transport in China as an example: the passenger demand surge is very strong in the period of the Chinese Spring Festival. The number of passengers in the month of the Chinese Spring Festival was 413 million in 2019. Although the passenger demand fluctuation was considered in the initial stage of the operation plan design, there were still passengers who could not buy tickets to take a high-speed train during the peak period due to the increased passenger demand. In 2019, the number of passengers in the period of the Chinese Spring Festival increased by 8.3% over the previous year. The addition of trains has become a necessary strategy of railway companies for increasing the transport capacity in peak periods. A significant issue that is faced by railway operators in practical operations is the design, within a short time horizon, of a satisfactory operation plan for additional trains in peak periods that can satisfy large passenger demand and enhance the utilization efficiency of the transport resource. The railway organizational structure differs among countries. In this paper, the rail undertaking (which operates the trains) and the infrastructure manager (which constructs the timetable) are the same organization, which is called the railway company in this paper. The railway company formulates a long-term operation plan via a complex planning process at the beginning of the construction of a railway corridor. Once the operation plan is established, the railway company intends to use it for a long time with little modification. Robenek et al. [1] developed a flow chart, which is presented as Figure 1, that illustrates the railway company's process of designing a complete operation plan according to the description in Caprara et al. [2]; a similar process is described by Lusby et al. [3]. However, operators typically adopt the strategy of adding trains only several weeks in advance, and designing the operation plan for additional trains step by step as in Figure 1 would take a long time, especially in the strategic and tactical stages. Aiming to produce an operation plan for additional trains over a short-term horizon, we are interested in how to design a primary operation plan that includes the number of additional trains, train type, stop plan, and timetable. In the process of Figure 1, the decisions on the number of trains, train type, and stop plan belong to line planning, while timetable generation is at the tactical level of decision-making, which is restricted by the results of line planning. Generating a comprehensive operation plan for all of them at the tactical level can improve efficiency.
The content of the primary plan in our study is highlighted in red in Figure 1. Generating the primary operation plan at the tactical level can handle the travel rush in the short term and complement the existing operation plan. Designing a primary operation plan, which includes the number of trains, train type, stop plan, and timetable, for additional trains has not attracted sufficient attention, and we hereinafter address this issue formally.
Literature Review.
The problem studied in this paper is the problem of scheduling additional trains. In this literature review, we first give an overview of the literature on scheduling additional trains. Studies that address this problem directly are rare because it has not received wide attention. In addition, the purpose of our study is to obtain a comprehensive operation plan for additional trains. The decision variables in our problem are the number of additional trains, the train type, the stop plan, and the timetable.
These variables are also studied in the line planning problem (LPP) and the train timetabling problem (TTP). Thus, our problem falls into the broad categories of LPP and TTP, and the related literature on LPP and TTP is also reviewed in this section. The problem of adding additional trains has not attracted much attention. Only a few studies [4-6] focused directly on the problem of designing an operation plan or timetable for additional trains. Burdett and Kozan [4] considered the problem of scheduling additional trains as a hybrid job shop scheduling problem with time window constraints. The original timetable was fine-tuned according to the operations and the demand of various customers or operators. A constructive algorithm and a simulated annealing approach were used to solve this problem. Cacchiani et al. [5] studied the problem of scheduling additional freight trains in a timetable of existing passenger trains under the constraint that the timetable of the passenger trains cannot be changed. An integer programming model was established, in which an ideal timetable of additional freight trains was specified and adjusted according to constraints that ensure safe operation.
The objective was to add as many trains as possible and to minimize the difference between the actual timetable of the additional freight trains and the ideal timetable. Gao et al. [6] considered the problem of scheduling additional trains on a high-speed rail corridor on which only passenger trains run. To add more trains, the timetable of the original trains may need to be modified in that study. A biobjective mixed-integer linear programming model was formulated, of which the objectives were to minimize the total travel times of the additional trains and the adjustment of the timetable of the original trains. Although all three of these studies considered the problem of adding additional trains, they did not consider the passenger demand. In addition, these studies focused on the timetable of the additional trains without considering the other parts of the operation plan; for example, the number of trains and the stop plan have not been studied. In particular, the suitable number of additional trains according to passenger demand is an important issue that has not attracted wide attention. This gap is addressed herein.
Generally, line planning and timetable generation are performed separately: the line plan is generated first, and the timetable is specified according to it. Pouryousef and Lautala [7] presented a hybrid simulation framework to improve the capacity utilization of the railway and the timetable. The hybrid simulation experiments in that paper were implemented with two types of simulation software, namely, a timetable-based simulation system and a non-timetable-based one. The hybrid simulation approach makes use of the complementary features of the two and uses the output of one simulation system as input for the other. Although the hybrid simulation framework can obtain timetable and non-timetable plans simultaneously, the approach needs long running times and a computer with very high performance.
The joint modeling method is used in this paper to obtain the primary operation plan, including the number of trains, the type of each train, the stop plan, and the timetable. Yang et al. [8] first proposed a collaborative optimization framework for the stop planning and timetabling problems.
In previous research, the timetable had to be regenerated to meet prespecified stop plan constraints. In Yang et al. [8], the decision variables of the stop plan and the timetable are optimized jointly in one model, which reduces the complexity of the problem. This paper further extends the collaborative optimization method of Yang et al. [8] to the problem of adding additional trains, wherein the variables of the number of trains, the stop plan, the train type, and the timetable are jointly optimized. Although the specialized operation plan for additional trains has not been studied widely, LPP and TTP are two related and popular topics that are usually studied separately. LPP specifies the number of trains, the type of each train, and the stop plan of each train. Two main conflicting objectives exist in the available optimization models: maximizing the benefits of passengers and minimizing the operational cost of the rail system [2]. In general, three types of models are proposed in the literature on LPP, namely, (1) cost-responsible models, (2) passenger responsible models, and (3) cost and passenger responsible models. Cost-responsible models are established for the purpose of minimizing the operational cost [9-12]. Passenger responsible models for LPP focus on maximizing the level of service for passengers, that is, the number of direct travelers and the travel times and waiting times of all passengers [13-15]. Cost and passenger responsible models give a trade-off between the cost of the railway company and the satisfaction of passengers [8,16-18]. Passenger demand is usually ignored in the literature on adding trains, even though meeting passenger demand is the main purpose of adding trains. To balance the benefits of the railway company and the satisfaction of passenger demand, this paper constructs a biobjective model to design a cost and passenger responsible operation plan for additional trains. The stop planning problem is a subproblem of LPP. A stop plan determines the stations at which trains stop. The simplest stop plan pattern is all-stop, in which trains stop at all stations along the railway corridor and pick up all passengers at each station. The all-stop pattern can meet all passenger demand but increases the total travel times and running distances, which benefits neither the service for passengers nor the operational cost. To balance passenger demand, the service for passengers, and the operational cost, skip-stop patterns are widely applied in practice: trains may skip stations with low demand to reduce the total travel times and the operational cost. Lee et al. [19,20] presented an optimization model under the skip-stop pattern and designed an efficient genetic algorithm to obtain a skip-stop strategy. Niu et al. [21] adjusted the train timetable for a rail corridor under a predetermined skip-stop pattern to minimize the total passenger waiting time at stations. To trade off passenger demand and operational cost, this paper also adopts the skip-stop pattern for the problem of adding additional trains.
Research on TTP generally aims to provide an optimal timetable, which benefits both the rail company and the passengers, for a specific number of trains on a certain rail corridor or network. A growing number of studies consider demand-oriented TTP [22-25]. Two types of models are used to describe TTP: (1) mixed-integer linear programming (MILP) models and (2) integer programming (IP) models. MILP models are used to solve TTP in [6,24]. In a mixed-integer linear model, the time variables are represented by continuous variables, and many disjunctive constraints are introduced to describe the relationships between the continuous variables and the integer variables, for example, the relationships between the departure or arrival time variables and the stop plan variables. As the scale of the instances increases, the number of disjunctive constraints grows, and too many disjunctive constraints weaken the solution process. The other type of model is the integer programming model based on space-time networks. Space-time networks can well represent the spatial and temporal characteristics of railway systems, and problems in railway systems can be transformed into routing problems in a space-time network. IP models based on the space-time representation are used in several TTP studies [5,26,27]. This paper adopts an IP model based on a space-time network to solve the problem of designing an operation plan for additional trains.
Contributions.
This paper makes three main contributions. First, the traditional process of designing an operation plan is simplified into a new process, as illustrated in Figure 1. This process is suitable for designing an operation plan for additional trains because the operation plan should be established in a short time to quickly provide suitable transport capacity for the passenger demand in peak periods. A joint optimization framework is used to determine the number of additional trains, the stopping plan, the train types, and the timetable simultaneously. However, due to the many variables, the complexity of the resulting model is high. Thus, using the space-time network representation, we transform the problem of designing the primary operation plan into a multiple-train path planning problem in a space-time network. By finding a set of feasible paths for the additional trains, a feasible primary operation plan for the additional trains is obtained.
Second, we present a new biobjective integer programming model for the design of the primary operation plan for additional trains on a long-trip high-speed rail corridor. Two conflicting objectives, namely, minimization of the total deviation between the provided transport capacity and the passenger demand and minimization of the total travel times of the additional trains, are introduced into the model. Via the formulation of the objective function on passenger demand and the loading capacity constraint, we can successfully generate a stopping plan for the additional trains that provides a loading capacity close to the passenger demand of each station. According to the objective function on the total travel times, trains skip stations with small passenger demand. The integration of the two objective functions via linear weighting yields a trade-off between the passenger demand and the travel times. The number of additional trains is also determined according to the weights of the two objectives. In addition, the attendance rate constraints guarantee that the stopping plan of each additional train provides not only sufficient loading capacity but also satisfactory service for passengers. The train type constraint maintains a specified proportion of the various types of trains to provide more choices for passengers. Safety headway constraints ensure that the additional trains run safely on the rail corridor and do not disturb the operation of the original trains, for the safety and stability of the whole rail system.

Third, two sets of experiments are conducted on a nine-station rail corridor and on the Wuhan-Guangzhou high-speed rail corridor to evaluate the effectiveness and efficiency of our proposed methods. We use the CPLEX solver to solve the proposed model. The results on the small-scale example demonstrate the satisfactory performance of the proposed methods. Furthermore, via the adoption of various sets of parameters and strategies, we evaluate the results of the large-scale experiments. The experimental results demonstrate that our proposed methods can generate an optimal primary operation plan within acceptable computation times. The selected parameters and strategies affect the results of the experiments, and operators can select the parameters according to the application requirements.

The remainder of this paper is organized as follows: In Section 2, we provide a detailed problem statement. In Section 3, we present the assumptions and notation of the considered problem; then, a biobjective integer programming model for the design of the primary operation plan is formulated, and the complexity of the model is discussed. In Section 4, we conduct two sets of experiments to evaluate the performance of the proposed approach. In Section 5, the conclusions of this study are presented and future work is discussed.
Problem Statement
In this paper, a double-track railway corridor is considered as the physical environment of the problem. The railway corridor consists of stations that are indexed by $i \in S = \{1, 2, \ldots, |S|\}$, and adjacent stations are connected by a segment, which includes double tracks; see Figure 2. Moreover, in this double-track railway corridor model, trains that travel in the inbound and outbound directions are mutually independent. Without loss of generality, in this study, only the scheduling of additional trains in the inbound segments is considered. The set of trains that are considered in the model is $K_1 \cup K_2$, where $K_1$ denotes the set of original trains in the off-peak period and $K_2$ denotes the set of additional trains that the operators expect to add during the peak period. In the off-peak period, only the original trains travel on the railway corridor, while in the peak period, the operators must add additional trains to improve the transportation capacity and to satisfy the passenger demand. The safe and efficient operation of the additional trains significantly impacts passengers' trips during the peak period. In this paper, we study the problem of designing a primary operation plan that includes the number of additional trains, the train types, the stopping plan, and the timetable for the additional trains.
As stated above, we aim to obtain the primary operation plan at the tactical level. Thus, we only consider the macroscopic demand at each station instead of counting the numbers of passengers boarding and alighting. The number of passengers at each station can be approximately obtained from historical passenger demand data. Obviously, the macroscopic demand causes some loss of accuracy; thus, the operation plan in this paper is only a primary operation plan, which needs to be adjusted and reoptimized in later stages.
The number of additional trains should meet the passenger demand during the peak period as much as possible but should not be so large as to cause a waste of transport capacity. Additionally, since the departure and arrival times of an additional train must maintain moderate time intervals, namely, the arrival headway and the departure headway, with respect to its adjacent trains to ensure the safety of operation, only finitely many trains can be added within a limited time horizon.
In this paper, we consider the Chinese high-speed railway corridor as the decision-making environment, and all the trains in this study are categorized into two types (denoted by G and D) according to their maximum allowed speeds. Here, G and D represent types of trains with maximum velocities of 350 km/h and 250 km/h, respectively. Since (1) there are only finitely many G-trains and (2) the cost and ticket price of D-trains are lower, a specified number of D-trains should always be present in the additional train plan. The stopping plan specifies whether or not a train stops at a station according to the predicted passenger demand. The stopping plan affects the quality of service for passengers and the trains' operating efficiency.
Thus, we should establish a trade-off between maximizing the travel convenience of passengers and minimizing the operational cost when we design the stopping plan for the additional trains. The timetable determines the arrival, departure, and dwelling times of each additional train at each station. In this study, the timetable of the original trains is fixed; thus, the timetable for the additional trains must satisfy headway constraints to ensure that the additional trains do not disturb the operation of the original trains.
Based on the spatial and temporal characteristics of the operation plan, the design of the primary operation plan for trains can be generalized to a space-time decision-making problem. A space-time network considers both physical paths and the time horizon, and it is a powerful tool for our problem. For convenience, the problem is treated with two processes: (1) the rail corridor is simplified as a physical path graph $G_1(V, E)$, in which stations are represented by nodes in $V$ and inbound/outbound segments are represented by arcs in $E$, and (2) the time horizon is separated into a set of timestamps, denoted by $T = \{0, \delta, 2\delta, \ldots, N\delta\}$, where $\delta$ is a unit interval of time and $N$ is a positive integer that is sufficiently large to ensure that the interval $[0, N\delta]$ covers the planning time horizon. With the timestamps of the time horizon, the physical path graph $G_1(V, E)$ is extended into a space-time network graph $G_2(S, S', A, A')$, in which $S$ and $A$ represent the sets of space-time nodes and space-time travel arcs that correspond to $V$ and $E$, respectively, in the physical network. In addition, dummy space-time nodes $S'$ that are related to $V$ and station arcs $A'$ are added to the space-time network.
A small-scale space-time network with the trajectory of one train is illustrated in Figure 3. We consider a physical path that consists of three segments from node 1 to node 4, which represents a rail corridor from station 1 (origin) to station 4 (destination). The time horizon is separated into the timestamps $\{0, \delta, 2\delta, \ldots, 21\delta\}$, which are embedded into the physical path for the construction of a two-dimensional network that has both spatial and temporal characteristics. Dummy space-time nodes that are related to each physical node at each timestamp are also added to the space-time network to represent the stopping plan. Each physical node of the physical path is associated with two space-time nodes at each timestamp. Travel arcs, which connect two space-time nodes of adjacent physical nodes, are considered as optional paths for the specification of the departure times, the arrival times, the link travel times, and the types of trains. In this paper, two types of travel arcs are defined according to the travel times of the two types of trains on each segment. For example, the travel arcs with shorter travel times represent optional paths of G-trains, and those with longer travel times represent optional paths of D-trains. In addition, two types of station arcs, which connect two space-time nodes of the same physical node, namely, stop and nonstop arcs, are defined to represent the stopping plans of trains, where the stop arcs describe the stopping plans and the dwelling times of trains at the stations. For example, in Figure 3, the space-time trajectory of a train is presented to illustrate the operating process: a D-train departs from station 1 (origin) at timestamp $5\delta$ and arrives at station 2 at timestamp $8\delta$; the train does not stop at station 2 and departs from station 2 at timestamp $8\delta$; next, the train arrives at station 3 at timestamp $10\delta$ and dwells for one time interval at station 3; after that, the train departs from station 3 at timestamp $11\delta$ and arrives at station 4 (destination) at timestamp $14\delta$.
Based on the space-time network in Figure 3, an example is presented in Figure 4 to illustrate the process of designing an operation plan for additional trains in the peak period. As shown in Figure 4, there are three original trains (T1, T2, T3), of which the origin station is station 1 and the destination is station 4. In an attempt to satisfy the passenger demand in the peak period, two additional trains, namely, T4 (a G-train) and T5 (a D-train), are added. In Figure 4, the black lines and red lines represent the space-time paths of the original and additional trains, respectively. According to the trajectories of the additional trains, various train types, stopping plans, and departure and arrival times are selected according to the operational requirements by finding paths in the space-time network. For example, T4 and T5 are assigned different train types because they select different types of travel arcs as paths. In addition, as the passenger demand differs among the stations, both T4 and T5 are scheduled to stop at station 2, while only T5 is scheduled to stop at station 3. According to this example, the problem of designing an operation plan for additional trains can be transformed into a multiple-train path planning problem in a space-time network. Furthermore, if binary decision variables are introduced as indicators of whether the space-time paths are selected or not, the feasible solutions can be obtained to represent the temporary plan for the additional trains; namely, we can model this problem as a 0-1 integer programming model.
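To make the construction above concrete, the following is a minimal Python sketch (not the authors' code) of the space-time network $G_2$ for the four-station example of Figure 3. The per-segment travel times, the dwell time, and $\delta = 1$ time unit are illustrative assumptions.

```python
# Minimal sketch of the space-time network G2: travel arcs per train type
# and stop/nonstop station arcs, assuming delta = 1 and hypothetical data.
from itertools import product

stations = [1, 2, 3, 4]                 # physical nodes V
horizon = range(0, 22)                  # timestamps 0..21 (delta = 1)
travel_time = {                         # assumed link travel times per train type
    ("G", (1, 2)): 2, ("G", (2, 3)): 2, ("G", (3, 4)): 2,
    ("D", (1, 2)): 3, ("D", (2, 3)): 2, ("D", (3, 4)): 3,
}
dwell = 1                               # assumed dwell time for a stop arc

travel_arcs, station_arcs = [], []
for (l, (i, j)), tau in travel_time.items():
    for t in horizon:
        if t + tau in horizon:
            # travel arc from dummy node (i', t) to travel node (j, t + tau)
            travel_arcs.append((i, j, t, l, tau))
for i, t in product(stations, horizon):
    station_arcs.append((i, t, "nonstop", 0))       # pass through at time t
    if t + dwell in horizon:
        station_arcs.append((i, t, "stop", dwell))  # dwell from t to t + dwell

print(len(travel_arcs), "travel arcs;", len(station_arcs), "station arcs")
```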
Mathematical Formulation
In this section, a mathematical model is constructed for obtaining the optimal operation plan for additional trains. First, several assumptions are made to simplify the problem; among them, the capacities of the stations and of the rolling stock are assumed to be unlimited.
Notations and Parameters.
For the reader's convenience, all the notation and parameters that are used in the study are defined in Table 1. The main indices are as follows:

$i, j$: indices of physical nodes, $i, j \in V$
$i', j'$: indices of the dummy physical nodes that correspond to $i, j$, with $i', j' \in V'$
$l$: index of train types, $l \in L$
$p$: index of stopping plans, $p \in P$
$(i, t)$: index of space-time travel nodes, $(i, t) \in S$
$(i', t)$: index of dummy space-time travel nodes, $(i', t) \in S'$
$(i, i+1)$: index of the physical links that represent segments
$t^{l}_{i,i+1}$: travel time of trains of type $l$ on segment $(i, i+1)$
Decision Variables.
To generate a feasible operation plan, we must specify the space-time trajectory of each train. Thus, the problem is transformed into an optimal path choice process for multiple trains in a space-time network, which involves three binary decision variables:

$x^{k}_{i',i+1,t,l}$: selection indicator of additional train $k$ for space-time travel arc $(i', i+1, t, l)$, which equals 1 if train $k$ of type $l$ enters segment $(i, i+1)$ at time $t$ and equals 0 otherwise;

$x^{k}_{i,i',t,p}$: selection indicator of additional train $k$ for space-time station arc $(i, i', t, p)$, which equals 1 if train $k$ chooses stopping plan $p$ at station $i$ at time $t$ and equals 0 otherwise;

$y_k$: selection indicator of additional train $k$ being added, which is also a binary variable and equals 1 if additional train $k$ is added in the optimal solution.

Variables $x^{k}_{i',i+1,t,l}$ and $x^{k}_{i,i',t,p}$ are associated with the generation of the feasible space-time path, which represents the feasible operation plan, for each train $k \in K_2$: $x^{k}_{i',i+1,t,l}$ determines the departure time and the train type of train $k \in K_2$, while $x^{k}_{i,i',t,p}$ determines the stopping plan of train $k \in K_2$. In addition, $K_2 = \{1, 2, \ldots, |K_2|\}$ denotes the set of additional trains that operators expect to add, where $|K_2|$ is the maximum number of additional trains required to provide the origin with sufficient loading capacity, as defined in equation (1). Not all trains in $K_2$ are added into the operation plan in the peak period under practical conditions. Therefore, we introduce the state variable $y_k$ into the model according to whether train $k \in K_2$ is added in the optimal solution or not, and the number of additional trains can be calculated as $N = \sum_{k \in K_2} y_k$.

3.3. Formulation of the Constraints. Four sets of constraints are considered in the process of designing the operation plan: (i) unique space-time path constraints; (ii) safety headway constraints; (iii) train type constraints; and (iv) loading capacity constraints. Detailed formulations of each set of constraints are presented in the following parts.
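Continuing the sketch from the problem statement, the three binary variable families could be declared as follows with the open-source PuLP modeller; the paper itself solves the model with CPLEX from MATLAB, so this is only an illustrative stand-in, and the candidate set of five trains is an assumption.

```python
# Sketch of the decision variables: one travel-arc and one station-arc
# indicator per (train, arc) pair, plus a per-train "is added" indicator y_k.
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary

prob = LpProblem("additional_trains", LpMinimize)
K2 = range(5)                     # assumed candidate additional trains |K2| = 5
x_travel, x_station = {}, {}
for k in K2:
    for idx, a in enumerate(travel_arcs):
        x_travel[k, a] = LpVariable(f"xt_{k}_{idx}", cat=LpBinary)
    for idx, a in enumerate(station_arcs):
        x_station[k, a] = LpVariable(f"xs_{k}_{idx}", cat=LpBinary)
y = {k: LpVariable(f"y_{k}", cat=LpBinary) for k in K2}  # train k actually added
```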
Unique Space-Time Path Constraints.
To guarantee that at most one connecting path is generated from the origin to the destination for each train $k \in K_2$ in the space-time network, a series of space-time path constraints are presented. We require that, if train $k \in K_2$ is added in the peak period, at most one travel arc is selected that corresponds to each physical link for each train $k \in K_2$. Similarly, at most one station arc is chosen that corresponds to each station for each train $k \in K_2$. Thus, constraints (3) and (4) are defined over the sums of $x^{k}_{i',i+1,t,l}$ over $t \in T$ and $l \in L$, and of $x^{k}_{i,i',t,p}$ over $t \in T$ and $p \in P$, respectively. Furthermore, to ensure that all the selected travel arcs and station arcs of each train $k$ can constitute a connecting path from the origin to the destination in the space-time network, we balance the incoming travel arc and the outgoing station arc for each space-time node $(i, t) \in S \setminus \{(1, t)\}$ and the incoming station arc and the outgoing travel arc for each dummy space-time node $(i', t) \in S'$.
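A hedged sketch of constraints (3)-(4) and one family of the flow-balance coupling follows, using the assumed arc tuples from the network builder above; the symmetric balance from station arcs back to outgoing travel arcs is omitted for brevity.

```python
# Sketch: at most one travel arc per segment and one station arc per station
# for each candidate train, both tied to y[k]; plus travel-in == station-out
# balance at every intermediate space-time node (i, t).
from pulp import lpSum

segments = [(1, 2), (2, 3), (3, 4)]
for k in K2:
    for (i, j) in segments:
        prob += lpSum(x_travel[k, a] for a in travel_arcs
                      if (a[0], a[1]) == (i, j)) <= y[k]
    for i in stations:
        prob += lpSum(x_station[k, a] for a in station_arcs
                      if a[0] == i) <= y[k]
    for i in stations[1:-1]:
        for t in horizon:
            # a travel arc arriving at (i, t) must be followed by one station arc
            inflow = lpSum(x_travel[k, a] for a in travel_arcs
                           if a[1] == i and a[2] + a[4] == t)
            outflow = lpSum(x_station[k, a] for a in station_arcs
                            if a[0] == i and a[1] == t)
            prob += inflow == outflow
```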
Safety Headway Constraints.
To ensure safe operation, the additional trains must be scheduled under operational restrictions on the departure and arrival times to avoid collisions. Since the departure and arrival times of trains are represented by space-time arcs in the space-time network, the operation plans of the additional trains can be controlled by imposing restrictions on the selection of the space-time paths. Thus, the additional trains must avoid selecting incompatible arcs as paths. Incompatible arcs are travel arcs whose joint selection with arcs already chosen by other trains could cause a collision. As a result, in this model, the sum of the selection indicators over any set of mutually incompatible arcs should not exceed 1. These constraints are detailed as follows. (1) Departure and Arrival Headway Constraints. For the security of interstation operations, if two consecutive trains depart from or arrive at a station, we should impose a time interval between the departure/arrival times of these two trains in preparation for each train's arrival or departure operations. In this study, these departure and arrival time intervals are called the departure and arrival headways. For each added train, the departure and arrival headways should be enforced not only with respect to the arrival/departure operations of adjacent additional trains but also with respect to the original trains.
An implementation case of the departure headway constraint is illustrated in Figure 5. For two consecutive trains that depart from station $i$ in succession, if the first train departs at time $t_0$, then, to satisfy the specified headway constraint, the second train should not depart from station $i$ earlier than $t_0 + h^{d}_{\min}$, where $h^{d}_{\min}$ is the minimal departure headway.
For example, in Figure 5, for the travel arc that is associated with space-time node $(i', t_0)$, the next compatible arc must be a travel arc associated with space-time node $(i', t_0 + h^{d}_{\min})$ or later; namely, all travel arcs that are related to the space-time nodes within the range $(i', t_0), \ldots, (i', t_0 + h^{d}_{\min} - \delta)$ are mutually incompatible, where $\delta$ is a unit time interval. As a result, the departure headway constraint requires that the sum of the decision variables of the additional trains and the corresponding parameters of the original trains that are involved with these incompatible arcs not exceed 1. Similarly, an implementation case of the arrival headway constraint is illustrated in Figure 6. If the station arc that is associated with space-time node $(i, t_0)$ is selected as the path of the first train, which arrives at station $i$ at time $t_0$, then the next train must select a station arc associated with space-time node $(i, t_0 + h^{a}_{\min})$ or later as its path to avoid conflict; namely, the station arcs that are related to the space-time nodes within $(i, t_0), \ldots, (i, t_0 + h^{a}_{\min} - \delta)$ are mutually incompatible, and the arrival headway constraint bounds the corresponding sum over these incompatible arcs by 1. (2) Tracking Headway Constraint. The tracking headway constraint is applied to avoid a collision between two consecutive tracking trains that are traveling on the same segment. There are three types of tracking operation: two trains of the same type; a D-train following a G-train; and a G-train following a D-train. Based on the assumption that trains of the same type share the same velocity and travel times, trains in the first type of tracking operation can operate safely under the departure and arrival headway constraints. In addition, since a G-train is much faster than a D-train, collisions never occur in the second type of operation. However, two consecutive trains in the third type of operation are at risk of collision; see Figure 7.
An example of the third type of tracking operation is illustrated in Figure 7, where $t^{2}_{i,i+1} - t^{1}_{i,i+1}$ denotes the difference between the D-train's travel time and the G-train's travel time on segment $(i, i+1)$. For this segment, if $t^{2}_{i,i+1} - t^{1}_{i,i+1} > \delta$, collisions may occur between these two consecutive trains: if the D-train departs from station $i$ at time $t_0$ and the following G-train departs from station $i$ at time $t_0 + \delta$ or $t_0 + 2\delta$, a train collision would result. To avoid a collision, the following train should depart from station $i$ no earlier than $t_0 + t^{2}_{i,i+1} - t^{1}_{i,i+1}$. Hence, the arc $(i', i+1, t_0, 2)$ is incompatible with the travel arcs of the following G-train that are associated with the space-time nodes within this window, and the decision variables of the additional trains and the parameters of the original trains that are involved with these incompatible arcs must again sum to at most 1.
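The incompatible-arc logic can be sketched as follows for the departure headway; the original-train departure data and the headway of two time units are illustrative assumptions, and the arrival and tracking headway constraints would be generated analogously over their own incompatible-arc windows.

```python
# Sketch of the departure-headway restriction: for each station i and window
# start t0, at most one departure (additional or original) may fall inside
# [t0, t0 + h_dep - delta]. Original-train departures enter as constants.
h_dep = 2                           # assumed minimal departure headway, in delta units
orig_departures = {(1, 5): 1}       # assumption: an original train leaves station 1 at t = 5

for i in stations[:-1]:
    for t0 in horizon:
        window = [t for t in range(t0, t0 + h_dep) if t in horizon]
        expr = lpSum(x_travel[k, a] for k in K2 for a in travel_arcs
                     if a[0] == i and a[2] in window)
        const = sum(orig_departures.get((i, t), 0) for t in window)
        prob += expr + const <= 1
```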
Train Type Constraints.
In practice, once a train has been assigned to run on a specified rail corridor from origin to destination, its train type never changes. In this paper, a train type constraint is applied to ensure that the train type remains the same on all travel arcs that are chosen as the space-time path of each additional train $k \in K_2$. Although the D-train is slower than the G-train, the ticket price of the D-train is lower. To provide more choices for passengers, we impose a train type constraint that ensures that at least a specified number of D-trains are involved in the additional train plan, where $N^{D}_{\min}$ is the threshold value of the required minimum number of D-trains.
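A sketch of the two train-type constraints follows; the auxiliary type indicator $z_{k,l}$ is an artifact of this sketch rather than notation from the paper.

```python
# Sketch: each added train has exactly one type, may only use travel arcs
# of that type, and at least N_D_MIN added trains must be D-trains.
N_D_MIN = 1                         # assumed minimum number of D-trains
z = {(k, l): LpVariable(f"z_{k}_{l}", cat=LpBinary)
     for k in K2 for l in ("G", "D")}
for k in K2:
    prob += z[k, "G"] + z[k, "D"] == y[k]     # one type per added train
    for l in ("G", "D"):
        # if z[k, l] = 1, no travel arc of the other type may be chosen;
        # len(segments) is a valid bound since each segment yields <= 1 arc
        prob += lpSum(x_travel[k, a] for a in travel_arcs
                      if a[3] != l) <= len(segments) * (1 - z[k, l])
prob += lpSum(z[k, "D"] for k in K2) >= N_D_MIN
```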
Loading Capacity Constraints
(1) Constraint on the Loading Capacity for Each Station. The objective of this paper is to provide sufficient loading capacity to satisfy the increasing passenger demand during the peak period. In this paper, only long-trip trains on the rail corridor are considered, which not only can provide abundant loading capacity to satisfy most of the passenger demand but also avoid a waste of transportation resources. Passengers whose demand is not satisfied by this operation plan can commute between their origin and destination by taking other short-trip trains and by transferring several times.
However, since the track capacity of the rail corridor and the number of additional trains that can be added into the operation plan are both finite, it is possible that the maximum loading capacity that is provided by these long-distance trains will still fail to satisfy the passenger demand of each station in the peak period. Mathematically, constraint (10) requires that the total loading capacity of the additional trains and the original trains that is provided to each station $i \in S$ not exceed the passenger demand at that station, where $q^{k}_{i}$ represents the loading capacity of train $k$ at station $i$, which is determined mainly by the type of the train, the level of the station, and empirical historical data on passenger demand, among other factors, and $Q_i$ is the estimated passenger demand at station $i$, which is obtained from historical travel data. As specified in Assumption 5, instead of considering the exact numbers of passengers who get on and off each train at each station, the passenger demand at each station is considered from a macroscopic perspective.
(2) Attendance Rate Constraints. These constraints are used to restrict the loading capacity of each additional train in the operation plan. The attendance rate is an important factor for measuring the level of utilization of a train's capacity: the attendance rate $\mu_k$ of train $k$ is calculated by dividing the total passenger load of the train by the maximum capacity of the train. Under an excessive attendance rate, too many passengers are loaded onto a train, which leads to low-quality service.
Thus, an attendance rate constraint is imposed to limit the attendance rate of each additional train $k \in K_2$ to avoid overloading the train, where $\mu_{\max}$ denotes the threshold value of the maximum attendance rate of each train. In contrast, under a low attendance rate, the train's capacity is underutilized. To avoid a waste of the train's capacity, the attendance rate of each added train in the optimal solution should exceed the minimum attendance rate $\mu_{\min}$. In addition, if train $k$ is not added, $\mu_k$ is set to zero, since not all trains in $K_2$ will be added in the final optimal solution. Thus, we use the binary variable $y_k$ to formulate these two disjunctive constraints, one of which takes the form

$$\sum_{(i',i+1,t,l) \in A} x^{k}_{i',i+1,t,l} \le M (1 - y_k), \quad \forall k \in K_2, \; \forall i \in V, \tag{16}$$

where $\mu_{\min}$ is the threshold value of the required minimum attendance rate of each train and $M$ is a sufficiently large number. At least one of these two constraints should be satisfied.
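The attendance-rate bounds, including a big-M linearization in the spirit of the disjunctive lower bound above, might be sketched as follows; the per-station loads, the capacity of 400, and the big-M value are assumptions of this sketch.

```python
# Sketch: load(k) is the capacity train k offers over its chosen stops; bound
# it between MU_MIN * CAP and MU_MAX * CAP whenever the train is added.
CAP, MU_MIN, MU_MAX, BIG_M = 400, 0.9, 1.2, 10_000
q = {1: 300, 2: 120, 3: 80, 4: 0}   # assumed load picked up at each stop

def load(k):
    # capacity-weighted load over the stations where train k stops
    return lpSum(q[a[0]] * x_station[k, a]
                 for a in station_arcs if a[2] == "stop")

for k in K2:
    prob += load(k) <= MU_MAX * CAP * y[k]                         # no overload
    prob += load(k) >= MU_MIN * CAP * y[k] - BIG_M * (1 - y[k])    # bind only if added
```

If y[k] = 0, the big-M term relaxes the lower bound entirely while the upper bound forces the load to zero, mirroring the role of the disjunction in the text.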
Objective.
To improve the quality of service for passengers, operations with shorter travel times and dwelling times are preferred. In addition, to minimize the cost to the railway company, the number of additional trains should be minimized in the operation plan under the condition that as much of the passenger demand as possible is satisfied. Thus, the first objective is to minimize the additional trains' total travel times, $T_{total}$. The first term of this objective is the total link travel time, which determines the number of additional trains and the types of the trains, since the link travel times are fixed for each train type. The second term is the total train dwelling time, which is related to the stopping plan of the additional trains.
To satisfy as much passenger demand as possible in the peak period, the difference between the total passenger demand and the total supply capacity of all trains, namely, both the original trains and the additional trains, should be minimized. Thus, the second objective function minimizes this difference, $Q_d$. Here, $Q$ is the total passenger demand; the second term of this objective is the sum of the loading capacities of the original trains, and the third term is the sum of the loading capacities of the additional trains. The two objectives that are specified above may conflict with each other in the scheduling process; that is, more additional trains and stops can decrease the number of unsatisfied passengers but inevitably increase the total travel times. To resolve this potential conflict, we use a linear weighting method to trade off between these two conflicting objectives. Due to the difference in the dimensions of the travel times and the passenger demand, we normalize the two objective functions:

$$\bar{T}_{total} = \frac{T_{total} - T^{\min}_{total}}{T^{\max}_{total} - T^{\min}_{total}}, \qquad \bar{Q}_{d} = \frac{Q_{d} - Q^{\min}_{d}}{Q^{\max}_{d} - Q^{\min}_{d}},$$

where $\bar{T}_{total}$ and $\bar{Q}_{d}$ are the normalized values of $T_{total}$ and $Q_{d}$, respectively; $T^{\min}_{total}$ and $Q^{\min}_{d}$ are the minimum values of $T_{total}$ and $Q_{d}$, respectively, under constraints (3)-(16); and $T^{\max}_{total}$ and $Q^{\max}_{d}$ are the corresponding maximum values. Based on the discussion above, we model the linear weighted objective function as

$$\min \; Z = \theta_{1} \bar{T}_{total} + \theta_{2} \bar{Q}_{d},$$

where $\theta_1$ and $\theta_2$ are the prespecified weights of the two normalized objective functions. According to the practical operation scenario, we empirically determine two suitable parameters to obtain the optimal temporary plan, in which the total travel times and the difference between the total passenger demand and the total loading capacity are minimized.
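The weighted-sum scalarization could then be assembled as below, continuing the PuLP sketch; in practice the normalization bounds would come from separate single-objective runs, and the bounds, the total demand, and the original-train supply used here are placeholders.

```python
# Sketch of the objective: min-max normalised T_total and Q_d combined with
# weights theta1/theta2. Constants marked "assumed" are not from the paper.
theta1, theta2 = 0.1, 0.9
T_MIN, T_MAX = 200.0, 700.0          # assumed bounds on T_total
Q_MIN, Q_MAX = 0.0, 5000.0           # assumed bounds on Q_d
Q_TOTAL, ORIG_SUPPLY = 5000, 2000    # assumed total demand and original-train supply

T_total = lpSum(a[4] * x_travel[k, a] for k in K2 for a in travel_arcs) \
        + lpSum(a[3] * x_station[k, a] for k in K2 for a in station_arcs
                if a[2] == "stop")
Q_d = Q_TOTAL - ORIG_SUPPLY - lpSum(load(k) for k in K2)

prob += (theta1 / (T_MAX - T_MIN)) * (T_total - T_MIN) \
      + (theta2 / (Q_MAX - Q_MIN)) * (Q_d - Q_MIN)
```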
Complexity of the Model.
Two types of binary decision variables are proposed in this model. The decision variables of the first type, namely, $x^{k}_{i',i+1,t,l}$ and $x^{k}_{i,i',t,p}$, determine the space-time paths of the additional trains in the space-time network.
The decision variables of the other type, namely, $y_k$, indicate whether each additional train is added or not; they are defined to formulate constraints (3), (4), (15), and (16). In addition, all constraints in the model are linear equalities or inequalities.
The objective function of this model is a combination of two normalized linear objective functions with linear weighting. Consequently, our model is a multiobjective 0-1 integer linear programming model.
In the following, the complexity of the model is discussed. The total numbers of decision variables and constraints are listed in Table 2, where the values are the possible maximum values. According to Table 2, the complexity of this model depends on the number of stations on the rail corridor $|S|$, the number of time intervals $|T|$, the expected number of additional trains $|K_2|$, the number of train types $|L|$, and the number of stopping plans $|P|$.
An example is presented to illustrate the complexity of this model more concretely. Five additional trains on a rail corridor with 20 stations, 2 types of trains, and 2 stopping plans for each train at each station are considered. When a space-time network with a time horizon of 250 timestamps is constructed, there are 47,500 variables with respect to $x^{k}_{i',i+1,t,l}$, 50,000 variables with respect to $x^{k}_{i,i',t,p}$, and 5 variables with respect to $y_k$. This results in a large-scale 0-1 integer linear programming model with a total of 97,505 binary decision variables.
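The quoted counts can be verified with a few lines of arithmetic:

```python
# Variable-count check for the 20-station example: travel-arc variables are
# |K2| * |segments| * |T| * |L|; station-arc variables are |K2| * |S| * |T| * |P|.
K2_n, stations_n, T_n, L_n, P_n = 5, 20, 250, 2, 2
n_travel = K2_n * (stations_n - 1) * T_n * L_n   # 5 * 19 * 250 * 2 = 47,500
n_station = K2_n * stations_n * T_n * P_n        # 5 * 20 * 250 * 2 = 50,000
n_y = K2_n                                       # 5
print(n_travel, n_station, n_y, n_travel + n_station + n_y)  # 47500 50000 5 97505
```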
Numerical Experiments
In this section, several numerical experiments are conducted to evaluate the effectiveness and efficiency of our proposed model, and the IBM ILOG CPLEX 12.5 solver is used to solve the 0-1 integer programming model. All experiments are run on a computer with an Intel Core i7 4790K CPU and 8 GB of RAM.
Small-Scale Case Study.
In this case study, we consider an inbound single-track rail corridor with 9 stations and 8 segments; see Figure 8. These stations are numbered consecutively from 1 to 9 along the same inbound direction, and two types of high-speed trains that differ in terms of speed, namely, G-trains and D-trains, are considered in this experiment. The link travel times of the two types of high-speed trains on each segment are presented in Figure 8. For simplicity, the dwelling times of all trains at each station are set to 2 minutes. The minimum departure and arrival headways are both set to 2 minutes to ensure the safe operation of the additional trains and to ensure that the operation of the original trains is not disturbed.
In this experiment, there are also 6 original G-trains, which are labeled G1 to G6 according to their departure times and type, in the off-peak period, and the resulting operation plan is presented in Figures 9 and 10. The loading capacity of each train at each station is listed in Table 3. The passenger demand in the peak period and the supply capacity of the original trains in the off-peak period at each station are also listed in Table 3. To satisfy as much of the passenger demand in the peak period as possible, the expected number of additional trains in this case is 5 according to the passenger demand at the origin; see equation (1). In addition, the maximum capacity of each train in this experiment is assumed to be 400, and a minimum number of D-trains $N^{D}_{\min}$ is imposed. The attendance rate of each train should fall within the range $[0.9, 1.2]$ to avoid a waste of transport capacity and to improve the service. In this experiment, the time interval $\delta$ in the space-time network is set to 1 minute, and we conduct this experiment with a time horizon of $[0, 160]$ minutes. The weight coefficients of the objective function are set to $\theta_1 = 0.1$ and $\theta_2 = 0.9$.
Based on the discussion above, we implemented the model on the MATLAB platform and obtained the optimal solution by using the CPLEX solver. In the resulting optimal solution, the objective value of the total travel time $T_{total}$ is 476, and the value of the unsatisfied passenger demand $Q_d$ is 245. The optimal timetable and stopping plan of the trains are presented in Figure 9, in which the black lines indicate original trains and the red lines denote additional trains. In Figure 10, the solid dots denote that the train stops at the corresponding station for passenger boarding/alighting, while the hollow dots indicate that the train does not stop at that station. The supply capacity of each station in the peak period that is derived from the resulting optimal solution is listed in Table 3. As presented in Figure 9, a temporary operation plan is obtained by adding four additional trains in the peak period. In addition, the departure and arrival time intervals of these additional trains ensure their safe operation and noninterference with the operation of the original trains.
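Continuing the PuLP sketch, solving the model and reading back a plan might look as follows; the open-source CBC solver stands in for CPLEX here, and the printed summary is only illustrative.

```python
# Solve the sketch model and extract, per added train, its type, its ordered
# departures (station, time), and the stations at which it stops.
from pulp import PULP_CBC_CMD, value

prob.solve(PULP_CBC_CMD(msg=False))
for k in K2:
    if value(y[k]) > 0.5:
        path = sorted((a for a in travel_arcs if value(x_travel[k, a]) > 0.5),
                      key=lambda a: a[2])
        stops = sorted(a[0] for a in station_arcs
                       if a[2] == "stop" and value(x_station[k, a]) > 0.5)
        print(f"train {k}: type {path[0][3] if path else '?'}, "
              f"departures {[(a[0], a[2]) for a in path]}, stops {stops}")
```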
In this case, according to the passenger demand at the origin, the expected number of additional trains is set to 5. However, in the resulting optimal solution, only four additional trains are added into the operation plan, which balances the total travel times and the passenger demand. These four additional trains increase the level of satisfaction of the passenger demand from 54% to 95%. One more additional train would not only increase the total travel times of the trains but also decrease their attendance rates, thereby wasting the transport resources of the rail company. The number of additional trains that is derived from the resulting optimal solution sometimes differs from the expected number of additional trains because the optimal number of additional trains is influenced by the passenger demand not only at the origin station but also at the intermediate stations. In addition, according to Figure 10, the passenger demand at the intermediate stations influences the stopping plan of the additional trains. All additional trains are scheduled to stop at stations 2 and 3 to provide sufficient loading capacity because the passenger demands of these two stations are much larger than those of the other stations. At the other stations, not all trains are scheduled to stop; for example, only D9 is scheduled to stop at station 5.
Large-Scale Experiments on the Wuhan-Guangzhou High-Speed Rail Corridor.
In this study, a large-scale experiment in the operational environment of the Wuhan-Guangzhou high-speed railway corridor in China is conducted, in which a temporary plan is designed for additional trains in the peak passenger demand period.
Basic Experiment on the Wuhan-Guangzhou High-Speed Rail Corridor.
The experiment is conducted within the time horizon of $[0, 600]$ minutes on the Wuhan-Guangzhou high-speed rail corridor, and two types of trains, namely, G-trains and D-trains, are considered. The length of each segment and the travel times of these two types of trains are listed in Table 4. For simplicity, the dwelling times of the trains are set to 4 minutes at Changsha South station and to 2 minutes at the other stations.
In this study, we discuss the passenger demand only at a macroscopic level, and we do not track the numbers of passengers who get on and off. The daily passenger demand at station $i$ is derived from historical data on the daily origin-destination (OD) passenger flow [28], and it equals the sum of the OD passenger flows from station $i$ to the other stations. In addition, since only high-speed trains, namely, G-trains and D-trains, are scheduled and the time horizon is set to $[0, 600]$ minutes in this experiment, we multiply the daily passenger demand that is derived from the historical data at each station by a coefficient of 0.5. We set the passenger demand at the Wulongquan East, Lechang East, and Yingde West stations to 0 since their passenger flows are omitted from the historical data [28]. The passenger demand at each station is listed in Table 5. The rolling stock on the Wuhan-Guangzhou high-speed rail corridor mainly consists of CRH380 or CR400 trains, which are typically composed of 16 or 8 vehicles. To fully utilize the transportation capacity, we assume in these experiments that all additional trains are composed of 16 vehicles and that each additional train has a maximum loading capacity of 800 people. Furthermore, according to the level of each station and the historical data on its passenger demand, the loading capacity of each train at each station is assigned; see Table 5.
We design 20 original trains, namely, 19 G-trains and 1 D-train, in the off-peak period from Wuhan to Guangzhou North in the time horizon of $[0, 600]$ minutes. The supply capacities of these original trains at each station are listed in Table 5. However, these original trains are far from sufficient for satisfying the passenger demand at each station in the peak period. According to the passenger demands and the loading capacities in Table 5, we expect to add five additional trains to these original trains based on equation (1). To provide passengers with more options, at least one D-train must be included among these additional trains, namely, $N^{D}_{\min} = 1$. The minimum arrival and departure headways in this experiment are both set to 4 minutes, namely, $h^{a}_{\min} = 2\delta$ and $h^{d}_{\min} = 2\delta$. The weight coefficients of the total travel times and the difference between the total passenger demand and the total loading capacity are set to $\theta_1 = 0.1$ and $\theta_2 = 0.9$, respectively. To fully utilize the trains' capacities, we set the maximum and minimum attendance rates to $\mu_{\max} = 1.2$ and $\mu_{\min} = 0.9$, respectively.
An experiment is conducted on the MATLAB platform with the CPLEX solver to obtain the optimal temporary operation plan for additional trains in the peak period on the Wuhan-Guangzhou rail corridor. According to the resulting optimal solution, a total of 5 additional trains, with the optimal objective value of 893.1, should be added into the operation plan. Under the optimal objective value, the total travel time is 1212 minutes, and the difference between the total passenger demand and the supply capacity is 319 people. The resulting optimal timetable and stopping plan of the original trains and the additional trains are presented in Figures 12 and 13, in which the original trains and the additional trains are numbered according to their departure times from the origin; namely, the original trains from left to right are numbered G1, G2, ..., G20, and the additional trains from left to right are labeled G21, G22, D23, G24, and G25. The supply capacities of the trains at each station that are derived from this operation plan are listed in Table 5.
According to the resulting optimal solution, all additional trains maintain safe departure and arrival headways at each station of the rail corridor. In addition, they do not disturb the operations of the original trains; see Figures 12 and 13. Since the link travel times of D-trains are longer than those of G-trains, only one D-train is among the additional trains, which facilitates the minimization of the total travel times of all additional trains. In addition, as discussed above, the passenger demands at the Wulongquan East, Lechang East, and Yingde West stations are set to zero in our experiments, and none of the additional trains stop at these three stations, which further decreases the total travel times of the additional trains. According to the results, the stopping plan of the additional trains is mainly influenced by the passenger demand; for example, all five additional trains stop at Changsha station due to its huge passenger demand, while none of the additional trains stop at Hengshan West station. The total passenger demand of this rail corridor in the peak period is 19065 (see Table 5), and the original trains offer a capacity of only 14676. The five additional trains add a capacity of 4071, which satisfies approximately 98.3% of the passenger demand along this rail corridor. Although the operation plan can satisfy most of the passenger demand, there are two stations, namely, Yueyang East and Chenzhou West, to which sufficient capacity is not provided. As a result, a minority of the passengers, approximately 319, must travel to their destinations by taking or transferring to short-trip trains.
Additional Experiments on the Wuhan-Guangzhou High-Speed Rail Corridor.
Several additional experiments are conducted by adopting various parameters on the Wuhan-Guangzhou high-speed rail corridor to further evaluate the performance of our model. Unless stated otherwise, the parameters are the same as the experimental data in Section 4.2.1.
(1) Additional Experiment with Respect to the Attendance Rate Constraints. In this paper, two objective functions on the travel times and the passenger demand are considered, which partially avoids wasting the loading capacity of the additional trains. In addition, better control over the attendance rates of trains can provide better service for passengers and improve the efficiency of the utilization of the transportation resource. To further examine the influence of the attendance rate constraints, an experiment in which the attendance rate is not considered is conducted. According to the optimal solution of this experiment, five additional trains are still added. The total travel time is 1212 minutes, and the difference between the total passenger demand and the supply capacity is 319 people. To demonstrate the difference between this experiment and the previous basic experiment, the timetable and the stopping plan of this experiment are displayed in Figures 14 and 15. Statistical analyses of each additional train that is added in the previous basic experiment and in this experiment are presented in Table 6.
Compared with the previous basic experiment, the number of additional trains and the objective value that are derived from the resulting optimal solution of this experiment do not change. However, differences are identified between these two operation plans, especially in the stopping plans. The loading capacity or the attendance rate can be used to measure the efficiency of a stopping plan: overload leads to poor service, while low load typically results in a waste of available resources. The results demonstrate that the stopping plans of the additional trains in this experiment are unreasonable; see Table 6. For example, the loading capacity of G23 is 1108, while the loading capacity of D22 is only approximately 480. In addition, overload inevitably leads to longer dwelling times; for example, the dwelling time of G23 is 12 minutes longer than that of D22 in this experiment. A comparison of the loading capacities and the attendance rates between these two experiments shows the impact of the attendance rate constraint on the stopping plan; see Figure 16. As shown in Figure 16, the stopping plan in the previous basic experiment is more balanced: the loading capacities of the additional trains are much closer to the maximum capacity of 800, and the attendance rates are much closer to 1. This phenomenon supports the efficacy of imposing the attendance rate constraint to increase the efficiency of the stopping plan and to generate a more reasonable operation plan that balances the loading capacities of the additional trains.
(2) Additional Experiments with Respect to the Weight Coefficients $\theta_1$ and $\theta_2$. In this section, we examine the influence of the weight coefficients $\theta_1$ and $\theta_2$ in the objective function on the performance of the proposed model. We vary $\theta_1$ and $\theta_2$ between 0 and 1, for which $T_{total}$ and $Q_d$ display various tendencies as functions of $\{\theta_1, \theta_2\}$. For instance, when $\{\theta_1, \theta_2\}$ varies from $\{0.4, 0.6\}$ to $\{0.5, 0.5\}$, $T_{total}$ decreases from 381 to 262 minutes, while $Q_d$ increases from 1299 to 2124 people, respectively. The weight coefficients determine the strategy that is applied in decision-making. In the experiment, the weight coefficients significantly influence the number of additional trains in the optimal solution.
Since more additional trains provide a larger capacity to better satisfy the passenger demand while increasing the total travel times, the number of additional trains depends on the parameter pair $\{\theta_1, \theta_2\}$. According to Table 7, when $\theta_1$ varies from 0.2 to 0.3, the number of additional trains decreases from 5 to 4. Meanwhile, the total travel time decreases from 606 to 494 minutes, and the number of unsatisfied passengers increases from 319 to 759 people. Moreover, when $\theta_1$ is set to 0.6, 0.7, 0.8, 0.9, or 1.0, only one additional train can be added into the operation plan, which satisfies the minimum required number of D-trains. When $\{\theta_1, \theta_2\}$ is set to $\{0.4, 0.6\}$ and $\{0.5, 0.5\}$, the unsatisfied passenger demands are 759 and 1299 people, respectively, with 4 additional trains, which demonstrates that the weight coefficients influence not only the number of additional trains but also the stopping plan. Hence, the results of these experiments demonstrate that the weight coefficients significantly affect the operation plan.
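The following toy Python sketch illustrates how a weighted-sum objective $\theta_1 T_{total} + \theta_2 Q_d$ selects among candidate operation plans as the weight pair varies. The candidate plans are hypothetical stand-ins (only the first two rows echo numbers quoted above), and in the actual model the selection is performed by the biobjective ILP solver rather than by enumeration.

```python
# Toy illustration (not the paper's ILP): how the weight pair (theta1,
# theta2) trades off total travel time T_total against unsatisfied
# demand Q_d in a weighted-sum objective.

candidate_plans = [
    # (label, T_total [min], Q_d [unsatisfied passengers])
    ("5 additional trains", 606, 319),
    ("4 additional trains", 494, 759),
    ("1 additional train", 140, 2800),   # assumed values for illustration
]

def weighted_cost(plan, theta1, theta2):
    """theta1 * T_total + theta2 * Q_d (terms assumed comparably scaled)."""
    _, T_total, Q_d = plan
    return theta1 * T_total + theta2 * Q_d

for theta1 in (0.1, 0.3, 0.5, 0.9):
    theta2 = 1.0 - theta1
    best = min(candidate_plans,
               key=lambda p: weighted_cost(p, theta1, theta2))
    print(f"theta = ({theta1:.1f}, {theta2:.1f}) -> {best[0]}")
```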
In addition, from Table 7, the weight coefficients significantly influence the computation times. The computation times of the CPLEX solver are approximately 3,000 s for most cases, except for $\{0.2, 0.8\}$, for which the computation time is 20,000 s. Thus, a heuristic algorithm with a faster computation speed should be designed in our future research.
(3) Additional Experiments with Respect to the Minimum Number of D-Trains $N_D^{min}$. Because two types of high-speed trains that differ in terms of velocity, namely, G-trains and D-trains, are considered in this study, it is necessary to investigate the influence of various values of $N_D^{min}$ on the operation plans. We conduct experiments in which $N_D^{min}$ is set to 1, 2, 3, 4, or 5. The attendance rate is required to be within $[0.7, 1.5]$ in this set of experiments, and the weight coefficients of the total travel times and the passenger demand are set to 0.1 and 0.9, respectively. The experimental results for the various values of $N_D^{min}$ are presented in Table 8.
The results demonstrate that the number of D-trains, $N_D$, derived from the resulting optimal solution is equal to the minimum number of D-trains, $N_D^{min}$, required in the model. The reason is that the longer link travel times of the D-trains would increase the total travel times of the additional trains as the number of D-trains increases. When the value of $N_D^{min}$ equals 0, 1, or 2, the objective values of the unsatisfied passengers are the same. However, we cannot identify feasible solutions when the value of $N_D^{min}$ is 4 or 5. This is because, compared with a G-train travel arc, more travel arcs are incompatible with a D-train travel arc due to its longer travel times.
Thus, if the value of $N_D^{min}$ is large, it is difficult to find a feasible path for a D-train in the space-time network. Additionally, according to Table 8, the computation times increase as the value of $N_D^{min}$ increases, which is expected because more D-trains would cause more conflicts between D-trains and G-trains.
Conclusions
This paper solved the problem of designing an operation plan for additional trains on a high-speed rail corridor. A specialized optimization framework for the design of an operation plan for additional trains is proposed, in which the number of trains, the stopping plan, the train types, and the timetable are jointly optimized. In this paper, we assume that the original trains have priority for rail infrastructure; thus, the operation plan for the original trains is fixed, and additional trains cannot disturb the operation of the original trains. To provide sufficient transport capacity, the objective of minimizing the deviation between the passenger demand and the transport capacity was proposed, which was not considered in previous studies on adding trains. Meanwhile, a conflicting objective of minimizing the total travel times of the additional trains, to minimize the cost for the railway company, was constructed. To obtain the number of additional trains, the train types, the stopping plan, and the timetable for the additional trains simultaneously, several decision variables were introduced, which increase the complexity of the model. By employing a space-time network diagram and determining the space-time characteristics of the operation plan, we transformed the problem of designing the operation plan into a multiple-train path planning problem in a space-time network to increase the efficiency of the modeling. Thus, a biobjective integer linear programming model was constructed. Although we design a primary operation plan, various practical details were considered in this model. For example, headway constraints were imposed to avoid collisions and to avoid disturbing the original trains, and constraints on the attendance rate were imposed to ensure the utilization of the capacity of each train. Two sets of experiments were conducted to evaluate the performance of the method. A small experiment was conducted to evaluate the performance of the proposed approach. In addition, using real data from the Wuhan-Guangzhou rail corridor in China, a set of large-scale experiments was conducted to evaluate the applicability of the proposed method. The experimental results demonstrate that the proposed method can be used to obtain a reasonable primary operation plan for additional trains efficiently.
In future research, we will consider this problem at the microlevel. Various operations in stations, for example, train acceleration, deceleration, and overtaking, as well as station capacity, will be considered. We will also consider the passenger demand at the microlevel, including the demand between origins and destinations (OD). In addition, we will extend the model to a high-speed rail network, which will require additional variables and constraints. Thus, the design of an efficient and intelligent algorithm for solving more complicated models is necessary. The efficient design of a more practical and flexible operation plan for additional trains on the rail network will be our research direction.
Data Availability
Some or all of the data, models, or code generated or used during the study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
"year": 2020,
"sha1": "ce49fc359051ef33facb4db9e273c9767a13d41b",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/jat/2020/3602727.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "7eb60c3cf7f67230ce37901975ff319000b736fc",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
A New Framework for Determination of Excitatory and Inhibitory Conductances Using Somatic Clamp
The interaction between excitation and inhibition is crucial for brain computation. To understand synaptic mechanisms underlying brain function, it is important to separate excitatory and inhibitory inputs to a target neuron. In the traditional method, after applying somatic current or voltage clamp, the excitatory and inhibitory conductances are determined from the synaptic current-voltage (I-V) relation: the slope corresponds to the total conductance and the intercept corresponds to the reversal current. Because of the space clamp effect, the measured conductance in general deviates substantially from the local conductance on the dendrite. Therefore, the interpretation of the conductance measured by the traditional method remains to be clarified. In this work, based on the investigation of an idealized ball-and-stick neuron model and a biologically realistic pyramidal neuron model, we first demonstrate both analytically and numerically that the conductance determined by the traditional method has no clear biological interpretation, owing to its neglect of a nonlinear interaction between the clamp current and the synaptic current across the spatial dendrites. As a consequence, the traditional method can induce an arbitrarily large error of conductance measurement and sometimes even yields an unphysically negative conductance. To circumvent the difficulty of elucidating synaptic impact on neuronal computation using the traditional method, we then propose a framework to determine the effective conductance, which reflects directly the functional impact of synaptic inputs on action potential initiation and thereby on neuronal information processing. Our framework has been further verified in realistic neuron simulations; it greatly improves upon the traditional approach by providing a reliable and accurate assessment of the role of synaptic activity in neuronal computation.
Introduction
The interplay between excitation and inhibition gives rise to rich functions of the brain, for instance, to stabilize and shape neural activities [51,50], to enhance feature selectivity in sensory neurons [41,57,58], and to modulate neural oscillations [8,17]. In the meantime, imbalance between excitation and inhibition can induce various neuropsychiatric diseases such as schizophrenia [59,30]. In order to understand synaptic mechanisms underlying brain function, a fundamental approach is to quantify the pure excitatory and inhibitory components received simultaneously by a target neuron in a neuronal network in the brain. Among electrophysiological recording techniques, somatic current clamp and voltage clamp have become a popular choice to measure excitatory and inhibitory conductances in both in vitro and in vivo studies over the last thirty years [38]. For instance, current clamp has been applied in studies of various brain areas, such as the visual cortex [1,2,23,36,41,42], the barrel cortex [21,53,54], and the prefrontal cortex [19]. Meanwhile, voltage clamp has also been applied in studies of the visual cortex [6,7,26,4], the auditory cortex [60,51,52,58], the prefrontal cortex [44,19], and the somatosensory cortex [12].
To reveal quantitative information about the excitatory and inhibitory conductances, the traditional method of processing data collected under somatic clamp mode is summarized as follows. In this approach, a neuron is viewed as a point neuron with its membrane potential dynamics governed by [13]

$$C\frac{dV}{dt} = -G_L V - G_E (V - \varepsilon_E) - G_I (V - \varepsilon_I) + I_{inj}, \qquad (1)$$

where C is the membrane capacitance, V is the membrane potential measured at the soma, $G_L$, $G_E$, and $G_I$ are the leak, excitatory, and inhibitory conductances, respectively, $\varepsilon_E$ and $\varepsilon_I$ are the excitatory and inhibitory reversal potentials, respectively, and $I_{inj}$ is the externally injected current from the somatic clamp. Here all potentials are relative to the resting potential. Based on the point neuron assumption (1), by clamping the somatic current $I_{inj}$ or voltage V at different levels, one can record the corresponding synaptic current, obtained by measuring the intracellular trace of the somatic membrane potential V (under current clamp mode) or the injected clamp current $I_{inj}$ (under voltage clamp mode), and thereby obtain the linear synaptic current-voltage (I-V) relation. An important assumption in Eq. 1 is that $G_E$ and $G_I$ are independent of $I_{inj}$ in this point neuron. Under this assumption, the excitatory and inhibitory conductances can be solved for from the slope and the intercept of the I-V line by casting $-G_E(V - \varepsilon_E) - G_I(V - \varepsilon_I)$ as $I_{syn}$, which from Eq. 1 can be measured as

$$I_{syn} = C\frac{dV}{dt} + G_L V - I_{inj}, \qquad (2)$$

to obtain $I_{syn} = -kV + b$, where the slope k equals the total conductance, defined as the direct sum of the excitatory and inhibitory conductances, and the intercept b equals the reversal current, defined as the weighted sum of the excitatory and inhibitory conductances, i.e.,

$$k = G_E + G_I, \qquad (3)$$
$$b = G_E \varepsilon_E + G_I \varepsilon_I. \qquad (4)$$

Despite the extensive application of current and voltage clamps to extract excitatory and inhibitory conductances, various important issues related to the validity of the above approach remain to be clarified. As pointed out by theoretical and experimental studies, the voltage distribution across an entire neuron can be highly nonuniform [33,46], and a somatic clamp can only exert limited control of the membrane potential across the dendritic arbor [55,40]. Therefore, it is important to address the crucial question of in which sense a neuron can be viewed as a point as described by Eq. 1. In addition, due to the space clamp effect, the conductance measured using a somatic clamp can significantly deviate from the local synaptic conductance on the dendrite [55] and could sometimes even yield an unphysically negative value [55]. Therefore, it is necessary to carry out a critical assessment of the traditional method of determining conductance.
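As a concrete illustration of the traditional procedure, the following Python sketch fits the I-V line and solves Eqs. 3 and 4 for $G_E$ and $G_I$. The voltage samples and the underlying line are synthetic assumptions used only to make the bookkeeping explicit.

```python
import numpy as np

eps_E, eps_I = 70.0, -10.0   # reversal potentials relative to rest (mV)

# synthetic I-V samples taken at one instant under different clamp levels;
# the underlying line I_syn = -k*V + b is an assumed ground truth
V = np.array([-5.0, 0.0, 5.0, 10.0])       # somatic voltage (mV)
k_true, b_true = 0.03, 0.9
I_syn = -k_true * V + b_true

slope, intercept = np.polyfit(V, I_syn, 1)  # I_syn = slope*V + intercept
k, b = -slope, intercept

# traditional point-neuron identities (Eqs. 3-4):
#   k = G_E + G_I,  b = G_E*eps_E + G_I*eps_I  -> solve the 2x2 system
A = np.array([[1.0, 1.0],
              [eps_E, eps_I]])
G_E, G_I = np.linalg.solve(A, np.array([k, b]))
print(f"G_E = {G_E:.4f}, G_I = {G_I:.4f}")
```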
To address the validity of the point neuron assumption in the traditional method, we begin with the analysis of an idealized passive ball-and-stick model. In the presence of a somatic current injection, namely the clamp current, we demonstrate that the spatially extended ball-and-stick model can asymptotically reduce to a point-neuron model describing the dynamics of the somatic membrane potential. In light of this, we consider the soma rather than the entire neuron as a point and introduce the concept of effective conductance, which is defined by Ohm's law as the ratio of the synaptic current arriving at the soma, $I^{(0)}_{syn}$, to the driving force (the difference between the reversal potential $\varepsilon$ and the somatic membrane potential V) in the presence of either excitatory or inhibitory input on the dendrite, i.e., $G_{eff} = I^{(0)}_{syn}/(\varepsilon - V)$. We emphasize that, in order to distinguish it from the synaptic current measured using current or voltage clamp below, here $I^{(0)}_{syn}$ with the superscript "0" is the synaptic current in the absence of any injected current $I_{inj}$. As will be demonstrated below, our defined effective conductance is a proportional indicator of the local synaptic conductance and reflects directly the functional impact of synaptic inputs on action potential initiation, hence neuronal information coding.
Our theoretical analysis further demonstrates that it is invalid to simply replace the synaptic conductance in the traditional point neuron model (1) with the effective conductance; the subthreshold dynamics of the neuron (1) should instead be corrected to

$$C\frac{dV}{dt} = -G_L V - G^{inj}_E (V - \varepsilon_E) - G^{inj}_I (V - \varepsilon_I) + I_{inj},$$

where, due to the nonlinear interaction of the injected somatic clamp current with the synaptic current from the dendrites, $G^{inj}_E$ and $G^{inj}_I$ depend on the current $I_{inj}$ injected into the soma by the clamp. It is only under the condition $I_{inj} = 0$ that $G^{inj}$ reduces to the effective conductance $G_{eff}$, in contrast to the traditional method, in which the conductance is assumed to be independent of the injected current. Therefore, the conductance determined by the traditional method is close to neither the local conductance on the dendrite nor the effective conductance at the soma. Indeed, it has no clear biological interpretation.
By applying the somatic clamp, one purports to measure the effective conductance rather than the local conductance, because the clamp can control the current and voltage sufficiently well only at the soma and its nearby locations. To overcome the inherent difficulty of determining the effective conductance using the traditional method, we present a new method derived from our analysis. Notwithstanding the nonlinear interaction between the synaptic and injected currents, we can obtain a linear I-V relation by changing the injected clamp current over multiple levels of magnitude. Our analysis further shows that the nonlinear current interaction causes the slope of the I-V line to deviate greatly from the sum of the two conductances (Eq. 3); however, the intercept of the I-V line remains a good approximation to the reversal current. Therefore, the effective excitatory and inhibitory conductances can be solved from the intercepts (Eq. 4) by varying the excitatory or inhibitory reversal potentials to different levels. Finally, we verify our proposed method through numerical simulations of the ball-and-stick model and of a biologically realistic pyramidal neuron model with complex dendritic morphology and broad ionic channel distribution. In general, our method greatly improves upon the traditional approach in current or voltage clamp by providing a more reliable and accurate assessment of synaptic impact on neuronal computation.
Methods
The ball-and-stick neuron model

We consider an idealized passive ball-and-stick neuron model whose isotropic spherical soma is connected to an unbranched cylindrical dendrite of finite length and diameter. The spatiotemporal dynamics of the membrane potential v(x, t) along the dendritic cable is governed by [49,13]

$$c\frac{\partial v}{\partial t} = \frac{d}{4 r_a}\frac{\partial^2 v}{\partial x^2} - g_L v - g_E (v - \varepsilon_E) - g_I (v - \varepsilon_I), \qquad (6)$$

where c is the membrane capacitance density, $g_L$ is the leak conductance density, $g_E$ and $g_I$ are the excitatory and inhibitory conductance densities, respectively, $\varepsilon_E$ and $\varepsilon_I$ are the excitatory and inhibitory reversal potentials, respectively, $I_{inj}$ is the externally injected current (entering through the somatic boundary condition below), d is the dendritic diameter, and $r_a$ is the axial resistivity. Here, all potentials are measured relative to the resting potential. When excitatory and inhibitory inputs are elicited at dendritic sites, the synaptic conductance densities take the form

$$g_q(x, t) = \sum_{i=1}^{M_q} \sum_{j} f^{ij}_q \, u_q(t - t^{ij}_q)\, \delta(x - x^i_q),$$

where q = E, I, and $M_E$ ($M_I$) is the number of dendritic sites for excitatory (inhibitory) inputs. For an individual synaptic input of type q, $f^{ij}_q$ is the input strength of the j-th input at the i-th location $x^i_q$ with arrival time $t^{ij}_q$. Here $u_q(t)$ is the unitary conductance density pulse modeled as $u_q(t) = N_q (e^{-t/\sigma_{qd}} - e^{-t/\sigma_{qr}})\,\Theta(t)$, where $N_q$ is a normalization constant, $\Theta(t)$ is the Heaviside function, and $\sigma_{qr}$ and $\sigma_{qd}$ are the rise and decay time constants of the individual synaptic conductance, respectively [13].
The assumption that one end of the dendrite is sealed yields

$$\left.\frac{\partial v}{\partial x}\right|_{x=l} = 0, \qquad (7)$$

where l is the dendritic length. At the other end, which connects to the soma, the law of current conservation gives rise to

$$\frac{\pi d^2}{4 r_a}\left.\frac{\partial v}{\partial x}\right|_{x=0} + I_{inj} = S\left(c\frac{\partial v}{\partial t} + g_L v\right)\bigg|_{x=0}, \qquad (8)$$

where S is the somatic surface area. Eqs. 7 and 8 constitute the boundary conditions of the cable model (6). Before the arrival of synaptic inputs, the neuron stays at the resting state, with the initial condition set as v(x, 0) = 0.
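For readers who wish to experiment with the model, the following self-contained Python sketch integrates a dimensionless version of the cable equation (6) with the sealed-end condition (7) and the somatic current-conservation condition (8) by explicit finite differences. All parameter values and the square-pulse synaptic drive are illustrative assumptions, not the calibration used in the paper.

```python
import numpy as np

# Dimensionless toy parameters (illustrative, not the paper's values)
l, d, ra = 1.0, 0.01, 1.0     # dendrite length, diameter, axial resistivity
c, gL = 1.0, 0.1              # membrane capacitance and leak densities
S = 0.01                      # somatic surface area
eps_E, x_E, f_E = 70.0, 0.7, 0.05   # excitatory reversal, site, strength

N = 101
x = np.linspace(0.0, l, N)
dx = x[1] - x[0]
dt = 0.005                    # satisfies the explicit stability bound
D = d / (4.0 * ra)            # cable coefficient in Eq. 6

v = np.zeros(N)               # potential relative to rest, v(x, 0) = 0
site = np.argmin(np.abs(x - x_E))
V_trace = []                  # somatic voltage V(t) = v(0, t)

for step in range(int(50.0 / dt)):
    t = step * dt
    # synaptic conductance density: square pulse at one node (delta-like)
    g_E = np.zeros(N)
    if 1.0 <= t <= 2.0:
        g_E[site] = f_E / dx
    # interior second derivative plus sealed end (Eq. 7 via a ghost node)
    v_xx = np.zeros(N)
    v_xx[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2
    v_xx[-1] = 2.0 * (v[-2] - v[-1]) / dx**2
    dvdt = (D * v_xx - gL * v - g_E * (v - eps_E)) / c
    # somatic node: current conservation (Eq. 8), here with I_inj = 0
    axial = (np.pi * d**2 / (4.0 * ra)) * (v[1] - v[0]) / dx
    dvdt[0] = (axial - S * gL * v[0]) / (S * c)
    v = v + dt * dvdt
    V_trace.append(v[0])
```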
The realistic neuron model
The realistic pyramidal neuron model is adapted from our previous studies of dendritic integration [20,27,29] (see Ref. [20] for details). The morphology of the reconstructed pyramidal neuron, which contains 200 compartments, is obtained from the Duke-Southampton Archive of neuronal morphology [9]. The passive cable properties and density distribution of active conductances in the model neuron are based on published experimental data for hippocampal and cortical pyramidal neurons [47,35,24,37,32,34,3,45,39]. The model also contains AMPA, NMDA, GABA A and GABA B receptors with kinetic properties derived from Refs. [14,15,16]. We use the NEURON software Version 7.3 [11] to simulate the model with time step 0.1 ms.
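As a minimal illustration of this kind of setup in NEURON's Python interface, one can build a toy two-section passive cell (not the reconstructed 200-compartment model with its full channel complement), attach a dendritic synapse and a somatic current clamp, and record the somatic voltage. All parameter values here are illustrative assumptions.

```python
from neuron import h
h.load_file("stdrun.hoc")

# Toy passive soma-dendrite cell (illustrative parameters only)
soma = h.Section(name="soma")
dend = h.Section(name="dend")
dend.connect(soma(1))

soma.L = soma.diam = 20                # microns
dend.L, dend.diam, dend.nseg = 600, 1, 101

for sec in (soma, dend):
    sec.Ra = 100                       # axial resistivity (ohm*cm)
    sec.insert("pas")
    for seg in sec:
        seg.pas.g = 5e-5               # leak conductance (S/cm^2)
        seg.pas.e = -65                # leak reversal (mV)

# synaptic input on the dendrite, 420 um from the soma (illustrative)
syn = h.ExpSyn(dend(420.0 / dend.L))
syn.tau, syn.e = 2.0, 0.0
stim = h.NetStim()
stim.number, stim.start = 1, 5.0
nc = h.NetCon(stim, syn)
nc.weight[0] = 0.001                   # uS

# somatic clamp current (assumed amplitude)
iclamp = h.IClamp(soma(0.5))
iclamp.delay, iclamp.dur, iclamp.amp = 0, 50, 0.05   # nA

t_vec = h.Vector().record(h._ref_t)
v_vec = h.Vector().record(soma(0.5)._ref_v)

h.finitialize(-65)
h.continuerun(50)
```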
Geometrical reduction
The traditional method for extracting excitatory and inhibitory conductances is based on the crucial assumption that a neuron can be considered as a point with its membrane potential dynamics described by Eq. 1. However, because of the highly nonuniform distribution of membrane potential across a neuron [33,46], the entire neuron is not electrically compact and thus cannot be modeled as a point. On the other hand, in experiments, it has been shown that the membrane potential dynamics of the soma of a neuron (relative to the resting potential) can be well captured by a point leaky integrator [10,5],

$$C\frac{dV}{dt} = -G_L V + I_{inj}, \qquad (9)$$

in response to the externally injected current $I_{inj}$ into the soma. Therefore, one should model the soma rather than the entire neuron as a point. However, there is a lack of theoretical demonstration of how to obtain a point characterization of the soma from a spatially extended neuron model in general. For a large class of neurons, the tree-like passive dendrites can be shown to be mathematically equivalent to a single cylindrical cable [43]. To demonstrate the validity of the point characterization (Eq. 9), without loss of generality, we therefore start with the ball-and-stick neuron introduced in the Materials and Methods section. Given a current pulse input $I_\delta$ at the soma, the ball-and-stick model possesses the following response kernel (Green's function) that captures the somatic membrane potential response $V(t) \equiv v(0, t)$ [49,27,29]:

$$G(t) = \sum_{n=0}^{\infty} H_n e^{-k_n t}, \qquad (10)$$

where the constant coefficients $H_n$ and rate constants $k_n$ are determined by the geometry and biophysics of the passive neuron. Asymptotically, the response kernel can be well approximated by its leading order with a single time constant, i.e.,

$$G(t) \approx H_0 e^{-k_0 t}, \qquad (11)$$

where $k_0 = g_L/c$ and $H_0 = [\gamma/((\gamma\lambda + 1)\pi d)]\,(4 r_a/(c^2 d))^{-1/2}$, with $\gamma = (\pi d^2/(2S))\,(r_a d)^{-1/2}$ and $\lambda = l\sqrt{4 r_a/d}$. Note that Eq. 11 is precisely the response kernel for the point-neuron model (Eq. 9), with the following relations linking the parameters in the point-neuron model to those of the ball-and-stick neuron: $C = 1/H_0$ and $G_L = k_0/H_0$. For any time-dependent somatic current input, the somatic response of the ball-and-stick neuron can then be described by the convolution of the response kernel (11) of the point-neuron model with the input, thus reducing the somatic membrane potential dynamics of the ball-and-stick neuron with somatic input to the equivalent dynamics of the point neuron (9). Our asymptotic analysis has been further verified numerically, demonstrating that the point-neuron characterization is sufficiently accurate to represent the somatic membrane potential dynamics of the ball-and-stick neuron.
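A brief numerical sketch of this reduction: convolving the leading-order kernel $H_0 e^{-k_0 t}$ (Eq. 11) with an arbitrary injected current reproduces the somatic voltage of the equivalent point neuron with $C = 1/H_0$ and $G_L = k_0/H_0$. The kernel parameters and the current step below are illustrative assumptions.

```python
import numpy as np

dt = 0.01                          # ms
t = np.arange(0.0, 100.0, dt)
H0, k0 = 0.05, 0.1                 # assumed kernel amplitude and rate

kernel = H0 * np.exp(-k0 * t)      # leading-order Green's function (Eq. 11)
I_inj = np.where((t >= 10) & (t < 40), 1.0, 0.0)   # a current step

# V(t) = (kernel * I_inj)(t); discrete convolution scaled by dt
V = np.convolve(kernel, I_inj)[: t.size] * dt

# equivalent point-neuron parameters: C = 1/H0, G_L = k0/H0
C, G_L = 1.0 / H0, k0 / H0
print(f"C = {C:.2f}, G_L = {G_L:.2f}, V near step end = {V[int(35/dt)]:.3f}")
```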
Effective conductance
By considering the soma rather than the entire neuron as an electrically compact point, the concept of effective conductance naturally arises from Ohm's law, which casts, say, the effective excitatory conductance in the form

$$G^{eff}_E(t) = \frac{I^{(0)}_{syn}(t)}{\varepsilon_E - V(t)}, \qquad (12)$$

where $I^{(0)}_{syn}$ is the synaptic current arriving at the soma in the absence of any injected current to the soma, $\varepsilon_E$ is the excitatory reversal potential relative to the resting potential, and V(t) is the somatic membrane potential change in response to an excitatory synaptic input from the dendrite. The synaptic current arriving at the soma is determined from the point-neuron model by

$$I^{(0)}_{syn} = C\frac{dV}{dt} + G_L V, \qquad (13)$$

which will be referred to as the effective synaptic current below (note that $I_{inj} = 0$ in Eq. 13). A similar definition holds for the effective inhibitory conductance. The effective synaptic current measured at the soma can be significantly different from the synaptic current measured at the synapse localized on the dendrite. This arises because the local synaptic current induced at the synapse is filtered by the dendritic cable properties and further modified by interactions with active ion channels along the dendrites before reaching the soma [47,56,33,22,31].
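In practice, Eqs. 12 and 13 translate into a few lines of analysis code: given a somatic voltage trace recorded with $I_{inj} = 0$ and the point-neuron parameters C and $G_L$, the effective conductance follows from the measured effective synaptic current divided by the driving force. The helper below is our own sketch; the function name and the guard against a vanishing driving force are assumptions, not part of the paper.

```python
import numpy as np

def effective_conductance(t, V, C, G_L, eps, eps_tol=1.0):
    """Sketch of Eqs. 12-13: effective conductance from a somatic trace.

    t, V   : time (ms) and somatic voltage relative to rest (mV), I_inj = 0
    C, G_L : point-neuron capacitance and leak conductance
    eps    : synaptic reversal potential relative to rest (mV)
    """
    dVdt = np.gradient(V, t)
    I_syn0 = C * dVdt + G_L * V        # effective synaptic current (Eq. 13)
    drive = eps - V                    # driving force
    # avoid division blow-up when V approaches the reversal potential
    drive = np.where(np.abs(drive) < eps_tol, np.nan, drive)
    return I_syn0 / drive              # G_eff (Eq. 12)
```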
Conceptually, it is evident that the effective conductance at the soma and local conductance on the dendrite are rather different. As demonstrated by our numerical simulation of the ball-and-stick neuron, there is indeed a significant quantitative difference between them. Shown in Figures 1A and 1D are the numerically measured effective conductances, which are significantly smaller than the local conductances upon an excitatory or inhibitory Poisson input of rate 150 Hz on the dendrite at the location 420 µm away from the soma. The effective excitatory and inhibitory conductances are found to be strongly correlated with the corresponding local conductances with correlation coefficient ρ = 0.95 and ρ = 0.99 respectively, as shown in Figures 1B and 1E. This correlation suggests that the effective conductance is indeed a good indicator to reflect the synaptic activity on the dendrites. It is reasonable to expect that the effective conductance decreases gradually with the increase of distance between the input location and the soma. This is confirmed in Figures 1C and 1F. Therefore, a strong input at a site on a distal dendrite and a weak input at a site on the proximal dendrite may induce a membrane potential change of similar magnitude at the soma. That is, local conductances on the dendrite can differ greatly for different inputs while the corresponding effective conductances could exert a similar impact on the somatic membrane potential dynamics. In this sense, the effective conductance reflects directly the functional impact of synaptic inputs originated from the dendrite on somatic membrane potential, action potential generation, hence neuronal information coding. In short, the effective conductance plays a central role in quantifying the synaptic influence on neuronal computation.
Interaction between clamp current and synaptic current
As shown in the Geometrical reduction subsection, for an injected current to the soma of the ball-and-stick neuron, the point-neuron model (Eq. 9) is quantitatively accurate in describing the somatic membrane potential change. However, contrary to the conventional belief, the point-neuron model (Eq. 1) becomes invalid in the presence of both a clamp current at the soma and a synaptic current from the dendrite. As revealed in our analysis below, because of the nonlinearity in the interaction between the somatic clamp current and the synaptic current, the traditional point-neuron model (Eq. 1) can no longer provide a conceptually correct and quantitatively accurate description of the true somatic voltage dynamics.
Figure 1. (A) Traces of the effective and local excitatory synaptic conductances measured at the soma and at a synapse on the dendrite of the ball-and-stick neuron, respectively. An excitatory Poisson input with rate 150 Hz is applied at a dendritic site 420 µm away from the soma. (B) Strong correlation between the effective and local synaptic conductances. Each blue dot is sampled at one time point from the corresponding time series in A. The red straight line is the linear fit, with correlation coefficient ρ = 0.95. (C) The dependence of the ratio of the effective to the local excitatory conductance on the distance between the input location and the soma. The ratio of the effective to the local conductance is defined as the slope of the linear fit in B. (D-F) are for the inhibitory case. An inhibitory Poisson input with rate 150 Hz is applied on the dendrite at a location 420 µm away from the soma.

In the point-neuron model (Eq. 1), all the synaptic currents as well as the injected current are assumed to be summed linearly at the soma. However, the synaptic current is voltage-dependent, and the injected current at the soma can change the membrane potential on the dendrite, thus resulting in nonlinear interactions between the injected current and the synaptic current. Therefore, these two currents can no longer be summed directly at the soma. We now present a detailed analysis of the origin of the nonlinear interaction using the ball-and-stick neuron model. For the sake of illustration, we discuss the case of an excitatory input. At time t = 0, given an excitatory input on the dendrite at $x = x_E$ and a constant injected current $I_{inj}$ at the soma x = 0, the membrane potential obeys the cable model (Eq. 6) with its boundary conditions. In the physiological regime, an individual synaptic input in general effects only a small change of somatic membrane potential. This gives rise to an asymptotic expansion of the somatic response $V(t) \equiv v(0, t)$ with respect to the input strength $f_E$ [27,29],

$$V(t) = V_0(t) + f_E V_1(t) + O(f_E^2). \qquad (14)$$

Using the Green's function method, the zeroth- and first-order solutions at the soma can be cast into closed form; for example, the zeroth-order solution is $V_0 = a_0 \cdot I_{inj}$ with $a_0 = \Gamma(0, 0, t) * \Theta(t)/(\pi d)$. Here $\Gamma(x, y, t)$ is the response kernel (Green's function) of the cable equation, whose explicit expression has been derived in Refs. [27,29], $\Theta(t)$ is the Heaviside function, "$*$" denotes temporal convolution, and "$\cdot$" denotes multiplication. Using the synaptic current as measured by $I_{syn} = C\frac{dV}{dt} + G_L V - I_{inj}$, together with Eq. 14, we can readily derive an expression for the synaptic current at the soma to first order in $f_E$ (Eq. 15), where the prime stands for the derivative with respect to time. In the derivation, the equality $CV_0' + G_L V_0 - I_{inj} = 0$ is used, because $V_0$ is the zeroth-order membrane potential change, which responds to the injected current $I_{inj}$ only (see Eq. 9). The first term in Eq. 15 describes the interaction between the injected current and the current from the dendrite, and the second term describes the effective synaptic current $I^{(0)}_{syn}$. The resulting expression for the conductance (Eq. 16) makes it evident that $G^{inj}_E$ depends on the injected current $I_{inj}$; the superscript of $G^{inj}_E$ emphasizes the fact that this excitatory conductance is measured in the presence of an injected current. In the conventional approach, one assumes that $G_E$ in Eq. 1 is not affected by the injected current and asserts that the injected current merely induces a somatic membrane potential change $\Delta V$ to yield the corresponding synaptic current $\Delta I_{syn} = G_E(\varepsilon_E - \Delta V)$ with an $I_{inj}$-independent $G_E$.
Therefore, the effect of the interaction between the two currents is not accounted for. It can be clearly seen from Eq. 16 that the value of the excitatory conductance $G^{inj}_E$ is modified by the injected current, in contrast to the case of a purely excitatory synaptic current. Similarly, the injected current will interact with an inhibitory synaptic input, resulting in a modified value of the inhibitory conductance $G^{inj}_I$ in comparison with the value for a purely inhibitory synaptic current. It is only under the condition $I_{inj} = 0$ that the effective conductance is recovered. We next solve the ball-and-stick model numerically to further confirm the validity of the asymptotic results above. Our numerical results again affirm the important role of the interaction between the injected current and the synaptic current in the determination of the conductance value. Given an individual excitatory synaptic input at a dendritic site away from the soma, together with an injected constant current at the soma of the ball-and-stick neuron, we can numerically obtain the somatic membrane potential V of the ball-and-stick neuron and measure its synaptic current based on the point neuron model as $I_{syn} = C\frac{dV}{dt} + G_L V - I_{inj}$. By varying the strength of the injected constant current while maintaining the strength of the local synaptic input on the dendritic site, we can obtain synaptic currents $I_{syn}$ of various amplitudes. An example is displayed in Figure 2A for particular realizations of this procedure. As predicted by our theoretical analysis (Eq. 15), the linear dependence of the measured peak amplitude of the synaptic current $I_{syn}$ on the injected constant current is confirmed in Figure 2B. We are now ready to determine the excitatory conductance $G^{inj}_E$. Contrary to the common belief that the synaptic conductance measured at the soma is independent of the injected current, here the excitatory conductance strongly depends on the injected current. As shown in Figures 2C-2D, the difference between the peak excitatory conductance $G^{inj}_E$ in the presence of the injected constant current and $G^{eff}_E$ in the absence of the injected current can range from 7% to a substantial 36%. In fact, the difference can become arbitrarily large as the magnitude of the injected current further increases. Similarly, as shown in Figures 2E-2H, given an individual inhibitory synaptic input at a dendritic location away from the soma together with an injected constant current at the soma of the ball-and-stick neuron, the inhibitory conductance $G^{inj}_I$ also strongly depends on the injected current. The difference between the peak inhibitory conductance $G^{inj}_I$ in the presence of the injected constant current and $G^{eff}_I$ in the absence of the injected current can be rather significant, ranging from 17% to 153%. As shown in Figure 2H, a negative value of the inhibitory conductance can even be observed under a certain magnitude of the injected current. Figures 2D and 2H demonstrate that the nonlinear dependence of the conductance $G^{inj}$ on the injected constant current can be accurately predicted by our analysis (Eq. 16).
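The linear dependence of the measured synaptic current on the clamp current (Eq. 15) can be mimicked with a purely synthetic toy: take $I_{syn}(t) = \alpha(t) I_{inj} + I^{(0)}_{syn}(t)$, with assumed stand-ins for the interaction term $\alpha(t)$ and the effective current, and fit the value at a fixed time against $I_{inj}$; the recovered intercept is then the $I_{inj} = 0$ (effective) current, in the spirit of Figure 2B. All functional forms below are illustrative assumptions.

```python
import numpy as np

t = np.arange(0.0, 50.0, 0.1)
alpha = 0.02 * np.exp(-t / 10.0)                   # assumed interaction term
I0 = 0.5 * (np.exp(-t / 8.0) - np.exp(-t / 2.0))   # assumed effective current

tpk = np.argmax(I0)                                 # peak time of I0
levels = np.linspace(-0.5, 0.5, 5)                  # assumed clamp currents
I_syn_at_peak = np.array([alpha[tpk] * I + I0[tpk] for I in levels])

# exactly linear in I_inj at the fixed sampling time (cf. Figure 2B)
slope, intercept = np.polyfit(levels, I_syn_at_peak, 1)
print(f"slope = {slope:.4f}, I_inj = 0 value = {intercept:.4f}")
```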
Because of the nonlinear dependence of $G^{inj}$ on the injected current, the synaptic current arriving at the soma and the injected current can no longer be simply summed linearly in the point neuron model. As a consequence, the point neuron model should be reformulated as

$$C\frac{dV}{dt} = -G_L V - G^{inj}_E (V - \varepsilon_E) - G^{inj}_I (V - \varepsilon_I) + I_{inj}, \qquad (17)$$

in which the conductances $G^{inj}_E$ and $G^{inj}_I$ are functions of $I_{inj}$. It is worthwhile to point out that, even in the presence of only one type of synaptic input, the conductance determined from the traditional point neuron model already requires a significant correction due to the interaction between the clamp current and the synaptic current.
Theoretical analysis of conductance measurement
In general, a neuron receives a mixture of excitatory and inhibitory inputs from neighbouring neurons. As mentioned in the Introduction section, the traditional way of extracting the excitatory and inhibitory conductances is to apply the current or voltage clamp technique. After injecting current at the soma at different magnitudes, one can measure the corresponding synaptic current and somatic voltage and thereby obtain a linear I-V relation. By assuming that the neuron is an electrically compact point (Eq. 1) and that the excitatory and inhibitory conductances $G_E$ and $G_I$ are independent of the injected current, $G_E$ and $G_I$ can then be determined by solving two equations (Eqs. 3 and 4) involving the slope and the intercept of the I-V line. However, as shown in our analysis above, the interaction between the somatic clamp current and the synaptic current arriving at the soma renders the point neuron assumption (1) invalid for measuring the conductance even in the presence of a single type of synaptic input, hence the failure of the traditional method to measure the effective conductance. In the following, we demonstrate that, even if the magnitude of the injected clamp current is sufficiently small, the traditional method can still induce an arbitrarily large error of conductance measurement and sometimes even leads to an unphysically negative conductance.
For the ball-and-stick neuron, in the presence of a pair of excitatory and inhibitory synaptic inputs at the dendritic sites $x = x_E$ and $x = x_I$, respectively, the dynamics of its membrane potential is governed by the cable model (Eq. 6) when the clamp current $I_{inj}$ is applied at the soma. As mentioned above, within the physiological range the strength of each individual input, $f_E$ and $f_I$, is usually small, so we can expand the somatic membrane potential $V(t) \equiv v(0, t)$ as an asymptotic series with respect to the input strengths $f_E$ and $f_I$. The Green's function method yields the zeroth- and first-order solutions, in which the first term describes the interaction between the synaptic current and the injected current and the second term describes the effective synaptic current in the absence of the injected current. The same decomposition holds for the corresponding somatic voltage to first order in $f_q$ (q = E, I) (Eq. 18): the first term on the right-hand side arises from the interaction between the synaptic current and the injected current, and the second term comes from the effective synaptic current in the absence of the injected current. Because both the membrane potential V and the synaptic current $I_{syn}$ are linearly related to the injected current $I_{inj}$, simple linear algebra yields the linear dependence of the synaptic current on the membrane potential, i.e., $I_{syn} = -kV + b$, which holds for arbitrary $I_{inj}$. To first order in $f_q$ (q = E, I), the slope k takes the form of a weighted sum of $G^{eff}_E$ and $G^{eff}_I$ with time- and location-dependent prefactors (Eq. 19), whereas the intercept becomes

$$b \approx G^{eff}_E \varepsilon_E + G^{eff}_I \varepsilon_I. \qquad (20)$$

Note that the slope here is not equal to the total conductance $G^{eff}_E + G^{eff}_I$ as in the traditional method. The prefactors of $G^{eff}_E$ and $G^{eff}_I$ in the slope expression (Eq. 19) can be much smaller than unity [28], thus leading to the failure of the traditional method (using Eqs. 3 and 4) in determining the effective conductances. Apparently, the conductance determined by the traditional method has no clear biological interpretation. It is worthwhile to stress that these prefactors are independent of the magnitude of the injected clamp current. As a result, the measurement error of the effective conductance by the traditional method cannot be eliminated even if the magnitude of the clamp current is sufficiently small. This will be further confirmed by the numerical results below. In general, the prefactors depend on time and on the input locations; thus, they can rarely be determined in advance without knowledge of the synaptic location and the corresponding response kernel to that location, and the situation becomes even more complicated when a neuron receives numerous inputs across its dendrites. Importantly, the relation of the intercept to the reversal current can now provide a basis for determining the effective excitatory and inhibitory conductances. Because there are two unknowns, $G^{eff}_E$ and $G^{eff}_I$, we need to obtain at least one more intercept value. This can be achieved by varying one of the reversal potentials. For example, we can change the inhibitory reversal potential from $\varepsilon_I$ to $\tilde{\varepsilon}_I$ to obtain a second intercept equation,

$$\tilde{b} \approx G^{eff}_E \varepsilon_E + G^{eff}_I \tilde{\varepsilon}_I; \qquad (21)$$

thereby, the effective excitatory and inhibitory conductances can be obtained from Eqs. 20 and 21. In physiological experiments, to change a reversal potential, one needs to effect a change of the extracellular or intracellular fluid environment. From now on, we refer to this new method as the intercept method and to the traditional method as the slope-and-intercept method.
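Computationally, the intercept method reduces to a 2x2 linear solve at each time point. The sketch below assumes the two intercepts b and b-tilde have already been read off from I-V lines measured with $\varepsilon_I$ and a shifted $\tilde{\varepsilon}_I$; here the intercepts are synthesized from known "true" conductances so the recovery can be checked, and Eqs. 20 and 21 then give $G^{eff}_E$ and $G^{eff}_I$.

```python
import numpy as np

eps_E = 70.0                        # mV, relative to rest
eps_I, eps_I_tilde = -10.0, -20.0   # native and shifted inhibitory reversals

# synthesize the two intercepts from assumed "true" conductances
G_E_true, G_I_true = 0.02, 0.05
b = G_E_true * eps_E + G_I_true * eps_I              # Eq. 20
b_tilde = G_E_true * eps_E + G_I_true * eps_I_tilde  # Eq. 21

# solve the 2x2 intercept system for the effective conductances
A = np.array([[eps_E, eps_I],
              [eps_E, eps_I_tilde]])
G_E_eff, G_I_eff = np.linalg.solve(A, np.array([b, b_tilde]))
print(f"G_E_eff = {G_E_eff:.3f}, G_I_eff = {G_I_eff:.3f}")  # recovers truth
```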
Numerical verification of the intercept method
We next perform numerical simulations of the ball-and-stick neuron to demonstrate the validity of the intercept method by contrasting its error with that of the traditional slope-and-intercept method. Given an individual excitatory pulse input at a dendritic location away from the soma, but without an injected current at the soma, we can numerically record the corresponding EPSP at the soma and, using Eq. 12, determine the value of the effective excitatory conductance pulse from the point-neuron model. A similar procedure can be carried out for the effective inhibitory conductance pulse at the soma in response to an inhibitory pulse input at a dendritic location away from the soma. A pair of numerically measured effective excitatory and inhibitory conductance pulses determined this way, as displayed in Figures 3A and 4A, will be used below as the reference values against which we evaluate the performance of the intercept method and the slope-and-intercept method. Note that these reference conductances are true effective conductances without the distortion induced by the clamp current.
Because the intercept method requires tuning the reversal potential to a different value at least once, it is important to verify that the effective excitatory and inhibitory conductances are independent of the change of the synaptic reversal potential values. This is indeed the case for the ball-and-stick neuron: the independence of the effective conductance from the synaptic reversal potential can be shown analytically from our asymptotic analysis by setting $I_{inj} = 0$ in Eq. 16 to first order in $f_q$ (q = E, I). In our simulations, the change of $G^{eff}_E$ is less than 5% in value when the excitatory reversal potential varies from 20 mV to 120 mV, while the change of $G^{eff}_I$ is less than 0.7% in value when the inhibitory reversal potential varies from 0 mV to −20 mV. Below, we change only the inhibitory reversal potential to determine the effective conductance using the intercept method.
To evaluate the performance of the intercept method in comparison with the slope-and-intercept method, we now apply an injected constant clamp current at the soma and simultaneously elicit the same excitatory and inhibitory pulse inputs as the reference ones, i.e., inputs at the same dendritic locations with the same strengths. By measuring the membrane potential at the soma, we can use Eq. 2 to obtain a set of corresponding synaptic current traces in response to injected currents of different amplitudes. At each moment of time, we observe that the linear relation between the synaptic current and the membrane potential persists as usual. We then determine the values of the excitatory and inhibitory conductance pulses from the linear I-V relation moment by moment. At each time point, using the slope-and-intercept method, we obtain one pair of values of $G_E$ and $G_I$. Meanwhile, by tuning the inhibitory reversal potential from −10 mV to −20 mV and repeating the above procedure, we obtain a second linear I-V relation; using the intercept method, we can then determine the pair of values of $G_E$ and $G_I$. By repeating the same procedure at different time points, we determine the temporal profiles of the conductance pulses by both methods for comparison of the conductances measured in the presence of the current clamp with the reference conductance pulses measured in its absence. As demonstrated in Figure 3A, the values of the conductance pulses determined by the intercept method are clearly much more accurate than those reconstructed by the slope-and-intercept method, in particular for the inhibitory case. The effective conductance pulses measured using the intercept method have a relatively small error, with a maximum error of 2% for both the excitatory and the inhibitory conductance, in contrast with those determined using the slope-and-intercept method, which yields an error as large as 6% for the excitatory conductance and 35% for the inhibitory conductance. We have also numerically confirmed that the large error of the slope-and-intercept method cannot be significantly reduced even when the magnitude of the injected clamp current decreases to 1% of the original one.
In the brain, neurons receive synaptic inputs dynamically all the time, which requires us to address how to determine conductances under this condition. We consider the case of excitatory and inhibitory Poisson-train inputs at two locations on the dendrite. We first give either an excitatory or an inhibitory Poisson input alone, with a duration of 1000 ms, to measure the reference effective conductances from Eq. 12 in the absence of the current clamp. In the presence of an injected clamp current at the soma with various magnitudes, we then give simultaneously the excitatory and inhibitory Poisson inputs, which are identical to those in the reference case. By recording the synaptic current and the membrane potential, we can obtain a linear I-V relation at each moment of time. By again changing the inhibitory reversal potential from −10 mV to −20 mV and following the same procedure as above, we obtain another linear I-V relation for the corresponding time. Using either the intercept method or the slope-and-intercept method, we can determine the corresponding excitatory and inhibitory conductances from the I-V relations. We now compare the values of the conductances measured by the two methods in the presence of the clamp current with those of the reference conductances measured in the absence of the clamp current. From Figure 3B, it is evident that the performance of the intercept method is significantly superior to that of the slope-and-intercept method. The effective conductances measured using the intercept method have a relatively small error, with a time-averaged error of 20% for the excitatory conductance and 10% for the inhibitory conductance (the origin of this error is discussed in the Discussion section). In contrast, those determined using the slope-and-intercept method yield a time-averaged error as large as 25% for the excitatory conductance and 36% for the inhibitory conductance. To study the spatial effect, we next scan the input locations on the dendrite for a pair of excitatory and inhibitory inputs of constant synaptic conductances. For both excitatory and inhibitory inputs, the input site is scanned from 50 µm to 600 µm away from the soma. For each pair of input locations, we first measure the reference effective conductances in the absence of the injected clamp current. In the presence of the clamp current, we then determine the conductances using the intercept method in contrast to the slope-and-intercept method. As shown in Figures 3C-3F, the relative error of both methods increases as the stimulus location moves away from the soma toward the distal dendrite. However, the intercept method produces relatively reliable results even for distal inputs, with an error of less than 10%, whereas the error of the slope-and-intercept method can increase drastically to 60%, giving rise to rather unreliable conductance values. We further note that the intercept method is also valid for determining conductance when a voltage clamp is applied to the soma of the neuron; Figure S1 illustrates an example of the voltage clamp case.
Because a biological neuron possesses, in general, a complicated dendritic morphology with rich active ionic channels, it is necessary to further validate our method using a realistic active neuron, as was done with the passive neuron model above. To address this, we deploy a biologically realistic pyramidal neuron model with tree-like dendritic morphology and broadly distributed active ionic channels. The data collection procedure is the same as that used in the case of the ball-and-stick neuron. Once more, we first numerically determine the values for a pair of transient conductance pulses as the reference true values when excitatory and inhibitory pulse inputs are given alone at locations on the dendritic trunk without an injected current at the soma. Under the somatic current clamp mode, as shown in Figure 4A, the conductances measured using the intercept method have a relatively small error compared with the reference ones, with a maximum value of 11% for the excitatory conductance and 6% for the inhibitory conductance. In contrast, those determined using the slope-and-intercept method yield an error as large as 33% for the excitatory conductance and 72% for the inhibitory conductance. To model the in vivo situation, we distribute 15 excitatory inputs and 5 inhibitory inputs across the entire dendritic tree of the pyramidal neuron. At each synapse location, the arrival times of the inputs are randomly set between 0 ms and 1000 ms with an input rate of 100 Hz. We use identical inputs to measure the time evolution of the reference effective conductances in the absence of the clamp current and that of the conductances in the presence of the clamp current, and we compare the values of the conductances measured by the two methods with those of the reference conductances. As shown in Figure 4B, the excitatory and inhibitory conductances determined by our intercept method are, in general, close to the true values of the effective conductances except at the moments when the neuron generates action potentials. In contrast, the conductance determined by the traditional slope-and-intercept method deviates greatly from the true one, again in particular for the inhibitory case. In the subthreshold regime, the conductances measured by the intercept method have a relatively small error, with a time-averaged error of 26% for the excitatory conductance and 13% for the inhibitory conductance. For comparison, those determined using the slope-and-intercept method yield a time-averaged error as large as 51% for the excitatory conductance and 102% for the inhibitory conductance. In our numerical simulation, the true value of the inhibitory conductance is significantly greater than that of the excitatory conductance; however, the inhibitory conductance estimated by the slope-and-intercept method is even smaller than the estimated excitatory conductance. Finally, as an illustration of the drastic failure of the traditional method, we stress that the inhibitory conductance can take an unphysical, negative value, as can be clearly observed in Figure 4B.
In addition, as in the case of the ball-and-stick neuron, the measurement error induced by both methods depends on the input location. By scanning the location across the dendrite for a pair of constant excitatory and inhibitory conductance inputs, we observe that the intercept method produces reasonably good results, with an error of less than 10% even for distal inputs, whereas the error of the slope-and-intercept method grows rapidly to 100% as the input location moves away from the soma toward the distal dendrites. In our simulation results, the conductance measured by the slope-and-intercept method is always smaller than the true effective conductance; therefore, an error of 100% corresponds to the case in which the measured conductance vanishes while the true one is nonzero. The inhibitory conductance value measured by the slope-and-intercept method can again become negative at distal dendrites; Figure 4F illustrates this severe problem with the traditional method. The intercept method is also verified to yield correct conductance values when a voltage clamp is applied to the soma of the realistic neuron, whereas the traditional slope-and-intercept method can produce negative conductance values, notably for the inhibitory conductance (see Figure S2).
Figure 3 (caption fragment). In A and B, excitatory and inhibitory inputs are given at 420 µm and 300 µm away from the soma, respectively. (C-F) Spatial dependence of the relative error of the excitatory and inhibitory conductance measurements. Here, the locations of a pair of excitatory and inhibitory inputs of constant conductances are scanned across the dendrites; the location distance is measured from the soma. (C-D) show the error of the excitatory conductance measured by the intercept method and the slope-and-intercept method, respectively, with the same color bar indicating the percentage error. (E-F) show the error of the inhibitory conductance measured by the intercept method and the slope-and-intercept method, respectively, with the same color bar indicating the percentage error. The excitatory and inhibitory reversal potentials relative to the resting potential are ε_E = 70 mV and ε_I = −10 mV, respectively. The inhibitory reversal potential is changed once to ε_I = −20 mV, while maintaining the excitatory reversal potential unchanged, to obtain two I-V relations for the application of the intercept method.

When a neuron generates an action potential, its membrane biologically changes from passive dynamics to active dynamics. The ionic current induced by the activation of voltage-gated ion channels can dominate the effective synaptic currents induced by local synaptic inputs from the dendrite. As a consequence, the determination of the effective synaptic current based on Eq. 13, and thereby of the corresponding effective synaptic conductance, can have significant errors. This underlies the substantial difference, observed in Figures 4B and S2B, between the conductance extracted by the intercept method and the reference conductance during action potential generation. As pointed out by previous theoretical studies [18], it is rather difficult to determine conductance from the I-V relation when there is an action potential.
To overcome this difficulty, in principle, one can pharmacologically block the active sodium channels to suppress action potential generation. Thereby, the value of the conductance measured by the intercept method approaches that of the reference conductance, which reflects the true strength of the effective conductance induced by synaptic inputs on the dendrites.
Discussion
In order to extract excitatory and inhibitory conductances, in many previous works a neuron with its complex dendritic arbor is assumed to be a single electrically compact compartment. Thereby, the somatic voltage clamp is deemed to uniformly control the membrane potential throughout the whole neuron. However, recent theoretical and experimental studies [55,40] have shown that there is a space clamp effect, which limits the control that the voltage clamp exerts on the membrane potential across the dendritic arbor.
The membrane potential at distal synapses can deviate greatly from the holding potential, and the value of excitatory and inhibitory conductances measured in voltage clamp mode can be significantly distorted.
In our work, we have made no attempt to eliminate or attenuate the space clamp effect. Instead, we have explored the possibility of viewing the soma, rather than the whole neuron, as an electrically compact point (Eq. 17), so as to deploy the perfectly clamped voltage at the soma to measure the effective excitatory and inhibitory conductances there, which incorporate the effects of the active ion channels along the dendrites and the filtering properties of the dendrite. Conceptually, the effective conductance is a functionally important quantifier because it is strongly correlated with the local postsynaptic conductances on the dendrites and characterizes, more directly than the local conductance, the functional impact of synaptic inputs on the subthreshold dynamics and the spike trigger mechanism.
Using the traditional slope-and-intercept method, we have further shown that the measured conductance has no clear biological interpretation. It is close to neither the local synaptic conductance on the dendrite nor the effective synaptic conductance. In particular, we emphasize that the value of the conductance determined by the traditional method can be unphysically negative. Based on our theoretical analysis of the ball-and-stick neuron model, we have revealed that the failure of the slope-and-intercept method is caused by the interaction between the synaptic current and the clamp current. This interaction has not been addressed in previous studies. From our analysis, we have proposed a novel intercept method to measure the effective conductance accurately. We have verified the intercept method in both the numerical simulation of the ball-and-stick neuron model and the realistic pyramidal neuron model. Our numerical results show that, in general, the intercept method greatly improves upon the traditional slope-and-intercept method in current or voltage clamp by providing more reliable and accurate values of the effective excitatory and inhibitory conductances than the traditional method.
We note that the value of the inhibitory conductance measured by the traditional slope-and-intercept method is much more distorted than that of the excitatory conductance. This arises because the ratio of the excitatory to the inhibitory conductance measurement errors depends on the excitatory reversal potential (70 mV relative to the resting potential) and the inhibitory reversal potential (−10 mV relative to the resting potential). We can analytically demonstrate that the prefactors of $G^{eff}_E$ and $G^{eff}_I$ in the slope expression (Eq. 19) cause the measurement errors in the slope-and-intercept method to be amplified by a factor of $\varepsilon_E$ for $G_I$ and by a factor of $\varepsilon_I$ for $G_E$. Therefore, the strength of inhibitory inputs can be significantly underestimated compared with that of excitatory inputs. One cannot simply compare the amplitudes of $G_E$ and $G_I$ measured by the traditional slope-and-intercept method to characterize network states, such as the balanced state, based on the relative amplitudes of $G_E$ and $G_I$.
[Figure caption fragment] Inset in A is the schematic diagram of the recording configuration. (B) is for the case of multiple inputs at 15 excitatory and 5 inhibitory locations distributed across the entire dendritic tree, with a rate of 100 Hz at each site. (C-F) Spatial dependence of the relative error of the excitatory and inhibitory conductance measurements. Here, the locations of a pair of excitatory and inhibitory inputs with constant conductances are scanned across the dendrites; the location distance is measured from the soma. (C-D) are the errors of the excitatory conductance measured by the intercept method and the slope-and-intercept method, respectively, with the same color bar indicating the percentage error. (E-F) are the errors of the inhibitory conductance measured by the intercept method and the slope-and-intercept method, respectively, with the same color bar indicating the percentage error. In F, the large white area, which corresponds to negative conductance values, demonstrates the drastic failure of the traditional method. The excitatory and inhibitory reversal potentials relative to the resting potential are ε_E = 70 mV and ε_I = −10 mV, respectively. The inhibitory reversal potential is changed once to ε_I = −20 mV, while the excitatory reversal potential is kept unchanged, to obtain the two I-V relations needed for the intercept method.

In our work, the soma of the neuron is characterized by the leaky integrator (Eq. 9). If the synaptic input is strong enough to initiate action potentials, then, in order to measure the effective conductances, one can either pharmacologically block the active channels related to action potentials or switch the point-neuron characterization from the integrate-and-fire type to another, for instance, the exponential integrate-and-fire neuron.
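For concreteness, one standard form of the exponential integrate-and-fire model mentioned above is the following (our rendering, with the leak written relative to rest as elsewhere in this paper; Δ_T denotes the spike slope factor and V_T the soft threshold, both parameters we introduce here for illustration):

$$ C\frac{dV}{dt} = -G_L V + G_L \Delta_T \exp\!\left(\frac{V - V_T}{\Delta_T}\right) - G_E^{\mathrm{eff}}(V - \varepsilon_E) - G_I^{\mathrm{eff}}(V - \varepsilon_I). $$

The exponential term supplies the spike upstroke, so the subthreshold conductance analysis can proceed without pharmacologically silencing the active channels.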
In our intercept method, the conductance measurement error cannot be fully eliminated, despite the above demonstration that our method produces rather good estimates of conductance in a biologically realistic neuron model. First, our analysis is accurate only to first order, and higher-order corrections may also contribute to the conductance value. Second, our analysis is based on the point-neuron model at the soma (Eq. 17), whereas the dendritic integration of synaptic inputs can lead to a more complicated form of the point-neuron model [61]. These are important issues for future studies. A precise understanding of the synaptic physiology of neurons using accurate effective conductances is important for investigating synaptic mechanisms of sensory processing, the origination of neuronal oscillations, and the balanced nature of excitation and inhibition in the brain.

Figure S1. Determination of effective conductance in the ball-and-stick neuron with voltage clamp at the soma. (A-B) The effective excitatory conductance G_E^eff (solid red dots) and inhibitory conductance G_I^eff (solid blue dots) determined by the intercept method are rather close to the true values of the effective excitatory (red line) and inhibitory (blue line) conductances, whereas the excitatory conductance G_E (open red circles) and the inhibitory conductance G_I (open blue circles) determined by the slope-and-intercept method deviate from the true values. As with the current clamp, the deviation is particularly significant for the inhibitory case. (A) is for the case of paired transient excitatory and inhibitory synaptic pulse inputs. Inset in A is the schematic diagram of the recording configuration. (B) is for the case of two excitatory and inhibitory Poisson inputs, both with a rate of 150 Hz. In A and B, excitatory and inhibitory inputs are given at 420 µm and 300 µm away from the soma, respectively. (C-F) Spatial dependence of the relative error of the excitatory and inhibitory conductance measurements. Here, the location of a pair of excitatory and inhibitory inputs with constant conductances is scanned across the dendrites; the location distance is measured from the soma. (C-D) are the errors of the excitatory conductance measured by the intercept method and the slope-and-intercept method, respectively, with the same color bar indicating the percentage error. (E-F) are the errors of the inhibitory conductance measured by the intercept method and the slope-and-intercept method, respectively, with the same color bar indicating the percentage error. The excitatory and inhibitory reversal potentials relative to the resting potential are ε_E = 70 mV and ε_I = −10 mV, respectively. The inhibitory reversal potential is changed once to ε_I = −20 mV, while the excitatory reversal potential is maintained, to obtain the two I-V relations needed for the intercept method.

[Figure caption fragment] The effective excitatory conductance G_E^eff (solid red dots) and inhibitory conductance G_I^eff (solid blue dots) determined by the intercept method are relatively close to the true values of the effective excitatory (red line) and inhibitory (blue line) conductances, whereas the excitatory conductance G_E (open red circles) and the inhibitory conductance G_I (open blue circles) measured by the slope-and-intercept method deviate greatly from the true values. The deviation is again particularly significant for the inhibitory case. The traditional method can even produce a negative value for the conductance, again demonstrating its deficiency. (A) is for the case of paired transient excitatory and inhibitory pulse inputs located on the dendrites 350 µm and 300 µm away from the soma. Inset in A is the schematic diagram of the recording configuration. (B) is for the case of multiple inputs at 15 excitatory and 5 inhibitory locations distributed across the entire dendritic tree, with a rate of 100 Hz at each site. (C-F) Spatial dependence of the relative error of the excitatory and inhibitory conductance measurements. Here, the locations of a pair of excitatory and inhibitory inputs with constant conductances are scanned across the dendrites; the location distance is measured from the soma. (C-D) are the errors of the excitatory conductance measured by the intercept method and the slope-and-intercept method, respectively, with a common color bar indicating the percentage error. (E-F) are the errors of the inhibitory conductance measured by the intercept method and the slope-and-intercept method, respectively, with a shared color bar indicating the percentage error. In F, the large white area, which corresponds to negative conductance values, again illustrates the failure of the traditional method. The excitatory and inhibitory reversal potentials relative to the resting potential are ε_E = 70 mV and ε_I = −10 mV, respectively. The inhibitory reversal potential is changed once to ε_I = −20 mV, while the excitatory reversal potential is kept the same, to obtain the two I-V relations needed for the intercept method. | 2017-10-14T00:45:38.000Z | 2017-10-14T00:00:00.000 | {
"year": 2017,
"sha1": "0afbb56a21149d4f9dc207bacef27e6084abac77",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "0afbb56a21149d4f9dc207bacef27e6084abac77",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Physics",
"Biology"
]
} |
222177365 | pes2o/s2orc | v3-fos-license | Thermometry of intermediate level nuclear waste containers in multiple environmental conditions
Intermediate level nuclear waste must be stored until it is safe for permanent disposal. Temperature monitoring of waste packages is important to the nuclear decommissioning industry to support management of each package. Phosphor thermometry and thermal imaging have been used to monitor the temperature of intermediate level waste containers within the expected range of environmental storage conditions at the Sellafield Ltd site: temperatures from 10 °C to 25 °C and relative humidities from 60 %rh to 90 %rh. The feasibility of determining internal temperature from external surface temperature measurement in the required range of environmental conditions has been demonstrated.
Scope of Work
Nuclear fission is a major part of the energy infrastructure of the UK. However, the decommissioning of nuclear facilities requires the safe and sustainable storage of spent fuel and other radioactive by-products. One form of this waste, intermediate level waste (ILW), mostly comprises nuclear reactor fuel element cladding and components, and radioactive liquid effluent sludges, both of which are immobilised in grout. Other wastes, such as graphite and various scrap metal components, are stored in ILW waste containers without grout encapsulation [1]. Typically at the Sellafield Ltd site ILW is stored in cylindrical steel containers and described as waste packages when filled. The container has a pair of dewatering tubes, a sintered gauze layer above the waste, and a meshed vent on the container lid.
Recent laboratory-based measurements of an ILW container demonstrated a correlation between the temperature measured by internal contact thermometers and the external vent radiance temperature [2]. To understand the challenges of applying these laboratory measurements to the ventilated engineered stores in use, this experimental design was replicated in an environmental chamber to simulate the varying temperatures and high humidity typically experienced. The two methods employed by the temperature group at the National Physical Laboratory (NPL) to determine the vent temperature were thermal imaging and phosphor thermometry [3].
Experimental Setup
This section details the ILW container instrumentation setup and the testing environment. The measurements of the container vent temperature using a thermal imager and phosphor thermometer are described.
Container configuration
A schematic of the ILW container can be seen in Fig. 1; the experimental conditions are identical to those in [2]. To supply heat to the container, two heaters were placed at the bottom of the container above the insulation layer, which comprised both wooden blocks and multi-layer insulation. The heaters were each connected to a benchtop controller that regulated the power to the heaters to obtain a stable temperature set-point.
The ILW container was set up (see Fig. 2) within a large environmental chamber situated at the Office for Product Safety and Standards [4]. This facility permits the control of temperature from −25 °C to 70 °C and humidity up to 95 %rh, and is regulated using a vented air-flow system; the system results in significant movement of air through the chamber.
Three class A thin film Pt100 platinum resistance thermometers (PRTs) [5] were each potted in a 40 mm long cylinder (4 mm in diameter). These PRTs were fixed to the internal dewatering tubes (gauzed and hollow tubes, seen in Fig. 1) within the ILW container to evaluate the internal bulk temperature.
A range of parameters was investigated in this measurement campaign, including measurement set-points at which the chamber humidity and thermal imager angle were varied. The ILW container temperature was varied from 15 °C to 45 °C; the environmental chamber temperature from 10 °C to 25 °C; the environmental chamber humidity between 60 %rh and 90 %rh; and the angle between the thermal imager's line of sight and the vent normal between 50° and 70°.
Phosphor thermometer
The phosphor thermometer was positioned behind a set of lenses that allowed remote operation of the probe; the same phosphor coating as reported in [2] was used. The probe consists of a combined illumination and measurement system that excites the phosphor coating with light and measures the decay time of the phosphor emission. This instrument was traceably calibrated across the temperature range from 1.6 °C to 53.6 °C through a decay-time comparison between a phosphor-coated stainless-steel disc (25 mm diameter and 5 mm thick) and a calibrated N-type thermocouple embedded below the coated surface.
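The decay-time pipeline can be sketched as follows, assuming a single-exponential phosphor decay and a monotonic calibration of decay time against temperature; the function names and all numerical values below are illustrative assumptions, not NPL's actual implementation or calibration data.

```python
import numpy as np

def decay_time(t, signal):
    """Estimate the 1/e decay time of a (noiseless, single-exponential)
    phosphor emission trace by a log-linear least-squares fit."""
    slope, _ = np.polyfit(t, np.log(signal), 1)
    return -1.0 / slope

# Hypothetical calibration: decay times (ms) measured at known reference
# temperatures (deg C) on the coated disc with the embedded thermocouple.
cal_T   = np.array([1.6, 15.0, 30.0, 45.0, 53.6])
cal_tau = np.array([4.8, 4.2, 3.5, 2.9, 2.6])   # illustrative values only

def temperature_from_tau(tau):
    # np.interp needs ascending abscissae, so reverse the decreasing taus.
    return np.interp(tau, cal_tau[::-1], cal_T[::-1])

t = np.linspace(0.0, 10.0, 200)                    # ms
trace = np.exp(-t / 3.2)                           # synthetic emission decay
print(temperature_from_tau(decay_time(t, trace)))  # ~38 deg C for tau = 3.2 ms
```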
Thermal imager
A long-wave infrared (LWIR) (7.5 µm to 13.5 µm) FLIR Tau 2 microbolometer thermal imager was used. To minimise the effect of varying environmental temperatures on the instrument, it was mounted within a water-regulated brass enclosure (jacket) that was set to 20°C.
The validation of apparent radiance temperature against ITS-90 was demonstrated through the calibration of the detector gain [6] against a cavity reference source [7]. While the instrument was mounted in its water-regulated enclosure, a two-point non-uniformity correction was measured at 20 °C and 30 °C, flooding the field of view; this narrow range was used to increase the responsivity to the anticipated application radiance levels. A measurement at 23.5 °C showed a small standard deviation of the digital level, measured within a region of interest identical in size to that used for the vent measurements, verifying a higher image contrast than is typically achieved in the off-the-shelf configuration.
Following the non-uniformity correction, the instrument response was compared against the same blackbody reference source from 5 °C to 55 °C, and a third-order polynomial was used to describe the relationship between ITS-90 temperature and detector digital level. The size-of-source effect of the instrument was characterised, and the necessary correction from the calibration aperture to the size of the vent was applied throughout the vent measurements [8].
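The polynomial stage of this calibration chain can be sketched as below. The digital levels and reference temperatures are invented for illustration; this is not the actual FLIR Tau 2 processing or the calibration data of this study.

```python
import numpy as np

# Hypothetical blackbody calibration points: reference temperature (deg C)
# versus mean detector digital level after the non-uniformity correction.
T_ref = np.array([5, 15, 25, 35, 45, 55], dtype=float)
DL    = np.array([6300, 6750, 7230, 7740, 8280, 8850], dtype=float)

# Third-order polynomial mapping digital level to ITS-90 temperature,
# as described in section 2.3.
coeffs = np.polyfit(DL, T_ref, 3)

def dl_to_temperature(dl, c=coeffs):
    """Convert a measured digital level to a radiance temperature (deg C)."""
    return np.polyval(c, dl)

print(dl_to_temperature(7000.0))   # interpolated radiance temperature
```

The size-of-source correction is then applied to the result as an aperture-dependent adjustment, as discussed in section 5.3.1.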
As discussed in section 5.3, the thermal imager measurements indicated that the calibration changed during the measurement campaign; the thermal imager measured temperatures should therefore be considered an indication of relative temperature rather than an absolute temperature measurement.
Results
This section details the results of the measurements undertaken within the environmental chamber, including both the thermal imager and the phosphor thermometer measurements. The results cover a variety of chamber temperatures and humidities, as well as a range of ILW container temperatures. Fig. 3 shows the temperature of the vent measured by the phosphor thermometer against the average container internal temperature, as measured by the PRTs. The three chamber temperature data sets are distinctly stratified: the temperature measured by the phosphor thermometer is correlated with the chamber temperature. Fig. 4 shows the average temperature of the phosphor-coated vents, as measured by the thermal imager, against the average container internal temperature, as measured by the PRTs. The linear fit is based on the average of the temperatures measured for all phosphor-coated vents per measurement set-point. As with the phosphor thermometer results, the three chamber temperature data sets are distinctly stratified: the temperature measured by the thermal imager is correlated with the chamber temperature. It is of note that many of the data points in Fig. 4 indicate a measured temperature lower than the environmental chamber temperature (this is discussed in section 5). Fig. 5 shows the difference between the vent surface temperature measured by the phosphor thermometer and the average surface temperature of the coated vents, as measured by thermal imaging. The measurement set-points with the largest difference in measured temperature occurred when the chamber was set to nominally 10 °C; the set-points with the smallest difference occurred when the chamber temperature was set to nominally 25 °C.
Uncertainty Budget
Following an analysis of the uncertainty components present for each of the three thermometry techniques (PRT, phosphor and thermal imaging), a budget was constructed for all measurements according to [9]. The majority of components considered were identical to those described in [2], the differences are detailed in this section.
Contact thermometry
The components evaluated were the same as those in [2]. The expanded uncertainty of measurement for contact thermometry was 0.27°C (k = 2).
Phosphor thermometry
The components evaluated were the same as those in [2]. The contact thermocouple used for the instrument calibration enabled a lower calibration uncertainty of 0.06 °C (k = 2) to be achieved. Additionally, the standard deviation during measurement was less than 0.05 °C. These two improvements in temperature metrology enabled an order-of-magnitude reduction of the measurement uncertainty, to 0.11 °C (k = 2).
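The budgets in this section combine the individual standard uncertainty components in quadrature and apply a coverage factor, following [9]. A minimal sketch with invented component names and values (the actual component list follows [2] and is not reproduced here):

```python
import numpy as np

# Illustrative standard uncertainty components (deg C, k = 1).
components = {
    "calibration":     0.04,
    "repeatability":   0.025,
    "reference_probe": 0.02,
}

# Root-sum-square combination, then expansion with coverage factor k = 2.
u_combined = np.sqrt(sum(u**2 for u in components.values()))
U_expanded = 2.0 * u_combined
print(round(U_expanded, 2))   # expanded uncertainty (k = 2), ~0.10 deg C here
```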
Thermal imaging
When considering the components used previously, this experimental setup has a comparable expanded uncertainty of 1.31 °C (k = 2). There were increases in the sensor stability and non-uniform emissivity components due to the varying environmental conditions, but these were dominated by the calibration component. It should be noted that the effect of non-unity emissivity has not been included within this budget.

Figure 5: The difference between the vent surface temperature measured by the phosphor thermometer and the average surface temperature of the phosphor-coated vents measured by thermal imaging (T_phosphor − T_TI), plotted against the average temperature measured by the PRTs inside the ILW container. The points are differentiated by the three chamber set-point temperatures (10 °C, 18 °C and 25 °C).
Discussion
This section discusses the results detailed in section 3 and the uncertainty budgets detailed in section 4.
Observations from results
Figs. 3 and 4 show similar linear correlations. Both measurement techniques show the effect of the temperature of the local environment (in this case, the environmental chamber) on the surface temperature. The stratification of the measurement results by chamber temperature shows the effect of a change in chamber temperature on the vent surface temperature; therefore, it is important that any measurements, whether recorded by thermal imaging or phosphor thermometry, are interpreted in reference to the local environmental temperature.
The difference between the vent temperature measurements and the internal measurements indicates that, unlike the previous results presented in [2], the vent temperature measurements are a poor representation of the internal bulk temperature of the ILW container unless the environmental temperature is taken into account.
As noted in section 3, Fig. 4 presents measurements of the vent temperature that are lower than the environmental chamber temperature. For this experimental setup, it is not possible for the vent surface temperature to have been lower than the chamber temperature, because the ILW container was fully situated within the chamber; this clearly indicates the presence of a systematic error in the thermal imaging measurements of the vent surface temperature. Fig. 6 shows the difference between the temperature measured by the PRT positioned at the bottom of the ILW container (nearest the heat source) and the PRT positioned at the top of the ILW container, plotted against the average temperature measured by all three PRTs. The difference has the greatest magnitude when the environmental chamber temperature was set to 10 °C; that is, the difference between the PRT-measured temperatures is greatest when the difference between the internal container temperature and the environmental temperature is greatest. This phenomenon is a strong indication that the external surface temperature of the vent depends not only on the internal temperature of the container, but also on the temperature gradient between the internal container temperature and the external environment temperature, and on the coupling between the environment and the container surface.
Secondary influences on thermal imaging results
Before addressing the systematic error in the thermal imaging, possible secondary effects on the temperatures measured by the thermal imager are considered, namely humidity and angle.
Humidity
Tab. 1 shows details of the set-points that tested the sensitivity of the thermal imager measurements to a change in the relative humidity within the environmental chamber, RH_chamb. The four pairs of measurement set-points are equivalent in terms of nominal ILW container temperature, chamber temperature, and thermal imager angle. To test the sensitivity, RH_chamb was varied from nominally 60 %rh to nominally 90 %rh for each pair. The difference between the measured average vent temperatures for the first set-point pair shown in Tab. 1 is 0.09 °C; this is significantly less than the uncertainty of the measurements. The equivalent differences for the other set-point pairs are 0.15 °C, 0.38 °C, and 0.16 °C; these are also within the uncertainty of the measurements. These results indicate that changes in the relative humidity of the environment had a negligible influence on the temperatures measured by thermal imaging.
Angle
Tab. 2 shows details of the set-points that tested the sensitivity of the thermal imager measurements to a change in the thermal imager angle relative to the vent surface. The three pairs of measurement set-points are equivalent in terms of nominal ILW container temperature, chamber temperature, and chamber relative humidity. To test the sensitivity, the angle was varied from nominally 50° to nominally 70° for each pair. The difference between the measured average vent temperatures for the first set-point pair shown in Tab. 2 is 0.17 °C; this is significantly less than the uncertainty of the measurements. The equivalent differences for the other set-point pairs are 0.76 °C and 0.85 °C; these are also within the uncertainty of the measurements. There is, therefore, no angle dependence in these measured temperatures within the uncertainty of the measurements. A more general relationship between viewing angle and measured temperature has been established in [2]. Fig. 5 shows the difference between the vent temperature measured by the phosphor thermometer and the average vent temperature of the coated vents, as measured by thermal imaging; this difference is significantly larger than the independent measurement uncertainty of each technique, detailed in section 4. An investigation was therefore undertaken to determine the source of this measurement discrepancy. The aspects that may have caused the large difference between the thermal imaging radiance temperature and the phosphor thermometer temperature were: the size-of-source effect of the vent diameter on the radiance temperature, insufficient decoupling of the thermal imager housing temperature from the environment, and the emissivity of the surface. These are described below, and an in-situ validation of the thermal imager temperature using the phosphor-thermometer-measured vent temperatures is then proposed.
Size-of-source effect
The apparent temperature of a thermal radiation source depends on the apparent size of the source as viewed by a thermal radiation measuring device (in this case, a thermal imager); this is known as the size-of-source effect (SSE) [8]. If the magnitude of this effect is well understood for a given measurement scenario, a correction to the radiance temperature can be applied based on the apparent size of the radiation source. The calibration of the thermal imager utilised a 40 mm diameter aperture, whereas during the chamber measurements the projected diameter of each vent was 8 mm. The determination of the necessary correction suggests a 2 °C offset in the thermal imager measured temperature that is nominally invariant with temperature. This correction does not account for the measured difference between the two techniques detailed in Fig. 5, which is as large as 10 °C.
Surface emissivity
The thermal imager measures apparent radiance temperature, which does not account for the non-unity value of the surface emissivity. The difference between the apparent radiance temperature and the true surface temperature of a surface with emissivity less than 1 varies as a function of the surface temperature, but it is equal to 0 °C when the surface is in thermal equilibrium with the environment. Assuming the phosphor thermometer is representative of the surface temperature, Fig. 5 does not describe this relationship, as the fits show that the temperature difference does not equal 0 °C when the chamber and container internal temperatures are equal. Therefore, the source of the difference in Fig. 5 is due to effects other than emissivity (see Appendix for more detail).
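This argument can be made quantitative: for a grey surface, the radiance reaching the imager is ε·L(T_surf) + (1 − ε)·L(T_amb), so the apparent radiance temperature equals the true surface temperature whenever T_surf = T_amb, regardless of ε. The sketch below uses Planck's law at a single assumed effective wavelength of 10 µm, a deliberate simplification of the 7.5 µm to 13.5 µm band response of the actual instrument.

```python
import numpy as np

C2 = 1.4388e-2     # second radiation constant (m K)
LAM = 10e-6        # assumed effective wavelength for the LWIR band (m)

def planck(T_K):
    """Spectral radiance at LAM, up to a constant factor (full Planck form)."""
    return 1.0 / (np.exp(C2 / (LAM * T_K)) - 1.0)

def apparent_temperature(T_surf_C, T_amb_C, emissivity):
    """Radiance temperature of a grey surface reflecting ambient radiation."""
    L = emissivity * planck(T_surf_C + 273.15) + \
        (1.0 - emissivity) * planck(T_amb_C + 273.15)
    # Invert the Planck expression to recover the apparent temperature.
    return C2 / (LAM * np.log(1.0 / L + 1.0)) - 273.15

print(apparent_temperature(40.0, 20.0, 0.9))  # below 40 deg C: emissivity bias
print(apparent_temperature(20.0, 20.0, 0.9))  # exactly 20 deg C: equilibrium
```

The second call demonstrates the point made above: at thermal equilibrium the emissivity bias vanishes, so the stratification in Fig. 5 cannot be an emissivity effect.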
Thermal imager thermal-regulator
A laboratory-based evaluation of the suitability of the enclosure used to regulate the thermal imager housing temperature was undertaken. During the environmental chamber measurements, the jacket was maintained at 20 °C. In the laboratory evaluation, the thermal imager was set to measure a reference blackbody source increasing from 5 °C to 55 °C. This was repeated three times with the jacket temperature set at 18 °C, 20 °C, and 22 °C. The results show no observable dependence of the measured radiance temperature on the jacket temperature: the difference was within the instrumentation measurement uncertainty. Fig. 7 shows the correlation between digital level and temperature, with the fit used throughout this investigation to convert the thermal imager digital level output to a corresponding surface temperature, as described in section 2.3, together with the three sets of additional measurements described above. All three additional data sets are self-consistent, indicating that a change of jacket temperature within ±2 °C does not have a significant effect on the digital level recorded.
An offset between the original data set and the additional measurement results can be seen. It is of note that, between the original measurements and the additional measurements, the thermal imager was removed from the jacket and subsequently remounted. It is understood that this remounting changed the thermal contact pathways between the imager and the jacket, thus changing the initial thermal dissipation characteristics. The effect of imager remounting within the jacket on the recorded digital level is important to the measurement of the ILW container vent temperature, because the thermal imager was removed from the jacket after the initial calibration measurements and remounted in the jacket to measure the vent temperature in the environmental chamber. The effect of a change in thermal contact between the thermal imager housing and an outside heat sink has been further investigated and shown to affect the focal plane array (FPA) digital level of the thermal imager, as indicated by the results shown in Fig. 7.
In-situ calibration
Following the identification of the phenomenon described in section 5.3.3, the viability of performing an in-situ calibration of the thermal imaging results using the results of the phosphor thermometry was investigated.
The in-situ calibration is based on a fit of the digital level and the estimated FPA temperature to the phosphor-measured vent temperature. The fit has the functional form of Eqn. (1), where DL is the digital level, T_FPA is the FPA temperature, and T_phos is the vent temperature measured by phosphor thermometry. For the data set recorded when the environmental chamber was set to 10 °C, T_FPA was assumed to be 18 °C; for a chamber temperature of 18 °C, T_FPA was assumed to be 19 °C; and for a chamber temperature of 25 °C, T_FPA was assumed to be 20.5 °C. These FPA temperature assumptions are based on experience with the thermal imager in the specified environments. Both T_phos and DL are averages of the measurements per experimental set-point. The values for α, β, and γ are −230.2 °C, 0.037 °C, and −1.165, respectively. The residual of the thermal imaging temperature resulting from the fit of Eqn. (1), relative to a one-to-one relationship with the phosphor-measured vent temperature, is 1.18 °C.
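Since Eqn. (1) is not reproduced above, the sketch below assumes the simplest form consistent with the listed variables and the stated units of α (°C), β (°C per digital level) and γ (dimensionless), namely T_phos = α + β·DL + γ·T_FPA. This functional form, and all data values, are assumptions for illustration only; they are not necessarily the form or data used in the study.

```python
import numpy as np

# Hypothetical per-set-point averages.
DL     = np.array([6900.0, 7150.0, 7400.0, 7650.0])  # mean digital level
T_FPA  = np.array([18.0, 18.0, 19.0, 20.5])          # assumed FPA temp (deg C)
T_phos = np.array([15.2, 24.1, 32.8, 41.5])          # phosphor vent temp (deg C)

# Assumed linear model: T_phos = alpha + beta*DL + gamma*T_FPA.
A = np.column_stack([np.ones_like(DL), DL, T_FPA])
(alpha, beta, gamma), *_ = np.linalg.lstsq(A, T_phos, rcond=None)

# Residual of the fitted thermal-imaging temperature against the
# phosphor-measured vent temperature (one-to-one comparison).
T_fit = A @ np.array([alpha, beta, gamma])
rms_residual = np.sqrt(np.mean((T_fit - T_phos) ** 2))
print(alpha, beta, gamma, rms_residual)
```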
Coupling to ILW container internal temperature
By using the in-situ calibration, it is possible to determine the coupling between the measured vent temperature and the internal temperature measured by the PRTs. Fig. 8 shows the measured vent temperature plotted against the internal temperature, as measured by the PRT positioned at the bottom of the ILW container, for each of the environmental chamber temperature set-points.
The fit shown in the figure was determined from the functional form of Eqn. (2), where T_cont is the PRT-measured internal temperature, T_surf is the measured surface temperature of the vent, and T_chamb is the environmental chamber temperature. α and β were determined to be 3.963 and −9.421 °C, respectively.
This plot presents the measured data from the experiment against Eqn. (2) and demonstrates the suitability of this function for determining the internal container temperature. While it is possible to infer the bulk internal temperature of the ILW container from the measured vent temperature for this experimental setup, the fit parameters will vary depending on the container geometry and material, the contents of the container, and the environmental conditions, in particular the environment temperature.
Conclusion
The results of the experiments carried out within the environmental chamber show the capability to measure the surface temperature of the vent on the ILW container lid and to correlate this with the internal temperature of the container over a range of environmental conditions. It is particularly challenging to infer the internal temperature from the surface temperature for double-skinned containers; approximating this property from the vent temperature therefore provides insight into the rates of corrosion, hydrogen generation, thermal variations and radiogenic heating. However, to measure the vent temperature reliably, the ambient temperature needs to be known and taken into account.
The results from both the phosphor thermometry and thermal imaging measurement techniques show that the vent temperature is sensitive to the temperature of the local environment. The stratification of the measured temperature by environmental chamber temperature indicates, as expected, that the vent temperature depends on the environment temperature. Further work would be required to generalise these results, for example by establishing the possible correlation between container surface and environment temperatures.
A significant contributing source of the discrepancy between the vent temperatures measured by thermal imaging and by phosphor thermometry appears to have been a non-repeatable systematic error caused by removing and remounting the thermal imager in its temperature-controlled jacket. The non-repeatable thermal contact between the imager and the jacket has been shown to affect the output digital level while a stable scene is observed. Further work is required to determine the precise cause of this effect within the thermal imaging system's pipeline.
By performing an in-situ calibration of the thermal imager measurements based on the phosphor thermometry, it has been shown to be possible to infer the bulk internal temperature of the ILW container from thermal imaging of the surface of the container vent. The value of using two independent surface thermometry techniques has been demonstrated by these measurements, as the thermal imaging data alone would not have been suitable without either a correction from the phosphor thermometer or traceable surface emissivity data.
The uncertainty of the thermal imaging measurements of the surface was approximately 1.3 °C (k = 2). Measurements of the vent temperatures were performed with varying relative humidity; no significant change in vent temperature was measured, indicating that the influence of relative humidity is negligible for the range of container and environment temperatures studied.
Appendix
The data presented in Fig. 5 are shown as a function of the internal PRT temperature. A representation of the same apparent radiance temperature difference as a function of the phosphor-measured surface temperature can instead be seen in Fig. 9. If the stratification were caused by an emissivity effect, then the apparent radiance temperature difference would measure 0 °C when the chamber and surface temperatures were equal. | 2020-10-08T02:05:56.771Z | 2020-10-07T00:00:00.000 | {
"year": 2020,
"sha1": "77c89a8c9ee9e0d5e9dfb68c5d5da8adfbc04d00",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "77c89a8c9ee9e0d5e9dfb68c5d5da8adfbc04d00",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Environmental Science",
"Physics"
]
} |
11485552 | pes2o/s2orc | v3-fos-license | Continuous Ethanol Production from Synthesis Gas by Clostridium ragsdalei in a Trickle-Bed Reactor
A trickle-bed reactor (TBR) when operated in a trickle flow regime reduces liquid resistance to mass transfer, because a very thin liquid film is in contact with the gas phase, and results in improved gas–liquid mass transfer compared to continuous stirred tank reactors (CSTRs). In the present study, continuous syngas fermentation was performed in a 1-L TBR for ethanol production by Clostridium ragsdalei. The effects of dilution and gas flow rates on product formation, productivity, gas uptakes and conversion efficiencies were examined. Results showed that CO and H2 conversion efficiencies reached over 90% when the gas flow rate was maintained between 1.5 and 2.8 standard cubic centimeters per minute (sccm) at a dilution rate of 0.009 h−1. A 4:1 molar ratio of ethanol to acetic acid was achieved in co-current continuous mode, with both gas and liquid entering the TBR at the top and exiting from the bottom, at dilution rates of 0.009 and 0.012 h−1 and gas flow rates from 10.1 to 12.2 sccm and 15.9 to 18.9 sccm, respectively.
Introduction
Syngas fermentation is part of the hybrid conversion technology for the conversion of renewable feedstocks or gas waste streams containing CO, CO2 and H2 to biofuels and chemicals. Clostridium ljungdahlii, Clostridium carboxidivorans, Clostridium ragsdalei, and Alkalibaculum bacchi are among the microorganisms that metabolize CO, CO2 and H2 via the reductive acetyl-CoA pathway to produce ethanol, acetic acid and cell carbon [1][2][3][4]. One major advantage of the hybrid conversion process is the ability to utilize feedstocks such as municipal solid wastes, industrial fuel gases and biomass [5]. However, challenges for this technology include mass transfer limitations, enzyme inhibition, low cell concentration and low ethanol productivity.
Ethanol has been reported to be a non-growth-associated product of gas fermentation by certain Clostridium species [5,6]. Many researchers have focused on improving ethanol productivity by optimizing media components, adding reducing agents, adjusting pH, adding nanoparticles and optimizing the bioreactor design to improve the mass transfer of CO and H2 into the fermentation medium [7][8][9][10][11][12][13]. C. ljungdahlii is one of the most extensively studied microorganisms for ethanol production using syngas fermentation. Tenfold (from 5 to 48 g/L) and over threefold (from 0.4 to 1.5 g/L) increases in ethanol and cell mass concentrations, respectively, were achieved in a continuous stirred tank reactor (CSTR) with cell recycling using C. ljungdahlii, by designing a defined production medium and controlling pH at 4.5 [14].
Syngas fermentation bioreactors must maximize gas–liquid mass transfer while achieving high cell densities to promote fast reaction [10]. Bioreactors such as air-lift reactors, continuous stirred tank reactors (CSTRs), trickle-bed reactors (TBRs) and hollow fiber membrane (HFM) reactors have been characterized for their capabilities for CO mass transfer into fermentation medium [12,[15][16][17][18][19]. Further, improved ethanol production over batch bottle fermentations was reported when fermentations were performed in various bioreactors that provided a larger working volume, greater cell recycling, continuous addition of nutrients and syngas, and better control of operating parameters [1,16,[20][21][22][23]. For example, C. carboxidivorans produced only 0.9 g/L ethanol in batch bottles [24] compared to 1.6 g/L ethanol in a bubble column [21]. About 19 g/L ethanol was produced by C. ljungdahlii in two-stage CSTRs with cell recycling [22] compared to about 1 g/L ethanol in batch bottles [25]. C. ragsdalei produced about 1.5 g/L ethanol in bottles [8] compared to 2 g/L in two-stage CSTRs with partial cell recycling [20]. Similarly, A. bacchi produced 1.7 g/L ethanol in bottles compared to 6 g/L ethanol in a CSTR with cell recycling.
In our previous study [19], the TBR was reported to provide greater mass transfer capabilities than a CSTR. Further, in semi-continuous fermentation, the formation of a biofilm in the TBR improved the H2 uptake by decreasing the CO inhibition of hydrogenase, because CO is consumed by cells as it flows through the TBR [18]. However, higher acetic acid production was observed in the semi-continuous fermentations due to the repetitive medium replacement that provided a growth-supporting environment. During batch and semi-continuous fermentations, cells undergo lysis as nutrient levels deplete, causing the fermentation to cease. Production of ethanol in a batch process is time- and labor-intensive due to the long doubling times of syngas-fermenting microbes [22]. During continuous fermentation, high cell concentrations and productivity can be maintained for a longer period. Further, a continuous supply of fresh medium maintains the cells' activity and adapts the cells in the biofilm to produce more solvent when fermentation parameters such as dilution rate, pH and gas flow rate are controlled. The focus of the present study is to improve ethanol production in a TBR during continuous fermentation. The supply of nutrients to the TBR was controlled by altering the dilution rate. The effects of dilution rate and gas flow rate on gas conversion, gas uptake, product concentrations, yields and productivities in both counter-current and co-current modes of operation were studied.
Microorganism and Medium Preparation
Clostridium ragsdalei (ATCC-PTA-7826) was maintained and grown on a standard yeast extract medium. The medium contained 0.5 g/L yeast extract (YE), 10 g/L 2-(N-morpholino)ethanesulfonic acid (MES) as the buffer, 25 mL/L mineral solution without NaCl, 10 mL/L vitamin solution, 10 mL/L metal solution and 10 mL/L of 4% (w/v) cysteine sulfide. The detailed medium composition was previously reported [4]. The C. ragsdalei stock culture was passaged three times (i.e., the inoculum was transferred to fresh medium three times for cell adaptation) prior to inoculating the TBR, to reduce the lag phase. Detailed inoculum preparation was reported previously [18].
Fermentation Experimental Setup
A schematic of the continuous syngas fermentation setup is shown in Figure 1. The TBR, designed in house, was made of a borosilicate glass column 5.1 cm in diameter and 61 cm long. The detailed reactor design was reported earlier [18]. The packing material was 6-mm soda lime glass beads. The TBR liquid outlet was connected to a 500 mL Pyrex glass bottle, which was used as a sump to hold 500 mL of medium. The TBR was operated in both counter-current and co-current modes. A peristaltic pump circulated the liquid at the desired flow rate. The pH and ORP probes (Cole-Parmer, Vernon Hills, IL, USA) were placed in line in the recirculation loop, along with a liquid sample port. Fresh medium from the feed tank was pumped to the reactor through a 6.4 mm T-connector placed in the recirculation loop, before the medium entered the top of the TBR, using a Bioflo pump and controller (New Brunswick Scientific Co., Edison, NJ, USA). The product stream was connected to another Bioflo pump that pumped the product out to a tank, to maintain a constant amount of liquid in the reactor and recirculation loop. A port for acid and base addition was placed in the recirculation loop after the sampling port and was connected to the Bioflo controller for pH control. N2 was purged continuously through the feed and product tanks at 20 standard cubic centimeters per minute (sccm) to maintain anaerobic conditions. A one-way valve was connected at the gas outlet of both tanks to ensure that gas flowed out and air did not flow back into the tanks. In the counter-current mode of operation, the gas entered at the bottom of the TBR; the exhaust gas from the TBR was fed into the sump headspace and then out through the sump gas exit line. In co-current operation, both gas and liquid entered the TBR at the top and exited through the same exit line to the sump, which acted as a gas–liquid separator. Further, a back-pressure regulator was connected to the sump gas exit line to ensure that a pressure of 115 kPa was maintained in the TBR, and a pressure gauge was connected at the TBR gas exit line to measure the pressure in the TBR. An additional gas exit line was connected to the sump as a safety exhaust, with a pressure switch and a solenoid valve to vent excess pressure in the TBR. A bubbler was placed after the pressure regulator to minimize losses of products exiting with the gas.
Continuous Fermentation Procedure
The TBR column, sump, tubing and liquid medium were sterilized in an autoclave (Primus Sterilizer Co., Inc., Omaha, NE, USA) at 121 °C for 20 min. After sterilization, the TBR was set up and purged with N2 for 5 h. Then, 200 mL of fresh sterile medium was added into the TBR and purged with N2 for 8 h. Next, the gas was switched to syngas with 38% CO, 5% N2, 28.5% CO2 and 28.5% H2 (by volume) (Stillwater Steel and Supply Company, Stillwater, OK, USA), which is similar to the composition of coal-derived syngas [26]. A 60% (v/v) inoculum was aseptically added into the TBR through the liquid sample port. The temperature of the TBR was maintained at 37 °C. The liquid recirculation rate was set at 200 mL/min. At the beginning of the fermentation, the gas flow rate was set at 1.5 sccm. Initially, the TBR was operated in semi-continuous mode. After the CO and H2 conversion efficiencies reached about 90%, the TBR was switched to continuous mode by turning on the fresh medium and product pumps at the desired flow rate. The effects of three dilution rates, 0.006, 0.009 and 0.012 h−1, on product formation and gas conversion efficiency were examined. The conversion efficiency of each gas during fermentation was estimated as the amount of gas converted by C. ragsdalei relative to the amount of gas fed to the TBR. The dilution rate equals the feed rate divided by the medium volume in the TBR, sump and recirculation loop. At each dilution rate, the effect of gas flow rate on cell growth, gas conversion, product formation and yields was examined; the gas flow rate in the TBR was gradually increased until the CO conversion efficiency dropped below 40%, after which the gas flow rate was decreased and a new dilution rate was applied. Gas and liquid samples were aseptically withdrawn from the TBR periodically. To avoid flooding of the TBR by cell debris, the recirculation rate was increased from 200 to 500 mL/min for about 10 min at every sampling time to remove cell debris from between the packing materials.
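The two operating quantities defined above follow directly from the measured flows; a small sketch (the feed rate, liquid volume and molar flows are illustrative values, not the study's measurements):

```python
# Dilution rate: fresh-medium feed rate divided by the total liquid
# volume held in the TBR, sump and recirculation loop.
feed_rate_mL_h = 6.3         # illustrative feed rate (mL/h)
liquid_volume_mL = 700.0     # illustrative total liquid holdup (mL)
dilution_rate = feed_rate_mL_h / liquid_volume_mL   # ~0.009 h^-1

# Gas conversion efficiency: fraction of the fed gas consumed by the
# culture, from inlet and outlet molar flows (e.g., from GC analysis).
def conversion_efficiency(molar_flow_in, molar_flow_out):
    return 100.0 * (molar_flow_in - molar_flow_out) / molar_flow_in

print(round(dilution_rate, 4), conversion_efficiency(1.93, 0.14))  # ~93% CO
```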
Sample Analysis
The cell optical density of the fermentation medium from the liquid sample port in the circulation loop was measured at 660 nm (OD660) with a UV spectrophotometer (Cole Parmer, Vernon Hills, IL, USA). The total cell optical density of the attached cells was measured at the end of the fermentation as described previously [18]. The pH measurements were logged into a computer using Biocommand software (New Brunswick Scientific Co.). Fermentation samples were analyzed for ethanol and acetic acid using a gas chromatograph with a DB-FFAP capillary column and a flame ionization detector (GC-FID). Gas samples were analyzed in a 6890N gas chromatograph with a thermal conductivity detector (TCD) (Agilent Technologies, Wilmington, DE, USA). More details of the methods used to analyze gas and liquid samples were described previously [18].
Cell Growth and pH
The C. ragsdalei cell OD660 and pH profiles in the TBR, continuously operated for 3200 h in counter-current and co-current modes, are shown in Figure 2. C. ragsdalei started to grow after a 174 h lag phase. The cell OD660 was 0.35 at 197 h, when the TBR was switched to continuous operation with a dilution rate of 0.012 h−1 and a gas flow rate of 1.9 sccm. The cell OD660 further increased to 0.53 at 207 h and remained constant until 224 h. However, the cell OD660 then decreased slowly to 0.20 at 305 h, most likely due to cell washout. At this point, the dilution rate was decreased by 50% (D = 0.006 h−1), which resulted in an increase in the cell OD660 to 0.30 by 357 h. A brief power interruption between 329 and 351 h resulted in no gas flow to the TBR. This caused a decrease in cell activity (i.e., a decrease in CO and H2 gas uptake rates) up to 398 h. The fermentation slowly recovered when the gas flow and dilution rates were reset to 1.5 sccm and 0.006 h−1, respectively. The cell OD660 increased from 0.10 at 398 h to 0.29 at 461 h. The cell OD660 in the liquid medium dropped to approximately zero around 700 h. However, the gas uptake rates were maintained, indicating continued cell activity due to a biofilm in the TBR rather than suspended cells. This was also confirmed by measuring the total cell mass concentration at the end of the TBR run, which showed a much higher cell mass concentration attached to the TBR than suspended in the liquid medium, as discussed below. Formation of biofilm here refers to all cell mass excluding suspended cells. During counter-current flow, cells from the biofilm were resuspended into the medium when the pressure was released to clear the medium from between the beads in the flooded TBR at 627, 901 and 909 h. This resulted in a sudden increase in the measured cell OD660. To avoid flooding issues in counter-current mode, the liquid recirculation rate was intermittently increased from 200 to 500 mL/min for 10 min at the sampling times of 1197, 1371, 1498, 1628, 1643 and 1652 h. Unlike in counter-current mode, the cell OD660 was between 0.05 and 0.3 during co-current mode from 1700 to 3200 h (Figure 2B).
The TBR and glass beads were washed with DI water to determine the total amount of cells attached to the beads in the TBR after 3200 h of continuous fermentation. The beads were collected in a tub and washed three times with 1 L of DI water each time. The column was also washed with 1 L of DI water to account for cells attached to the column walls. The cell OD660 values of the bead washes (wash-1, wash-2 and wash-3) and the column wash were 8.86, 0.41, 0.13 and 1.97, respectively. Based on this analysis, the estimated overall dry cell weight in the TBR at the end of the fermentation was 4.24 g.
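The cell inventory estimate follows from the wash optical densities; the sketch below assumes 1 L per wash (as stated) and an OD660-to-dry-cell-weight conversion factor, which is strain-specific and is back-calculated here from the reported 4.24 g total rather than taken from the source.

```python
# OD660 of the three bead washes and the column wash, 1 L of DI water each.
wash_OD = [8.86, 0.41, 0.13, 1.97]
wash_volume_L = 1.0

total_OD_L = sum(od * wash_volume_L for od in wash_OD)   # 11.37 OD*L

# Assumed conversion factor (g dry cells per L per OD660 unit), chosen so
# the estimate reproduces the reported 4.24 g; the source does not state it.
f_gDCW_per_OD_L = 0.373

print(total_OD_L * f_gDCW_per_OD_L)   # ~4.24 g estimated dry cell weight
```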
During cell growth, the medium pH decreased from 5.7 at 174 h to 4.7 at 207 h. The pH of the medium was then maintained at 4.6 by the addition of about 0.5 to 1 mL of 2 N KOH after every sampling time. After the power interruption between 329 and 351 h, the pH was increased to 5.2 to maintain conditions slightly favorable to cell growth and to recover fermentation activity. The pH dropped from 5.2 to 4.7 as the cell OD660 increased between 422 and 461 h. After 461 h, the pH was maintained between 4.5 and 4.6.
Gas Conversion
The CO and H2 conversion efficiencies in the TBR are estimated as the amount of gas utilized divided by the amount flowing into the TBR. The CO and H2 conversion efficiencies by C. ragsdalei were 92% and 72%, respectively, when the fermentation was switched from semi-continuous to continuous mode at 197 h (Figure 3A). In counter-current mode, the TBR was operated at dilution rates of 0.012 h−1 (D3), 0.006 h−1 (D1) and 0.009 h−1 (D2) from 197 to 305 h, 305 to 989 h and 989 to 1700 h, respectively. The CO and H2 conversion efficiencies were 93% and 74%, respectively, at 0.012 h−1 and a gas flow rate of 1.9 sccm. However, when the gas flow rate was increased to 2.3 sccm at the same dilution rate, the CO and H2 conversion efficiencies dropped slightly to 88% and 71%, respectively.
CO and H2 conversion efficiencies continued to decrease, to 81% and 60%, respectively, when the gas flow rate and dilution rate were 2.3 sccm and 0.006 h−1, respectively, at 305 h. However, the CO and H2 conversion efficiencies increased to 88% and 74%, respectively, when the gas flow rate was reduced to 1.9 sccm at 319 h. The conversion efficiencies of CO and H2 then decreased to 40% and 31%, respectively, due to the power shutdown from 329 to 351 h that hindered the fermentation. The fermentation recovered slowly after 398 h, with CO and H2 conversion efficiencies reaching 92% and 86%, respectively, at 620 h. As the gas flow rate was increased from 1.5 to 1.9 sccm, the liquid medium flooded the TBR (at 627, 901 and 909 h), which decreased the CO and H2 conversion efficiencies to about 65% at 909 h (Figure 3A). The TBR flooding caused gas bypass from the bottom of the TBR to the sump headspace and decreased the availability of syngas to cells in the TBR. The gas flow rate was decreased from 1.9 to 1.5 sccm at 909 h to avoid further flooding. CO and H2 conversion efficiencies recovered to 85% and 81%, respectively, at 981 h, before the dilution rate was increased to 0.009 h−1.
Gas conversion efficiencies of 91% for CO and 90% for H2 were achieved between 989 and 1115 h at 0.009 h−1. While the CO conversion efficiency was about the same at both 0.006 and 0.009 h−1, the H2 conversion efficiency was 5% higher at 0.009 h−1 than at 0.006 h−1 at the same gas flow rate. The increase in gas uptake is due to higher cell activity resulting from the availability of more nutrients at the higher dilution rate. A decrease in CO and H2 conversion efficiencies was observed at various sample points (1197, 1371, 1498 and 1628 h) when the liquid recirculation rate was increased for 10 min to clear the cell debris in the TBR. During the period between 1197 and 1628 h, the gas flow rate was increased from 1.7 sccm to 2.6 sccm. The combination of the increase in gas supply and the possible removal of active cells along with the cell debris likely contributed to the decrease in gas conversion. TBR operation at 0.009 h−1 from 989 to 1700 h, with an increase in gas flow rate from 1.5 sccm to 2.8 sccm, resulted in CO and H2 conversion efficiencies of about 91% and 89%, respectively.
The TBR was operated in co-current mode at dilution rates of 0.009 h−1 (D2) and 0.012 h−1 (D3) from 1700 to 2672 h and from 2672 to 3200 h, respectively, with a gradual increase in gas flow rate from 2.8 sccm to 18.9 sccm (Figure 3B). The TBR gas inlet leaked from 1700 to 2042 h, which resulted in inaccurate gas flow rate measurements; no gas data were obtained during this time period. During TBR operation at 0.009 h−1, the increase in gas flow rate from 2.8 sccm at 2042 h to 12.2 sccm at 2607 h resulted in a decrease in CO and H2 conversion efficiencies from 95% and 88% to 43% and 19%, respectively. A decrease in conversion is expected, since the length of time the gas spends in the reactor decreases with increasing flow rate. The gas flow rate was reduced to 7.6 sccm at 2607 h to increase CO and H2 conversion before a new dilution rate of 0.012 h−1 was applied. CO and H2 conversion efficiencies increased to 71% and 42%, respectively. The gas conversion efficiencies at 2660 h were slightly higher than those obtained at 2375 h under the same operating conditions.
The dilution rate was increased to 0.012 h−1 (D3) at 2672 h with a gas flow rate of 7.6 sccm (Figure 3B). CO and H2 conversion efficiencies reached 77% and 53%, respectively, at 2725 h. These gas conversion efficiencies at 0.012 h−1 and the same gas flow rate were 8% higher for CO and 21% higher for H2 than at 0.009 h−1. This is due to the increase in cell activity with additional nutrients at the higher dilution rate. Further, when the gas flow rate was increased from 8.4 sccm at 2732 h to 18.9 sccm at 3200 h, the CO and H2 conversion efficiencies slowly dropped to 50% and 30%, respectively. The difference between CO and H2 conversion efficiencies at a dilution rate of 0.012 h−1 was lower than at 0.009 h−1, indicating an increase in gas uptake at the higher dilution rate (Figure 4B). It can also be observed from Figure 3B that the decreases in CO and H2 gas conversion efficiencies were smaller at 0.012 h−1 than at 0.009 h−1, indicating higher cell activity at 0.012 h−1. In two-stage continuous syngas fermentation with C. ljungdahlii in a CSTR followed by a bubble column with gas and cell recycling, an increase in dilution rate from 0.01 to 0.016 h−1 was reported to increase the cell OD600 from 9.9 to 17.8 due to the supply of more nutrients [22]. The same study reported CO and H2 conversion efficiencies in the CSTR of 46% and 49%, respectively, at 23 sccm, compared to CO and H2 conversion efficiencies of 86% and 82% in the bubble column at 121 sccm. The high gas conversion efficiency in the second stage is attributed to the high cell OD600 of 17.8 that was achieved with cell recycling.
Gas Uptake Profiles
The gas uptake profiles during continuous syngas fermentation by C. ragsdalei in the TBR are shown in Figure 4. The specific gas uptake rates in mmol/(g cell·h) were not calculated because the cell mass concentration in the biofilm during operation was not known and was difficult to quantify; hence, the gas uptakes are reported only in mmol/h. CO and H2 uptake rates at the start of the continuous fermentation at 197 h, 0.012 h−1 and 1.9 sccm were 2.0 and 1.2 mmol/h, respectively. When the gas flow rate was increased to 2.3 sccm at 261 h, the CO and H2 uptake rates slightly increased. However, the cell OD660 decreased to 0.20 at 305 h due to cell washout (Figure 2A). Therefore, the dilution rate was decreased to 0.006 h−1 at 305 h. The CO and H2 uptake rates at 319 h decreased to 1.8 and 1.0 mmol/h, respectively.
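Uptake rates in mmol/h follow from the standard volumetric flow, the feed composition and the measured conversion: 1 sccm corresponds to about 2.68 mmol/h of ideal gas at 0 °C and 1 atm (22,414 mL/mol). A sketch for the 1.9 sccm, 38% CO set-point, assuming that standard reference condition (a different standard temperature would shift the constant slightly):

```python
MOLAR_VOLUME_STD = 22_414.0   # mL/mol at 0 deg C, 1 atm (assumed standard)

def uptake_mmol_per_h(total_flow_sccm, gas_fraction, conversion):
    """Molar uptake of one gas species from the total flow, feed
    composition and measured conversion efficiency."""
    molar_flow = total_flow_sccm * 60.0 / MOLAR_VOLUME_STD * 1000.0  # mmol/h
    return molar_flow * gas_fraction * conversion

# 1.9 sccm syngas, 38% CO, 93% CO conversion -> ~1.8 mmol/h, consistent
# with the ~2.0 mmol/h reported early in the run.
print(uptake_mmol_per_h(1.9, 0.38, 0.93))
```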
To increase gas consumption, the gas flow rate was reduced to 1.9 sccm at 319 h. CO and H2 uptake rates at 329 h recovered to 2.0 and 1.2 mmol/h, respectively. However, the cell activity decreased when a power failure occurred between 329 and 351 h. The fermentation slowly recovered, with CO and H2 uptake rates of 1.7 and 1.1 mmol/h between 375 and 620 h.
The gas flow rate was increased from 1.5 to 1.9 sccm between 620 and 787 h, which resulted in a slight increase in the CO and H2 uptake rates to 2.0 and 1.4 mmol/h at 787 h, respectively. These gas uptakes were maintained up to 900 h. However, flooding at 901 h and 909 h resulted in a decline in the CO and H2 uptake rates to 1.5 and 1.0 mmol/h, respectively. Hence, the gas flow rate was decreased to 1.5 sccm at 909 h and was maintained at this flow rate until 981 h. At 981 h, the CO and H2 uptake rates were essentially still the same as at 909 h. However, since the same gas uptake rates were achieved at a lower gas flow rate, the CO and H2 conversion efficiencies increased (Figure 3). The dilution rate was maintained at 0.009 h−1 during counter-current operation between 989 and 1700 h. A stepwise increase in gas flow rate of 5-10% per step, from 1.5 sccm at 989 h to 2.8 sccm at 1700 h, resulted in an increase of the gas uptake rates to 3.1 mmol/h of CO and 2.1 mmol/h of H2.
The TBR was switched at 1700 h to co-current mode due to frequent flooding issues. However, there was a gas leak in the inlet to the TBR from 1700 to 2042 h, which resulted in inaccurate gas flow rate measurements and no gas uptake data. The gas flow rate was gradually increased from 2.8 to 6.3 sccm between 2042 and 2313 h, which increased the gas uptake rates to 5.9 mmol/h of CO and 3.3 mmol/h of H2 at 2313 h (Figure 4B). A further increase in the gas flow rate from 6.3 to 12.2 sccm from 2313 to 2672 h resulted in a decrease of the H2 uptake rate to between 2.4 and 3.0 mmol/h, while the CO uptake rate increased to between 6.0 and 6.7 mmol/h. The average total CO and H2 gas uptake rate between 2313 and 2672 h was 8.5 mmol/h. It can be observed that the increase in the dilution rate from 0.009 to 0.012 h−1 and in the gas flow rate from 2.8 to 18.9 sccm increased the overall CO and H2 uptake rates. In co-current flow, it was also observed that the 36% increase in dilution rate (0.009 h−1 to 0.012 h−1) resulted in a 47% increase in the total CO and H2 uptake rate. The gas uptake rates in co-current mode were 2.5-fold higher than in counter-current mode. This was attributed to the ability to operate the TBR in co-current mode at higher gas flow rates.
In the previous study with semi-continuous fermentation in a co-current mode TBR, the maximum CO and H2 conversion efficiencies at 4.6 sccm were 80% (CO uptake rate of 4.4 mmol/h) and 55% (H2 uptake rate of 2.2 mmol/h), respectively [18]. In the present study, during co-current continuous fermentation at 4.6 sccm and 0.009 h−1, gas conversion efficiencies of 82% for CO (CO uptake rate of 4.4 mmol/h) and 72% for H2 (H2 uptake rate of 2.74 mmol/h) were achieved. The high gas conversion efficiencies and uptake rates are due to the high cell activity sustained by the continuous addition of nutrients during the fermentation.
Product Profiles
At the beginning of continuous fermentation (197 h), ethanol and acetic acid concentrations were 0.8 g/L and 2.4 g/L, respectively (Figure 5A). Ethanol and acetic acid concentrations increased to 2.0 g/L and 5.0 g/L, respectively, between 197 and 305 h at a dilution rate of 0.012 h−1. A slight increase in ethanol and acetic acid concentrations was observed when the dilution rate was decreased to 0.006 h−1. However, ethanol and acetic acid concentrations decreased between 329 h and 398 h. This decrease was associated with the power shutdown and the washout of cells and products. The fermentation slowly recovered, and product concentrations were stable from 398 to 454 h, after which ethanol and acetic acid concentrations reached 3.2 and 6.2 g/L, respectively, at 627 h. When the gas flow rate was increased by 20% from 1.5 sccm between 627 and 787 h, the ethanol concentration increased by 20% while the acetic acid concentration decreased by 20%. The ethanol concentration slowly increased to 4.3 g/L while the acetic acid concentration remained at 5.0 g/L between 787 and 909 h. Due to flooding at 901 and 909 h, the gas flow rate was reduced from 1.9 to 1.5 sccm to recover the fermentation in the TBR.
The dilution rate was increased from 0.006 to 0.009 h−1 between 989 and 1700 h. This increased the product removal rate from the TBR, which decreased the ethanol concentration by about 40% to 2.5 g/L at 1115 h. However, the acetic acid concentration increased by 15% to around 5.9 g/L at 1115 h. The increase in acetic acid concentration was due to an increase in cell activity and cell concentration in the biofilm. The increase in cell concentration is associated with acetic acid production and ATP generation, as a high amount of energy is required for cell maintenance [1].
The gas flow rate was increased from 1.5 to 2.8 sccm in step increments of 5-10% every 24 to 36 h between 1115 and 1700 h (Figure 5A). Ethanol and acetic acid concentrations were stable at 2.5 and 6.2 g/L, respectively, when the gas flow rate was increased from 1.5 to 1.7 sccm from 1115 to 1197 h. The liquid recirculation rate was increased from 200 to 500 mL/min for 10 min to clear cell debris at 1197 h. This resulted in a slow increase in the ethanol concentration to 3.2 g/L and a decrease in the acetic acid concentration to 4.7 g/L at 1245 h. This intermittent increase in the liquid recirculation rate could have cleared cell debris from the packing and improved gas uptake, with a positive effect on ethanol production.
To test the positive effect of the intermittent increase in liquid recirculation rate on ethanol production, the liquid flow rate was again increased to 500 mL/min for 10 min at 1371 h. The ethanol concentration slowly increased to 3.7 g/L while acetic acid remained at 4.7 g/L at 1474 h. Since increasing the liquid recirculation rate intermittently had a positive effect on ethanol production, it was performed whenever the cell OD660 in the medium decreased to zero. The intermittent increase in liquid recirculation rate and the gradual increase in gas flow rate to 2.4 sccm resulted in the production of 5.0 g/L ethanol and 6.1 g/L acetic acid at 1532 h. The gas flow rate was further increased to 2.8 sccm between 1532 and 1700 h; ethanol and acetic acid concentrations were 4.4 and 5.8 g/L, respectively, at 1700 h.
In co-current mode, the gas flow rate was gradually increased from 2.8 to 18.9 sccm (Figure 5B). As in counter-current mode, the gradual increase in gas flow rate, combined with an increase in the liquid recirculation rate from 200 to 500 mL/min for 10 min at every sampling time, increased ethanol production. Ethanol and acetic acid concentrations were 2.7 and 6.7 g/L, respectively, at 2052 h and 3.1 sccm. An increasing trend in ethanol production and a decreasing trend in acetic acid production were observed as the gas flow rate was increased from 3.1 to 9.2 sccm between 2052 and 2493 h; ethanol and acetic acid concentrations at 2493 h were 11.9 and 4.6 g/L, respectively. A further increase in the gas flow rate from 9.2 to 11.1 sccm at 2542 h did not increase ethanol production. Additionally, the increase in gas flow rate from 11.1 to 12.2 sccm at 2566 h slightly decreased ethanol and acetic acid concentrations to 10.5 and 4.1 g/L, respectively. This indicates that beyond a gas flow rate of 9.2 sccm the cells reached a kinetic limitation and were not able to process more gas, even when more gas was provided (Figure 4B).
Further, when the gas flow rate was decreased from 12.2 to 7.6 sccm at 2607 h, ethanol and acetic acid concentrations were stable at 10.8 and 3.8 g/L, respectively, until 2672 h. When the dilution rate was increased from 0.009 to 0.012 h−1 at 2510 h, the ethanol concentration dropped slowly to 9.9 g/L, whereas the acetic acid concentration slightly increased to 5.0 g/L at 2551 h and 7.6 sccm. A further increase in the gas flow rate from 7.6 to 18.9 sccm resulted in 13.2 g/L ethanol and 4.3 g/L acetic acid at 3200 h. The gas uptake rate at 0.012 h−1 was higher than at 0.009 h−1 due to higher cell activity, which resulted in more ethanol production at 0.012 h−1.
Productivity and Yields
Ethanol and acetic acid productivities were estimated by multiplying the dilution rate by the product concentration. Ethanol and acetic acid yields were estimated based on the CO consumed, as previously reported [27]: one mole of ethanol is formed from six moles of CO, and one mole of acetic acid is produced from four moles of CO.
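A minimal sketch of these two calculations is given below; the molar masses are standard, and the sample call simply reproduces the arithmetic reported in this section (13.2 g/L at 0.012 h−1 gives about 158 mg/L·h).

```python
# Productivity and yield bookkeeping as described above.

M_ETOH, M_HAC = 46.07, 60.05      # g/mol, ethanol and acetic acid

def productivity_mg_L_h(dilution_h: float, conc_g_L: float) -> float:
    """Volumetric productivity = dilution rate x product concentration."""
    return dilution_h * conc_g_L * 1000.0

def yield_from_co(product_mmol_h: float, co_uptake_mmol_h: float,
                  mol_co_per_mol_product: int) -> float:
    """Fraction of consumed CO accounted for by one product
    (6 mol CO per mol ethanol, 4 mol CO per mol acetic acid)."""
    return mol_co_per_mol_product * product_mmol_h / co_uptake_mmol_h

# 13.2 g/L ethanol at D = 0.012 1/h gives ~158 mg/(L*h):
print(productivity_mg_L_h(0.012, 13.2))
```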
During counter-current operation, the highest ethanol productivity of 45 mg/L·h was obtained at 0.009 h−1 and 1556 h, while the highest acetic acid productivity was 63 mg/L·h at 0.009 h−1 and 1611 h. During counter-current mode, acetic acid productivity was always higher than ethanol productivity; in co-current mode, ethanol productivity was higher. The maximum ethanol productivity during co-current operation was 158 mg/L·h at 0.012 h−1 and 3200 h, while a maximum acetic acid productivity of 68 mg/L·h was obtained at 0.009 h−1 at 2083 h. The ethanol productivity achieved in the present study with continuous syngas fermentation in the TBR was over four times higher than that reported during semi-continuous fermentation (37 mg/L·h) in the TBR [18].
Moreover, the molar ratio of ethanol to acetic acid produced during continuous fermentation at 0.012 h−1 and 18.9 sccm in the TBR was 4:1, which was higher than in semi-continuous TBR fermentation (1:2). In semi-continuous fermentation, as nutrients were depleted from the medium, the gas conversion efficiencies and uptake rates decreased. Replacement of the medium in semi-continuous fermentations resulted in a nutrient-rich environment at pH 5.8 that promoted cell growth and thus more acetic acid production. During continuous fermentation, by contrast, nutrient levels were maintained by adjusting the dilution rate, and the pH was maintained at 4.5, which favored ethanol production. This clearly shows the advantages of the continuous syngas fermentation process.
During counter-current mode, the ethanol yield was 22% while the acetic acid yield was 42% at 197 h and 1.9 sccm (Figure 6). However, the acetic acid yield slowly dropped to 15% while the ethanol yield increased to 58% at 294 h. At a dilution rate of 0.006 h−1 from 305 to 989 h, the ethanol yield increased from 28% at 461 h to 85% at 850 h, while the acetic acid yield decreased from 38% at 461 h to 13% at 850 h. At a dilution rate of 0.009 h−1 between 989 and 1700 h, the average ethanol yield was about 85%, while the average acetic acid yield was about 20%. As the gas flow rate was increased and the pH was maintained at 4.5, ethanol yields increased due to the availability of more reductants (CO and H2) and pH values that favored solvent production.
In co-current operation at 0.009 h−1, the ethanol yield increased from 46% at 2042 h to 100% at 2232 h and remained close to 100% from 2232 to 2607 h. The acetic acid yield, however, decreased from 34% at 2042 h to 16% at 2232 h and remained close to about 13% from 2232 to 2607 h. Ethanol and acetic acid yields were about 100% and 20%, respectively, during operation at 0.012 h−1. The higher ethanol yield in co-current mode was due to higher cell activity that processed more gas.
Discussion
As discussed in Section 3.2, high dilution rates provided more nutrients to the cells, which increased cell activity. The increase in the gas flow rate increased the CO and H2 transfer rates into the medium, which supported ethanol production. The ethanol concentration achieved in the present study (13.2 g/L) was higher than that reported in a CSTR with cell recycle using A. bacchi (6 g/L), in a bubble column reactor (1.6 g/L), and in a monolithic biofilm reactor (4.9 g/L) using C. carboxidivorans [1,5,28]. However, 19 g/L ethanol was reported in a two-stage continuous syngas fermentation in a CSTR followed by a bubble column with gas and cell recycling [23], and 48 g/L ethanol was reported in a CSTR with cell recycle using C. ljungdahlii [14]. Up to 24 g/L ethanol production was reported with a hollow fiber membrane biofilm reactor (HFM-BR) using C. carboxidivorans [29]. However, in addition to syngas, the presence of 10 g/L of fructose in ATCC 1745 PETC medium, as previously reported [29], could have contributed to the higher ethanol production.
Compared to other microorganisms and reactor designs, the maximum ethanol productivity of 158 mg/L·h achieved by C. ragsdalei in the TBR in the present study was higher than the 140 mg/L·h reported for C. carboxidivorans in an HFM-BR, the 110 mg/L·h reported for C. ljungdahlii in a CSTR without cell recycle, and the 70 mg/L·h reported for A. bacchi in a CSTR [1,29,30]. However, ethanol productivity in the present study was lower than the 301 mg/L·h reported for C. ljungdahlii in a two-stage CSTR and bubble column with gas and cell recycling [23].
The results also showed several advantages of continuous syngas fermentation in a TBR compared to other reactors. Cell activity in the TBR recovered after the power shutdown and the multiple flooding events that occurred during counter-current flow. Intermittent increases in the liquid recirculation rate cleared cell debris in the TBR and improved gas uptake and ethanol production. However, further improvements in TBR performance are expected from better packing material for cell immobilization and a higher H2:CO ratio in the syngas. The glass beads used in this study have a void fraction of 0.38, which is lower than the void fractions provided by other packing materials such as Intalox saddles (0.6 to 0.9) and Pall rings (0.9) [31]. A low void fraction reduces the free space available for gas-liquid mass transfer and decreases the reactive holdup volume. Further, cell immobilization techniques [32,33] such as covalent coupling using cross-linking agents, entrapment, and adsorption on packing with rough surfaces can reduce the time required for biofilm formation and improve TBR performance. Additionally, there is a need to grow more cells in the TBR, recycle unused gas, and operate two-stage systems using TBRs or TBRs in combination with other reactors to increase syngas utilization and productivity, which warrants further investigation.
Conclusions
To our knowledge, this is the first study on continuous operation of syngas fermentation in a TBR for ethanol and acetic acid production. This report highlights the operational constraints and challenges of continuous syngas fermentation in a TBR, and shows how bioreactor operation can be restarted after major upsets such as flooding and power shutdown. The highest ethanol concentration, productivity, and ethanol-to-acetic-acid molar ratio (13.2 g/L, 158 mg/L·h, and 4:1, respectively) were obtained during co-current continuous syngas fermentation at a dilution rate of 0.012 h−1. In co-current mode, the total gas uptake rates more than doubled and ethanol productivity increased over fivefold with the increase in the gas flow rate from 2.8 to 18.9 sccm and in the dilution rate from 0.009 to 0.012 h−1. Operating the TBR in co-current mode avoided the flooding issues that occurred during counter-current mode and allowed production of over twofold more ethanol than in counter-current mode.
Figure 3. Gas conversion efficiencies during continuous syngas fermentation in the TBR in (A) counter-current and (B) co-current flow modes at various dilution rates (D1, D2 and D3 are 0.006, 0.009 and 0.012 h−1, respectively); symbols: CO and H2; line: gas flow rate. Open symbols indicate flooded TBR; 0 to 174 h: lag phase resulted in no data; 1700 to 2042 h: gas leak resulted in no data.
Figure 4. Gas uptake rates during continuous syngas fermentation in the TBR in (A) counter-current and (B) co-current flow modes at various dilution rates (D1, D2 and D3 are 0.006, 0.009 and 0.012 h−1, respectively); symbols: CO, H2 and CO+H2; line: gas flow rate. Open symbols indicate flooded TBR; 0 to 174 h: lag phase resulted in no data; 1700 to 2042 h: gas leak resulted in no data.
Figure 6. Product yields based on CO consumed during continuous syngas fermentation in the TBR in (A) counter-current and (B) co-current flow modes at various dilution rates (D1, D2 and D3 are 0.006, 0.009 and 0.012 h−1, respectively); symbols: ethanol and acetic acid; line: gas flow rate. 0 to 174 h: lag phase resulted in no data; 1700 to 2042 h: gas leak resulted in no data.
"year": 2017,
"sha1": "4c1a73982a3cc32e0c320a80a865b833eef9f3d0",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2311-5637/3/2/23/pdf?version=1495637156",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4c1a73982a3cc32e0c320a80a865b833eef9f3d0",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Bidding in Multi-Unit Auctions under Limited Information
We study multi-unit auctions in which bidders have limited knowledge of opponent strategies and values. We characterize optimal prior-free bids; these bids minimize the maximal loss in expected utility resulting from uncertainty surrounding opponent behavior. Optimal bids are readily computable despite bidders having multi-dimensional private information, and in certain cases admit closed-form solutions. In the pay-as-bid auction the minimax-loss bid is unique; in the uniform-price auction the minimax-loss bid is unique if the bidder is allowed to determine the quantities for which they bid, as in many practical applications. We compare minimax-loss bids and auction outcomes across auction formats, and derive testable predictions.
Introduction
Multi-unit auctions play a critical role in many markets. For example, they are used to allocate generation capacity across power plants in electricity markets, and they determine the interest rates at which governments can issue new debt.1 Traditional equilibrium analysis of these auctions relies on the common prior assumption, which may not be satisfied in practice; and, even when it is satisfied, computation of equilibrium strategies is typically intractable due to the multi-dimensionality of bidders' information [Swinkels, 2001; Hortaçsu and Kastl, 2012]. In this paper we relax the common prior and equilibrium assumptions in multi-unit auctions, and analyze worst-case loss minimizing bidders facing maximal uncertainty. We characterize optimal bids in this framework, show when optimal bids are unique (and when not), derive bounds on optimal bids in terms of novel iso-loss curves, and provide comparative statics across common auction formats.
Real-world bidders often face uncertainty about the distribution of their opponents' bids, and this uncertainty is inconsistent with a Bayesian equilibrium. In a Bayesian equilibrium, bidders can compute for each bid the probability that its respective unit is won. However, surveying academic and professional auction consultants, Kasberger and Schlag [2022] find that most real-world bidders cannot assign winning probabilities to their bids, which suggests that many real-world bidders are uncertain about the bid distribution. This uncertainty is particularly pronounced following policy shifts. In electricity auctions temporally close to deregulation, Doraszelski et al. [2018] show that bidder behavior is hard to anticipate, while in later auctions behavior can be explained by learning and eventual convergence to equilibrium. That is, economists, and bidders themselves, have limited understanding of initial play.2 To increase our understanding of bidder behavior in the presence of such uncertainty, we study how bidders bid in two major multi-unit auction formats when it is hard to anticipate rivals' behavior, and we compare out-of-equilibrium auction outcomes. We study the pay-as-bid and uniform-price auction formats, both of which are frequently used to allocate homogeneous goods.3 In these auctions bidders submit demand curves to the auctioneer.
The auctioneer uses the submitted demand curves to compute market-clearing prices and quantities. Each bidder receives their market-clearing quantity; in the pay-as-bid auction they pay their bid for each unit received, while in the uniform-price auction they pay the constant market-clearing price for each unit received. Little is known about equilibrium behavior in these auctions when bidders have general, multi-dimensional private values.4 Our prior-free non-equilibrium approach allows us to characterize the optimal bid functions for arbitrary multi-dimensional private marginal values. As the optimal bid functions depend non-linearly on all marginal values, closed-form solutions are available only when the number of parameters is relatively low (such as in the case of two-unit demand or under flat marginal values); however, we give a simple recursive construction showing that numerical solutions can always be computed straightforwardly. We characterize the optimal bids in three settings that appear in the literature and the real world. The first setting is the standard discrete multi-unit auction. Our second, empirically relevant setting presumes that a large number of goods is available but that bidders are constrained to submit a relatively small number of bid points; bidders are free to choose the quantities at which bids are submitted. The implied bid function is a step function, and the locations and heights of the steps are the bidder's choice variables.5 Finally, in the appendix we also characterize the solution for the continuous divisible-good case, which features prominently in theoretical analyses of auctions for homogeneous goods.
A key concept in our characterization is conditional regret. Conditional on winning a certain number of units, the bid can be ex post "too high" or "too low." The distinct payment rules in the pay-as-bid and uniform-price auctions imply distinct approaches to loss minimization: in the pay-as-bid auction a bid for a given quantity is too high whenever a higher quantity is received, while in the uniform-price auction a bid is too high only when it sets the market-clearing price. Intuitively, the optimal bid trades off the loss in utility (regret) from bidding too high and the loss in utility from bidding too low, that is, from winning too few units due to shading bids below the bidder's true value.

4 Bayesian equilibrium constructions in these auctions do exist in parameterized contexts. For example, Engelbrecht-Wiggans and Kahn [2002] describe equilibrium when demand barely exceeds supply; Back and Zender [1993] and Wang and Zender [2002] when the good is divisible and bidders have common values; Ausubel et al. [2014] when bidders demand two units; Burkett and Woodward [2020a] when bidders' values are defined by order statistics; and Pycia and Woodward [2023] when bidders have common, decreasing marginal values.

5 Although step functions are mathematically simple, they are economically complex: when bids are constant over wide intervals, bidders are almost always rationed. When rationing occurs with positive probability, Bayesian equilibrium bids must take bidding incentives for non-local units into account, and the equilibrium first-order conditions imply a complicated non-local differential system [Kastl, 2012; Woodward, 2016]. Our prior-free approach is computationally more tractable. We provide analytic solutions in the case of constant marginal values.
We summarize our findings through a set of testable predictions. In the discrete multi-unit setting, we expect more variation in the bids in the uniform-price auction than in the pay-as-bid auction for a given multi-dimensional value. We reach this conclusion because the optimal bid is unique in the pay-as-bid auction but not in the uniform-price auction.6 Under a natural selection of minimax-loss bids in the uniform-price auction, the optimal bids in the uniform-price auction are both higher and steeper than in the pay-as-bid auction. In general, absent the selection we take, the bids in the uniform-price auction may not be uniformly higher than in a pay-as-bid auction. It can be optimal to bid 0 for high quantities, echoing low-revenue "collusive" equilibria of the uniform-price auction. On the other hand, there is no optimal bid in the uniform-price auction that is uniformly below the optimal bid in the pay-as-bid auction. In an example (Section 3), we show that the optimal bid in the pay-as-bid auction may decrease in the bidder's value.7 In the constrained setting, there is a unique minimax-loss bid in both the uniform-price and the pay-as-bid auction. Hence, one should not expect more variation in the bids for a given value across the two auction formats. When the marginal values are sufficiently constant, the bids in the two auctions cannot be ranked uniformly: the first bid is higher in the uniform-price auction, but the optimal bid drops to zero at a lower quantity in the uniform-price auction. We also provide an example in which the bids can be ranked unambiguously, with higher bids in the uniform-price auction. Using our characterization of the optimal constrained bid function in the case of constant marginal values, a final testable prediction is that the minimax bids are evenly spaced in the quantity space in the pay-as-bid auction but concentrated on intermediate quantities in the uniform-price auction. In general, if one knew the bidders' values, one could test whether they use minimax-loss bids. Usually, however, the bidders' values are unobserved, and observed bids are instead used to estimate them. Our uniqueness results and characterizations of the optimal bids in this setting lead to point identification of the values and a simple estimation procedure.
In the pay-as-bid auction, in any of the three settings, the (multi-dimensional) bid is found by equalizing conditional maximal regret across all units; the conditional maximal regret is the higher of the regret from bidding too high and the regret from bidding too low. In the uniform-price auction, in the constrained and unconstrained settings, the minimax-loss bid is found by considering the iso-loss curves, the curves that trace a certain level of over- and underbidding loss in the bid-quantity space. In the unconstrained case, the upper and the lower iso-loss curves are tangent at the loss-minimizing bid, and look similar to (strictly concave) budget constraints and indifference curves in standard consumer theory. The points of tangency are the only bids that are pinned down; the only requirement for minimax-loss bids at other quantities is that they lie between the two curves. A similar logic applies to the multi-unit case, so that there is also no unique loss-minimizing bid. Surprisingly, this nonuniqueness does not extend to the constrained case, in which there is a unique minimax-loss bid; this bid minimizes the difference between the lower and upper iso-loss curves subject to the number of allowed bid points. The iso-loss curves provide an intuitive, graphical way of understanding the optimal bids.

6 The multiplicity of optimal bids in the uniform-price auction is reminiscent of the multiplicity of Bayesian Nash equilibria [Klemperer and Meyer, 1989; Back and Zender, 1993; Ausubel et al., 2014; Burkett and Woodward, 2020b]. The type of multiplicity is starkly different, however. Multiple loss-minimizing bids mean that the minimax-loss best-reply correspondence is multi-valued, while multiple Bayesian Nash equilibria require the coordination of the bidders on one equilibrium. A common assumption in empirical work is that the data is generated by the same equilibrium; this assumption would not lead to the testable prediction that there is more variation in the bids in the uniform-price auction.

7 McAdams [2007] provides examples of a uniform-price auction where Bayesian Nash equilibrium bids may decrease in the bidder's value due to risk aversion and affiliated values. We provide an example for the pay-as-bid auction, using a new rationale.
Ex post payments are not generally comparable between auction formats. For small quantities, the high bids of the uniform-price auction yield higher revenue than the low bids of the pay-as-bid auction, but for large quantities the low bids of the uniform-price auction yield lower revenue than the aggregate payment of both high and low bids in the pay-as-bid auction. 8 Our uniqueness results suggest that a seller interested in certainty over the distribution of revenue may prefer the pay-as-bid auction: in both auction formats the distribution of ex post revenue depends on the distribution of private information, but in the uniform-price auction it also depends on the method bidders use to select among optimal bids. 9 On the other hand, our results also show that selection ambiguity can be disposed of by limiting bidders to a finite number of self-selected bid points, hence in practice the distribution of value-relevant private information is the key determinant of expected revenue.
Some of our results, such as bids tending to be higher and steeper in the uniform-price auction and the ambiguous revenue comparison, are in line with previous theoretical and empirical work [Ausubel et al., 2014; Burkett and Woodward, 2020a; Pycia and Woodward, 2023; Barbosa et al., 2022]. Hence, central findings in the common-prior extreme of Bayesian Nash equilibrium also hold in the opposite extreme of maximal uncertainty. We find it reassuring that the two models lead to the same qualitative conclusions, as it suggests that the standard Bayesian approach and our robust approach are complementary. The Bayesian Nash equilibrium approach convinces with its internal consistency of beliefs, while our robustness approach is more tractable in very complicated (e.g., asymmetric and multi-dimensional) settings. Yet the qualitative insights of the two modeling frameworks largely coincide. A novel comparison of the auction formats shows that minimax loss is lower in the uniform-price than in the pay-as-bid auction, suggesting that it is "easier to get it right" in the uniform-price auction.

Savage [1951] introduced the minimax loss (regret) decision criterion for statistical decision problems. Since then it has been applied in econometrics [Manski, 2021], mechanism design [Bergemann and Schlag, 2008, 2011; Shmaya, 2019, 2021], operations research [Perakis and Roels, 2008; Besbes and Zeevi, 2011], and more generally in strategic settings. Our paper belongs to the latter category. An early paper analyzing games with minimax regret as the players' decision criterion was Linhart and Radner [1989], who study the minimization of worst-case regret in bargaining. Parakhonyak and Sobolev [2015] consider Bayesian firms best responding to consumers whose search rules for the lowest price are derived from worst-case regret minimization. Renou and Schlag [2010], Halpern and Pass [2012], Schlag and Zapechelnyuk [2019], and Kasberger [2022] propose solution concepts for loss (regret) minimizing players.
Applying the minimax loss (regret) decision criterion to strategic situations requires the specification of the player's perspective. A first possibility takes an ex post perspective and asks what the optimal action would have been if the realized opponent actions were known; this is the ex post regret framework used in most of the existing literature [Stoye, 2011; Bergemann and Schlag, 2011]. An alternative approach takes an interim perspective. Players are uncertain about the distribution of actions (or states), and the loss of an action is the difference between the expected payoff of best responding to the distribution and the expected payoff of the chosen action. The interim perspective is also adopted in the Bayesian approach, where players best respond to (their belief about) the distribution of competing actions. As not even the Bayesian approach delivers ex post optimality in games of incomplete information, we prefer the interim perspective. Moreover, the interim but not the ex post perspective allows one to meaningfully incorporate belief restrictions as in Kasberger and Schlag [2022] and Kasberger [2022]. Following Schlag and Zapechelnyuk [2021] and Kasberger and Schlag [2022], we refer to the interim concept as loss and to the ex post equivalent as regret.

We introduce the model in the next section. Section 3 illustrates our approach and some findings in the simple two-unit case. Section 4 contains key theoretical results for the analysis of minimax loss in pay-as-bid and uniform-price auctions, which are applied in Sections 5 and 6 to analyze the multi-unit and bidpoint-constrained cases, respectively. Section 7 concludes. Proofs, calculations, the analysis of the unconstrained case, and an analysis of the uniform-price auction with a first-rejected-bid pricing rule are provided in the appendix.
Model
We consider an auction for quantity Q > 0 of a perfectly divisible, homogeneous good. There are n ≥ 2 bidders participating in the auction. Buyer i, i ∈ {1, . . . , n}, has marginal value v_i : [0, Q] → R₊, where v_i(q) is their marginal value for quantity q. We assume that marginal values are weakly decreasing, so that v_i(q) ≥ v_i(q′) whenever q ≤ q′. For notational simplicity we assume that bidders have a strictly positive value for each unit, hence v_i(Q) > 0.10 Bidder i submits a weakly decreasing bid function b_i : [0, Q] → R₊. After observing the bid profile (b_j), j = 1, . . . , n, the auctioneer computes a market-clearing price p⋆ ∈ [p_FRB, p_LAB], where the prices p_LAB and p_FRB are, respectively, the last bid accepted and the first bid rejected.11 All bids strictly above the market-clearing price p⋆ are awarded, and all bids strictly below the market-clearing price are rejected. When there are multiple bids placed at the market-clearing price, ties are broken randomly.12

Bidders are risk neutral. If a bidder with value v_i receives q_i units and makes transfer t_i, their utility is

û(q_i, t_i; v_i) = ∫₀^{q_i} v_i(x) dx − t_i.

We consider two common auction formats. In a pay-as-bid (or discriminatory) auction, transfers are equal to the sum of bids for received units, t_i = ∫₀^{q_i} b_i(x) dx. In a uniform-price auction, transfers are equal to the market-clearing price times the number of units received, t_i = p⋆ · q_i. Here q_i and t_i are functions that map, according to the auction rules, the bidders' bids to bidder i's quantity q_i and transfer t_i, respectively.13

10 Our results remain valid when bidders do not strictly demand all units, provided we replace aggregate supply Q with the supremum of all quantities for which marginal value is strictly positive, Q̄_i = sup{q : v_i(q) > 0}. Additionally, if Q̄_i < Q, all results obtain in the limit with values v_i(q) + ε, letting ε ↓ 0.

11 See Burkett and Woodward [2020a]. Treasury auctions frequently apply last-accepted-bid pricing (e.g., the United States and Switzerland) while theoretical analyses frequently study first-rejected-bid pricing [Ausubel et al., 2014].

12 As long as all bids strictly above the market-clearing price are awarded, the precise tiebreaking rule does not affect our results.
Loss and regret
Given a distribution of opponent bids B−i, bidder i's loss from bidding b_i instead of the interim-optimal bid is

L(b_i; B−i, v_i) = sup over b′_i of E_{B−i}[û(b′_i)] − E_{B−i}[û(b_i)].

Loss measures the difference between expected utility given bid b_i and the utility obtainable by optimizing the submitted bid with respect to distribution B−i. For example, when bid b_i is a best response to distribution B−i, loss is zero. Loss is evaluated from an interim perspective; the equivalent ex post concept is regret,

R(b_i; b−i, v_i) = sup over b′_i of û(b′_i; b−i) − û(b_i; b−i).

Regret measures how much additional utility the bidder could receive if they had known the bids their opponents submitted prior to choosing their own bid. A utility-maximizing bidder with perfect foreknowledge of their opponents' bids will have zero regret.
If bidder i knew the true distribution of opponent bids B−i, she would evaluate potential bids by standard expected utility. However, in our model bidders face ambiguity regarding the true distribution B−i and know only that B−i ∈ B, where B is a set of feasible distributions over opponent bids. In the presence of this ambiguity, bidder i evaluates potential bids according to the maximum loss generated by any feasible distribution of opponent bids; the optimal bid b⋆ minimizes this loss,

b⋆ ∈ arg inf over b_i of sup over B−i ∈ B of L(b_i; B−i, v_i).

We refer to b⋆ as bidder i's minimax-loss or optimal bid.14 We focus on the case of maximal uncertainty, in which B contains all joint distributions on feasible bid functions; i.e., all distributions over n − 1 weakly-decreasing functions mapping [0, Q] to R₊. Note that B is rich enough to include uncertainty about the number of bidders and supply.15

We offer a descriptive and a prescriptive interpretation of minimax-loss bids. From a prescriptive perspective, a practical advantage of our non-Bayesian approach is that the bids are completely prior-free, i.e., they do not depend on the other bidders' value distributions and strategies. All a bidder needs to know is their willingness-to-pay, hence the bids are robust because the bidder need not worry about misspecified beliefs. Indeed, if any bid distribution is deemed possible, then in particular the actual distribution is possible. Kasberger and Schlag [2022] illustrate empirically that loss-minimizing bids perform well in first-price auctions despite bidders having very coarse beliefs about competitors' behavior. On the other hand, group decision-making provides a descriptive motivation for minimax loss. Suppose a corporation tasks a team with finding the right bid. Based on information learned after the auction, the executive board or a rival colleague might criticize the bidding team for having missed an opportunity, and the bidding team may want to preemptively defend against such a critique. By selecting a minimax-loss bid the bidding team can claim, "Your alternative bid would have been worse than our bid had there been this other bid distribution. This bid distribution was a real possibility." The minimax bid is then robust to complaints that appeal to the materialized bid distribution.16 Minimax bids are a way to justify the choice, as an (undisputed) counterfactual case can be presented so that the minimax bid was the compromise between the two cases.17

Our subsequent analysis is simplified by the following observation, reducing loss (an interim value taken over beliefs) to a pointwise objective over potential allocations.

Observation 1 (Reduction to aggregate demand). When maximizing loss, it is sufficient to consider the distribution of aggregate opponent demand for each quantity q ∈ [0, Q].

Bidder i's ex post utility is unaffected by the specific bids submitted by their opponents, provided that the aggregate demand curve of their opponents remains fixed. Moreover, from a bidder's perspective, it makes no difference whether the aggregate demand of the opponents is considered, or if residual supply is considered (as supply Q is constant). Hence, Observation 1 states that maximizing loss over the set of feasible joint distributions of opponent bids can be replaced by maximizing loss over the set of feasible aggregate opponent demand curves (i.e., residual supply curves). Importantly, in our subsequent analysis we do not need to consider the number of opponents bidder i faces: it is sufficient to consider an arbitrary demand curve, independent of its source. For this reason our results depend on bidder i alone.

13 Integrability of B−i is not a constraint on our results, since in all auction formats û is bounded below by 0 and above by Q · v_i(0).

14 Since loss is bounded below by zero, the infimum of maximum loss always exists; as the arg inf of maximum loss, b⋆ is the limit of a sequence of bids approximating minimax loss, and is guaranteed to exist by compactness of the bid space.

15 There are bid distributions in B that put all the mass on bidder j bidding zero, i.e., b_j(q) = 0 for all q. This effectively reduces the number of bidders, so that n is merely an upper bound on the number of bidders. Our model can also be understood as featuring (residual) supply uncertainty: let Q be the upper bound of the support of supply and reduce supply through other bidders that demand units at prohibitively high prices, above v_i(0).

16 Savage [1951] also suggests group decision making as a justification for the minimax principle. In his story group members have different subjective probability assessments and the minimax principle seeks to keep the greatest "violence" done to anyone's opinion to a minimum. In contrast, we interpret the minimax as a way to defend against ex post complaints.

17 If the bid was chosen to maximize the payoff guarantee, then many opportunities might indeed be missed. Thus, maxmin expected utility is not robust to complaints about missed opportunities.
Illustrative example
Before analyzing the general case, we first illustrate our analytical approach in a two-unit example tied to our multi-unit results (Section 5). Bidder i has value v_i1 for their first unit and a marginal value of v_i2 for their second unit. We assume marginal values are decreasing and non-negative, i.e., v_i1 ≥ v_i2 ≥ 0. The bidder can submit two bids (b_i1, b_i2). Our Lemma 1 reduces the interim loss-minimization problem to an ex post regret-minimization problem; we therefore consider the distinct outcomes which might maximize regret.
Pay-as-bid auctions
In the pay-as-bid auction, there are three relevant outcomes: the bidder wins either zero, one, or two units. We consider these outcomes on a case-by-case basis.
Case 1: zero units. Conditional on winning no units, bids are most suboptimal if the bidder could have won as many units as they wanted at a price just above their first-unit bid. In this case, loss equals

(v_i1 − b_i1) + (v_i2 − b_i1)₊.    (1)

Note that the value v_i2 needs to be sufficiently high so that bidder i actually wants to win two units at the "just-lost" price b_i1.
Case 2: one unit. Conditional on winning one unit, the bidder knows that they have overbid on the first unit and could reduce their bid. The worst-case overpayment occurs when the opponent's bid for their first unit is just above the bidder's bid for their second unit. In this case, the bidder could improve her utility not only by decreasing their bid for the first unit, but also by slightly increasing their bid for the second unit, and loss is

(b_i1 − b_i2) + (v_i2 − b_i2)₊.

Case 3: two units. Conditional on winning two units, bids are most suboptimal when the bidder could have won both units at a price of (almost) zero, so that all payment is wasted; loss is then b_i1 + b_i2.

The minimax-loss bid balances the conditional regret of all three outcomes: underbidding regret conditional on losing the auction (winning zero units), (underbidding) regret conditional on winning one unit, and overbidding regret conditional on winning two units. Maximal loss is

max{ (v_i1 − b_i1) + (v_i2 − b_i1)₊ , (b_i1 − b_i2) + (v_i2 − b_i2)₊ , b_i1 + b_i2 },

and it is minimized by equalizing the three expressions. Thus the minimax-loss bid vector in the pay-as-bid auction is

(b_i1^PAB, b_i2^PAB) = (v_i1/2 − v_i2/6, v_i2/3) if v_i2 ≤ 3v_i1/7, and (v_i1/3 + 2v_i2/9, v_i2/3) otherwise.

The case distinction is due to the value for the second good being below or above the bid for the first; i.e., the term (v_i2 − b_i1)₊ in equation (1). Figure 1 illustrates the bidding functions as a function of v_i2, v_i2 ∈ [0, 1], with v_i1 normalized to 1. If v_i2 = 0, then the minimax bid is b_i1^PAB = 1/2, which is as in the first-price auction for a single good [Kasberger and Schlag, 2022]. The bid b_i1^PAB decreases in v_i2 for v_i2 ≤ 3/7. This antitonicity arises because increasing v_i2 in this range increases the loss conditional on receiving a single unit, hence the bid b_i1 falls so that loss is equalized across outcomes. For values above 3/7, both bids increase in v_i2, though the second bid b_i2^PAB increases more quickly than b_i1^PAB. By corollary, the spread between the two bids uniformly decreases in v_i2.
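The following sketch verifies this equalization numerically; the piecewise bid formulas are those displayed above (our reconstruction of the garbled display), with v_i1 normalized to 1 in the example calls.

```python
# Two-unit pay-as-bid check: at the minimax-loss bid, the three
# conditional regrets coincide. A sketch under the reconstruction above.

def pab_bids(v1: float, v2: float) -> tuple[float, float]:
    """Minimax-loss bid vector (b1, b2) in the two-unit pay-as-bid auction."""
    if v2 <= 3 * v1 / 7:
        return v1 / 2 - v2 / 6, v2 / 3
    return v1 / 3 + 2 * v2 / 9, v2 / 3

def pab_regrets(b1, b2, v1, v2):
    r0 = (v1 - b1) + max(v2 - b1, 0.0)   # won zero units
    r1 = (b1 - b2) + max(v2 - b2, 0.0)   # won one unit
    r2 = b1 + b2                          # won two units
    return r0, r1, r2

for v2 in (0.0, 0.3, 0.6, 1.0):
    print(v2, pab_regrets(*pab_bids(1.0, v2), 1.0, v2))
```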
Uniform-price auctions
As in the pay-as-bid auction, three outcomes are focal when evaluating loss in the (last-accepted-bid) uniform-price auction: the bidder either receives zero, one, or two units. We consider these outcomes on a case-by-case basis.
Case 1: zero units. As in the pay-as-bid auction, if the bidder wins zero units they know that they have underbid two opponent bids. Their bids are most suboptimal if they could marginally increase their bid and win as many units as they desire, in which case loss is

(v_i1 − b_i1) + (v_i2 − b_i1)₊.

Figure 1: First- and second-unit bids in the pay-as-bid and uniform-price auctions, when the bidder demands two units.
Case 2: one unit. Conditional on winning one unit, the bidder overbids if b_i1 sets the market-clearing price and underbids if the market-clearing price is just above b_i2. In this case, loss is

max{ b_i1 , v_i2 − b_i2 }.

Case 3: two units. When the bidder wins two units, they set the market-clearing price. In this case, bids are most suboptimal when the bidder could have reduced bids to (almost) zero without losing any units; then loss is 2b_i2. Maximal loss is then

max{ (v_i1 − b_i1) + (v_i2 − b_i1)₊ , b_i1 , v_i2 − b_i2 , 2b_i2 }.

Due to the different signs, maximal loss is minimized by equalizing at least some of the conditional losses; this contrasts with the pay-as-bid auction, in which maximal loss is minimized by equalizing all of the conditional losses. Pairwise equalization of maximum loss gives a minimax-loss bid vector in the uniform-price auction,

b_i1^LAB = v_i1/2 if v_i2 ≤ v_i1/2, and b_i1^LAB = (v_i1 + v_i2)/3 otherwise, with b_i2^LAB = v_i2/3.

The first bid can be found by equalizing the underbidding regret conditional on losing the auction and the overbidding regret conditional on winning one unit. The second bid can be found by equalizing the underbidding regret conditional on winning one unit, v_i2 − b_i2, and the overbidding regret conditional on winning two units, 2b_i2. While minimax-loss bids must minimize cross-conditional regret for some unit, this will not in general determine the minimax-loss bid for all units. With demand for two units, worst-case loss minimization uniquely determines the bid for the first unit, but the bid for the second unit need only lie within the bounds implied by minimax loss in the uniform-price auction. The range of feasible minimax-loss bids in the uniform-price auction is depicted in Figure 1. The different shades distinguish minimax-loss bids above and below the marginal value.
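The logic can again be checked numerically. The sketch below assumes the reconstruction above: the first bid equalizes the zero-unit underbidding regret with the one-unit overbidding regret, and the second bid may lie anywhere that keeps both v_i2 − b_i2 and 2b_i2 weakly below minimax loss.

```python
# Two-unit uniform-price (last-accepted-bid) sketch, under the
# reconstruction above; minimax loss equals the first-unit bid.

def lab_first_bid(v1: float, v2: float) -> float:
    return v1 / 2 if v2 <= v1 / 2 else (v1 + v2) / 3

def lab_b2_bounds(v1: float, v2: float) -> tuple[float, float]:
    """Interval of second-unit bids consistent with minimax loss."""
    b1 = lab_first_bid(v1, v2)
    loss = b1                      # minimax loss equals the first-unit bid
    low = max(v2 - loss, 0.0)      # keeps underbidding regret below the loss
    high = min(loss / 2, b1)       # keeps overbidding regret below the loss
    return low, high

print(lab_first_bid(1.0, 0.3), lab_b2_bounds(1.0, 0.3))   # 0.5, (0.0, 0.25)
```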
Loss in auctions for homogeneous goods
We begin by establishing general properties of the loss-minimization problem in the pay-as-bid and uniform-price auctions. In the case of maximal uncertainty, bidders believe every possible distribution of opponent bids is feasible. Following Observation 1, this is equivalent to bidders believing that every distribution of residual supply curves is feasible. In particular, bidders believe that degenerate distributions on specific aggregate demand curves are feasible, which implies that maximum loss is equivalent to maximum regret. This is a consequence of the linearity of bidder preferences, and is not specific to the analysis of auctions or other features of our model.
Lemma 1 (Reduction to maximum regret). Under maximal uncertainty, maximizing loss is equivalent to maximizing regret. That is, for all values v_i and bids b_i,

sup over B−i ∈ B of L(b_i; B−i, v_i) = sup over b−i of R(b_i; b−i, v_i).

Following Lemma 1, bidder i's loss maximization problem can be identified with a regret maximization problem. As noted in Observation 1, bidder i's utility depends only on the aggregate demand curve submitted by bidders −i and does not depend directly on any other bidder's specific bid. We therefore consider the set of feasible demand functions S,

S = { S : [0, Q] → R₊ | S weakly decreasing }.

Abusing notation, let q_i(b_i, S) be the quantity bidder i receives when they submit bid b_i and face aggregate demand curve S,18 and let R(b_i; S, v_i) be bidder i's regret when submitting bid b_i against opponent aggregate demand S.

18 Each opponent j ≠ i submits a decreasing bid b_j : [0, Q] → R₊, so the aggregate demand of bidder i's opponents is a function mapping [0, (n − 1)Q] to R₊. However, because there are only Q units available, demand is only relevant for quantities q ∈ [0, Q].
The regret maximization problem is complicated by the dependence of bidder i's market allocation q_i on both their own bid b_i and opponent demand S. To simplify the problem, we decompose the regret maximization problem into the related problem of maximizing conditional regret. Given any quantity q ∈ [0, Q], bidder i's conditional regret from winning q units is

R_q(b_i; v_i) = sup{ R(b_i; S, v_i) : S ∈ S, q_i(b_i, S) = q }.

Given a bid b_i and an opponent demand curve S, bidder i's quantity allocation q_i(b_i, S) is deterministic. Since maximum loss is identical to maximum regret, which is derived ex post after opponent demand is realized, it follows that maximum loss is the highest conditional regret from receiving any quantity,

sup over B−i ∈ B of L(b_i; B−i, v_i) = sup over q ∈ [0, Q] of R_q(b_i; v_i).

Conditional regret forms the basis of our subsequent results on bidding in pay-as-bid and uniform-price auctions under maximal uncertainty.
Pay-as-bid auctions
To develop intuition for loss minimization in the pay-as-bid auction, consider the potential sources of regret in a canonical single-unit first-price auction. Ex post, bids in single-unit discriminatory auctions are either too high (because the bidder strictly outbid the second-highest bidder) or too low (because the bidder underbid the highest bidder, whose bid was below the bidder's value).19 This same intuition applies pointwise in multi-unit pay-as-bid auctions: the bidder frequently would prefer to increase their bid for large quantities and decrease their bid for small quantities. We use this observation to pin down conditional regret in the pay-as-bid auction.
If bidder i submits bid b_i and obtains quantity q, they know that the market-clearing price is p⋆ ≤ b_i(q). Their regret is at least their overpayment for units they received, plus the utility foregone by underbidding for units they value above the market-clearing price,

∫₀^q (b_i(x) − p⋆) dx + ∫_q^Q (v_i(x) − p⋆)₊ dx.

This regret would be realized if, for example, all opponents submitted flat bids at the price p⋆. This expression is strictly decreasing in p⋆, hence bidder i's conditional regret is at least

R̲_q^PAB(b_i; v_i) = ∫₀^q (b_i(x) − b_i(q)) dx + ∫_q^Q (v_i(x) − b_i(q))₊ dx.    (2)

Because R̲_q^PAB is the regret the bidder has in the case in which they wish they had bid slightly more for larger quantities, we refer to R̲_q^PAB as underbidding regret.20

19 In the case in which bids are neither too high nor too low, regret is zero. Typically, maximal regret will be nonzero.

20 For notational simplicity we define b_i(q) = 0 for q > Q.
Alternatively, bidder i might be able to obtain the same allocation by bidding just above zero for all units. This will be the case when their opponents, in aggregate, submit extremely high bids for Q − q units and zero bids for all remaining units. In this case all nonzero payment is wasted, and regret is

R̄_q^PAB(b_i) = ∫₀^q b_i(x) dx.

Because R̄_q^PAB is the regret the bidder has in the case in which they wish they had bid nearly zero for all units, we refer to R̄_q^PAB as overbidding regret.
Because maximum loss is equal to maximum regret, and ex post regret is obtained at some allocation, maximal loss may be identified with maximizing conditional regret.
Lemma 2 (Maximum loss in pay-as-bid). In the pay-as-bid auction, maximal loss given bid b_i is

sup over B−i ∈ B of L(b_i; B−i, v_i) = sup over q ∈ [0, Q] of max{ R̲_q^PAB(b_i; v_i), R̄_q^PAB(b_i) }.

Moreover, overbidding regret R̄_q^PAB(b_i) is weakly increasing in q, and it never exceeds maximal underbidding regret. Lemmas 1 and 2 together imply that maximum loss is the supremum of underbidding regret, taken over all quantities q.

Corollary 1 (Maximum loss in pay-as-bid). In the pay-as-bid auction, maximal loss given bid b_i is

sup over B−i ∈ B of L(b_i; B−i, v_i) = sup over q ∈ [0, Q] of R̲_q^PAB(b_i; v_i).
Uniform-price auctions
In the uniform-price auction, bids above the market-clearing price are relevant only to the extent that they guarantee a unit is awarded; they do not otherwise affect the bidder's utility. This is in contrast to the pay-as-bid auction, where bids above the market-clearing price are paid whenever the unit is awarded. We first establish expressions for underbidding and overbidding regret in the last-accepted-bid uniform-price auction,21 in line with our analysis of overbidding and underbidding regret in pay-as-bid auctions. The market-clearing price is the last accepted bid, p⋆ = p_LAB. When bidder i receives quantity q, the market-clearing price must be weakly below b_i(q). The lower is the market-clearing price, the higher is underbidding regret, hence bidder i's underbidding regret R̲_q^LAB(b_i; v_i) is evaluated at the lowest market-clearing price consistent with the allocation q. As in the pay-as-bid auction, underbidding regret accounts not only for the fact that the bidder might regret not bidding just above the market-clearing price, but also for the fact that the bidder might affect their own transfer. In particular, if the bidder sets the market-clearing price at a quantity just to the left of a discontinuity in their bid, they can reduce their bid and also the market-clearing price without affecting their allocation. Alternatively, bidder i might be able to obtain the same allocation by bidding just above zero for all units. This will be the case when their opponents submit high bids for Q − q units and submit zero bids for all remaining units. In this case all nonzero bids are wasted, and regret is higher the higher is the market-clearing price, hence overbidding regret is

R̄_q^LAB(b_i) = q · b_i(q).

This differs from overbidding regret in the pay-as-bid auction, R̄_q^PAB, since in the uniform-price auction only the marginal bid is relevant. The conditional regret for any quantity q is

R_q^LAB(b_i; v_i) = max{ R̲_q^LAB(b_i; v_i), R̄_q^LAB(b_i) }.

Because maximum loss is equal to maximum regret, and ex post regret is obtained at some allocation, maximal loss may be identified with maximizing conditional regret.
Lemma 3 (Maximum loss in uniform-price). In the uniform-price auction, maximal loss given bid b_i is

sup over B−i ∈ B of L(b_i; B−i, v_i) = sup over q ∈ [0, Q] of max{ R̲_q^LAB(b_i; v_i), R̄_q^LAB(b_i) }.

21 An analysis of the first-rejected-bid uniform-price auction can be found in Appendix A. The analyses differ only in the multi-unit case.
Minimax-loss bids in multi-unit auctions
In most practical applications the homogeneous commodity up for auction is not perfectly divisible. We first consider the case in which bidder i can bid on M discrete units: their value for unit k is v_ik and their bid for unit k is b_ik, with v_i1 ≥ v_i2 ≥ · · · ≥ v_iM ≥ 0, and q_k denotes the quantity consisting of the first k units. For ease of exposition, we define q_i0 = 0 and b_{i,M+1} = 0.
We now analyze minimax-loss bidding in the pay-as-bid and uniform-price auctions in this context. Most proofs for this section may be found in Appendix C.
Pay-as-bid auctions
For quantities q ∈ (q_{k−1}, q_k), underbidding regret (2) is weakly decreasing in q: increasing q does not affect the value of the integral ∫₀^q (b_i(x) − b_i(q)) dx, since b_i is constant on this range; on the other hand, increasing q shrinks the bounds of integration of ∫_q^Q (v_i(x) − b_i(q))₊ dx, and since the integrand is weakly positive it follows that underbidding regret R̲_q^PAB(b_i; v_i) is weakly decreasing on this range.
An immediate implication is that maximum underbidding regret is obtained at some multi-unit quantity q_k. Then Corollary 1 implies

sup over B−i ∈ B of L(b_i; B−i, v_i) = max over k ∈ {0, . . . , M} of R̲_{q_k}^PAB(b_i; v_i), where R̲_{q_k}^PAB(b_i; v_i) = Σ_{j ≤ k} (b_ij − b_{i,k+1}) + Σ_{j > k} (v_ij − b_{i,k+1})₊.

Underbidding regret for quantity q_k increases in the bid for quantities q_{k′} ≤ q_k, decreases in the bid for quantity q_{k+1}, and is unaffected by the bid for quantities q_{k′} > q_{k+1}. It follows that if b_i is an optimal bid vector then underbidding regret must be constant across all quantities q_k.
Theorem 1 (Equal conditional regret in multi-unit pay-as-bid). If b_i is a minimax-loss bid vector in the multi-unit pay-as-bid auction, then R̲_{q_k}^PAB(b_i; v_i) = R̲_{q_{k′}}^PAB(b_i; v_i) for all k, k′ ∈ {0, 1, . . . , M}.

Theorem 1 gives a straightforward method for computing minimax-loss bids: minimize conditional regret for any quantity, conditional on equal conditional regret across all quantities. Although computationally straightforward, optimal bids do not have a general analytical form. The definition of conditional regret contains a summation over all units which are valued more than a given bid, and the extent of this summation depends not only on the bidder's values but also on the prospective bid, complicating the relationship between bid and loss.
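As a sketch of this computation, the equal-regret system can be handed to a generic root finder; the function names are ours, and the starting point v/2 is a heuristic, not part of the characterization.

```python
# Numerical computation of multi-unit pay-as-bid minimax-loss bids by
# equalizing underbidding regret across all quantities (Theorem 1).
import numpy as np
from scipy.optimize import root

def underbid_regret(b: np.ndarray, v: np.ndarray, k: int) -> float:
    """Underbidding regret at quantity q_k, with the convention b_{M+1} = 0."""
    b_ext = np.append(b, 0.0)
    return float(np.sum(b[:k] - b_ext[k])
                 + np.sum(np.maximum(v[k:] - b_ext[k], 0.0)))

def pab_minimax_bids(v: np.ndarray) -> np.ndarray:
    """Solve the equal-regret system for the bid vector."""
    M = len(v)
    def gaps(b):
        r = [underbid_regret(b, v, k) for k in range(M + 1)]
        return [r[k] - r[0] for k in range(1, M + 1)]
    return root(gaps, x0=v / 2).x

print(pab_minimax_bids(np.array([1.0, 0.6])))  # ~ [0.4667, 0.2]
```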
The minimax-loss bid is unique for any multi-dimensional valuation. Uniqueness particularly simplifies the estimation of private values if one believes that the observed bid data are generated by bidders playing minimax-loss bids under maximal uncertainty. In this case, one can infer bidder i's value v_iM from the bid on the Mth unit (v_iM = (M + 1)·b_iM^PAB). Marginal values for units k < M can be inferred by recursively solving Equation (3) for the unknown v_ik, starting with k = M − 1. Note that this estimation is straightforward, and involves solving only a sequence of linear equations. In contrast to approaches relying on Bayesian Nash equilibrium as the data-generating model, estimating values from minimax-loss bids does not require the difficult estimation of the (opponent) bid distribution. From a normative perspective, a unique bid is attractive as it saves one from further assessing the relative merits of all minimax-loss bids.
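Continuing the previous sketch (same imports, and reusing underbid_regret and pab_minimax_bids), the estimation step can be illustrated by solving the same system with the roles of bids and values reversed.

```python
def pab_estimate_values(b: np.ndarray) -> np.ndarray:
    """Recover marginal values from observed minimax-loss bids by solving
    the same equal-regret system, now treating v as the unknown. A sketch;
    it presumes the data were generated by minimax-loss bidding."""
    M = len(b)
    def gaps(v):
        r = [underbid_regret(b, np.asarray(v), k) for k in range(M + 1)]
        return [r[k] - r[0] for k in range(1, M + 1)]
    return root(gaps, x0=2 * b).x

b_obs = pab_minimax_bids(np.array([1.0, 0.6]))
print(pab_estimate_values(b_obs))  # ~ [1.0, 0.6]
```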
Observation 2. The minimax-loss bid vector is strictly below marginal values wherever v_ik > 0. Let k̄ = max{k : v_ik > 0}. Applying Corollary 2 gives b_{ik̄}^PAB = v_{ik̄}/(k̄ + 1) < v_{ik̄}, so the bid for the last positively-valued unit is below the value for this unit. If there were k with b_ik = v_ik > 0, there would be a maximal such quantity; since values are weakly decreasing in quantity, this yields a contradiction.

Observation 3. The minimax-loss bid vector is strictly decreasing in quantity wherever v_ik > 0. Otherwise, there is k such that b_ik = b_{i,k+1} and v_ik > b_ik (see Observation 2). In this case, the left-hand side of (3) is zero and the right-hand side is strictly positive. Increasing b_ik increases the left-hand side of (3) and decreases the right-hand side, and it follows that b_ik^PAB > b_{i,k+1}^PAB whenever v_ik > 0.

The tractability of the minimax-loss bid stands in stark contrast to the typical intractability of the Bayesian equilibrium in the pay-as-bid auction. As discussed in the introduction, Bayesian equilibrium characterizations exist only in relatively simple (usually complete information or one-parameter) economic settings. On the other hand, it is straightforward to numerically compute minimax-loss bids. Equation (3) provides an implicit definition of minimax-loss bids in the pay-as-bid auction; and although this equation cannot in general be solved in closed form, the bid representation implied by Theorem 1 is straightforward to compute numerically. The following example shows that an analytical solution is available when the marginal values are sufficiently flat.
Example 1. Suppose that there are M units available for auction, and that bidder i's value vector is v_i. Assume that bidder i's value vector is relatively flat, so that even the first-unit bid lies weakly below the last-unit value, b_i1 ≤ v_iM.22 Following Corollary 2, the minimax-loss bid for unit M is b_iM^PAB = v_iM/(M + 1). Note that the bid for unit k can be written as

b_ik^PAB = (v_ik + M·b_{i,k+1}^PAB)/(M + 1).

Thus for k < M, minimax-loss bids in the pay-as-bid auction can be written as

b_ik^PAB = (1/(M + 1)) Σ_{j=k}^{M} (M/(M + 1))^{j−k} v_ij.

The minimax-loss bid in this pay-as-bid auction is compared to its uniform-price equivalent in Figure 2 in Example 2 below.
Uniform-price auctions
We now analyze the multi-unit uniform-price auction, in which bidder i may bid on M discrete units. Following Lemma 3, maximum loss is

max over k ∈ {0, . . . , M} of max{ k·b_ik , Σ_{j > k} (v_ij − b_{i,k+1})₊ }.

That is, maximum loss is a maximum over conditional regrets, which are defined as the higher of overbidding and underbidding regrets for quantity q_k. This can be written equivalently as

max over k ∈ {1, . . . , M} of max{ k·b_ik , Σ_{j=k}^{M} (v_ij − b_ik)₊ },

pairing the overbidding regret at quantity q_k with the underbidding regret at quantity q_{k−1}. The following theorem is immediate.
Theorem 2 (No unique optimal bid in uniform-price). Generically, there is not a unique minimax-loss bid in the multi-unit uniform-price auction unless M = 1.
If there is a unique minimax-loss bid, then overbidding and underbidding regret must be equalized at every quantity. When M = 1, the bidder bids for a single unit, and incentives are as in a first-price auction; Kasberger and Schlag [2022] show that the unique minimax-loss bid in this case is b_i1 = v_i1/2. To obtain sharp predictions for optimal strategies, we introduce a selection from the set of minimax-loss bids. We define a cross-conditional regret minimizing strategy to be one which minimizes the larger of the overbidding regret for unit q_k and the underbidding regret for unit q_{k−1}, which we term cross-conditional regret. By construction cross-conditional regret is independent across bid points; since regret is maximized by cross-conditional regret for some quantity, a cross-conditional regret-minimaxing bid is a minimax-loss bid.
The appeal of cross-conditional regret minimizing bids is that any bid b_ik is justifiable ex post. If another minimax bid were chosen so that the bid for unit k was below the respective cross-conditional regret minimizing bid for that unit, then after winning q_{k−1} units the case can be made that this bid was too low, as it would have been profitable to win more units. Only the cross-conditional regret minimizing bid does not allow such complaints, as the regret of paying too much for q_k units serves as a defense.
Theorem 3 (Cross-conditional regret minimizing bids in uniform-price). The unique cross-conditional regret minimizing bid b_i^LAB is such that for all units k,

k·b_ik^LAB = Σ_{j=k}^{M} (v_ij − b_ik^LAB)₊.    (5)

Figure 2: In the pay-as-bid auction the minimax-loss bid is unique, while in the uniform-price auction any bid in the gray region minimizes maximum loss. The red line is the cross-conditional regret minimizing bid in the uniform-price auction.
Theorem 3 illustrates the nonuniqueness of minimax-loss bids in the uniform-price auction. If b_i^LAB is a cross-conditional regret minimizing bid, equation (5) implies that b_iM^LAB = v_iM/(M + 1). Overbidding regret at quantity q = q_M is R̄_{q_M}^LAB(b_i; v_i) = M·v_iM/(M + 1) and underbidding regret at quantity 0 is R̲_0^LAB(b_i; v_i) ≥ v_i1/2. These are unequal when, for example, v_i1 ≥ 2·v_iM, in which case there cannot be a unique minimax-loss bid.
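Since k·b_ik is increasing and the right-hand side of (5) is decreasing in b_ik, each bid can be computed by bisection, as in the following sketch of the equalization in Theorem 3 (using our reconstruction of equation (5)).

```python
# Cross-conditional regret minimizing bids in the uniform-price auction:
# for each unit k, the overbidding regret k*b_k is equalized with the
# underbidding regret at q_{k-1}, sum_{j>=k} (v_j - b_k)_+.
import numpy as np

def lab_cross_conditional_bids(v: np.ndarray) -> np.ndarray:
    M = len(v)
    bids = np.zeros(M)
    for k in range(1, M + 1):
        lo, hi = 0.0, float(v[0])
        for _ in range(60):                     # bisection on b_k
            mid = (lo + hi) / 2
            under = float(np.sum(np.maximum(v[k - 1:] - mid, 0.0)))
            if k * mid < under:
                lo = mid
            else:
                hi = mid
        bids[k - 1] = (lo + hi) / 2
    return bids

print(lab_cross_conditional_bids(np.array([1.0, 0.3])))  # ~ [0.5, 0.1]
```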
As in the pay-as-bid auction (Example 1), minimax-loss bids in the uniform-price auction have a convenient analytical expression when a bidder's values are relatively constant.
Example 2. Suppose that there are M units available for auction, and that bidder i's value vector is v_i. Assume that bidder i's value vector is relatively flat, so that b_i1 ≤ v_iM.24 Following Theorem 3, the cross-conditional regret minimizing bid is

b_ik^LAB = (1/(M + 1)) Σ_{j=k}^{M} v_ij.    (6)

Figure 2 illustrates these bids, and compares them to their corresponding pay-as-bid bids.
24 Formally, we require $b_{i1} \le v_{iM}$, which in light of equation (6) is equivalent to $\sum_{k'=1}^{M} v_{ik'} \le (M+1)\, v_{iM}$.

Although the bidding function of Theorem 3 cannot be compared to all Bayesian Nash equilibria of the uniform-price auction, it is apparent that it does not resemble "collusive" low-revenue equilibria that are frequently discussed in the literature [Ausubel et al., 2014, Marszalec et al., 2020].25 Indeed, the bid on the last unit is positive in a cross-conditional regret minimizing strategy under maximal uncertainty, while it is zero in the canonical low-revenue Bayesian Nash equilibrium.
Comparison of auction formats
Bids in the uniform-price auction may be higher or lower than in the pay-as-bid auction, and the revenue comparison of the two formats is inherently ambiguous. To demonstrate revenue ambiguity, we first compare the cross-conditional regret minimizing bids in the uniform-price auction to the regret minimizing bids in the pay-as-bid auction.
Comparison 1 (Uniform-price bids above pay-as-bid bids). The unique cross-conditional regret minimizing bid in the multi-unit uniform-price auction is higher than the unique minimax-loss bid in the pay-as-bid auction: $b^{LAB}_{ik} \ge b^{PAB}_{ik}$ for all units $k$.

Although cross-conditional regret minimizing bids in the uniform-price auction are above the unique minimax-loss bid in the pay-as-bid auction, this is not the case for all selections of minimax-loss bids in the uniform-price auction. In the uniform-price auction, underbidding regret for large quantities is necessarily small: uniform pricing implies there is no wedge for overpayment (as there is in the pay-as-bid auction), and there is little utility foregone by not receiving a small number of units. Since conditional regret is the larger of overbidding and underbidding regret, and overbidding regret is increasing in bid, for large quantities there is a conditional regret-minimizing bid which is equal to zero; this zero bid is below the unique minimax-loss bid in the pay-as-bid auction, which is strictly positive for all units the bidder values positively. We explore this further in our analysis of iso-loss curves and minimax-loss bids in bidpoint-constrained auctions in Section 6.
Comparison 2 (Semi-comparability of optimal bids). Suppose $M \ge 2$ units are available. If $b^{LAB}$ is a minimax-loss bid in the multi-unit uniform-price auction, then $b^{LAB} \not< b^{PAB}$: no minimax-loss bid in the uniform-price auction lies everywhere below the minimax-loss bid in the pay-as-bid auction.
However, there is a minimax-loss bid $b^{LAB}$ in the multi-unit uniform-price auction such that $b^{LAB}_{ik} < b^{PAB}_{ik}$ for some unit $k$.

While there is a minimax-loss bid in the uniform-price auction which is not everywhere greater than the minimax-loss bid in the pay-as-bid auction, there is no minimax-loss bid in the uniform-price auction which is everywhere below the minimax-loss bid in the pay-as-bid auction.
We now ask whether the minimax-loss bids are rankable by elasticity (steepness). Previous theoretical work has identified uniform-price bids as more elastic (i.e., steeper) than pay-as-bid bids [Malvey and Archibald, 1998; Ausubel et al., 2014; Pycia and Woodward, 2023] in the Bayesian paradigm. This results from the significant demand-shading incentives for small quantities in the pay-as-bid auction (where bids for small quantities are paid for all larger quantities) and the significant demand-shading incentives for large quantities in the uniform-price auction (where the bid for the marginal quantity sets the price paid on every unit won). This intuition extends to the loss-averse context, provided attention is restricted to cross-conditional regret minimizing bids in the uniform-price auction.
Comparison 3 (Uniform-price bids steeper than pay-as-bid bids). Define the average slope of the bid $b$ to be $\alpha = (b_1 - b_M)/M$. Cross-conditional regret minimizing bids in the multi-unit uniform-price auction are on average steeper than the unique minimax-loss bid in the pay-as-bid auction: $\alpha^{LAB} \ge \alpha^{PAB}$.

Note that the minimax-loss bid for the last unit in the pay-as-bid auction is identical to the cross-conditional regret minimizing bid for the last unit in the uniform-price auction, $b_{iM} = v_{iM}/(M+1)$. The proof of Comparison 2 shows that $b^{LAB}_{i1} > b^{PAB}_{i1}$, which completes the proof of Comparison 3. The comparisons of the bid functions imply that the auctioneer's revenues cannot be generically compared across the two auction formats.
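As a concrete check, take flat values $v_{i1} = v_{i2} = v$ with $M = 2$; using the flat-value expressions reconstructed in Examples 1 and 2 (so this is an illustration under those assumptions, not a computation from the original text), $b^{LAB} = (2v/3,\, v/3)$ and $b^{PAB} = (5v/9,\, v/3)$, giving $b^{LAB}_{i1} = 2v/3 > 5v/9 = b^{PAB}_{i1}$ and average slopes $\alpha^{LAB} = v/6 > v/9 = \alpha^{PAB}$.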
Comparison 4 (Ambiguous revenue). Depending on the joint value distribution, both ex post and expected revenues can be higher in either multi-unit auction format.
Minimax-loss bids do not depend on the distribution of opponent values; the joint value distribution is necessary to compute expected revenue. If the distribution places significant probability on each bidder demanding exactly one unit, the uniform-price auction may yield higher revenue; following Comparison 1, the cross-conditional regret minimizing bid in the uniform-price auction is higher than the unique minimax-loss bid in the pay-as-bid auction, and therefore the ex post transfer to the auctioneer can be higher in the uniform-price auction than in the pay-as-bid auction. Similarly, although the uniform-price bid for quantity Q may be above the pay-as-bid bid for quantity Q, price discrimination in the pay-as-bid auction may yield a higher transfer to the auctioneer than the uniform payment given cross-conditional regret minimizing bids. In total, when the distribution places significant probability on bidders having zero value and a small probability on a bidder demanding the entire market, the pay-as-bid auction will yield higher revenue.
While revenue cannot be ranked across auction formats, bidder loss is uniformly lower in the uniform-price auction than in the pay-as-bid auction. The existence of multiple minimax-loss bids in the uniform-price auction does not affect this comparison, because even when some bids are not uniquely defined, there remains a quantity $q_k$ for which conditional regret $R^{LAB}_{q_k}$ is equal to maximum loss (Lemma 1).
Comparison 5 (Minimax loss). In the multi-unit case, minimax loss is lower in the uniform-price auction than in the pay-as-bid auction, $L^{LAB\star} \le L^{PAB\star}$. This comparison is strict whenever $M > 1$.
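Continuing the flat two-unit illustration above (again relying on the reconstructed flat-value bids, so the exact levels are assumptions), the two loss levels can be computed directly: $L^{LAB\star} = \max_k k\, b^{LAB}_{ik} = 2v/3$, while $L^{PAB\star} = \sum_k b^{PAB}_{ik} = 5v/9 + v/3 = 8v/9 > 2v/3$, consistent with Comparison 5.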
What are the implications of one mechanism having lower minimax loss than another?
Suppose a bidder can obtain costly information about the other bidders' behavior; this information will shrink the set of possible bid distributions B. The bidder will tend to acquire more information when the subsequent auction mechanism yields higher minimax loss. Thus Comparison 5 implies that bidders in the pay-as-bid auction may obtain more costly information than bidders in the uniform-price auction.
Minimax-loss bids in bidpoint-constrained auctions
As discussed in the introduction, in practice bidders are frequently constrained from submitting a distinct bid for each unit. For example, bidders can submit up to 10 bidpoints in Czech treasury auctions [Kastl, 2011] or 40 steps in the Texas electricity market [Hortaçsu et al., 2019]. We now consider the case in which bidder i can submit up to M bid points, $(q_{ik}, b_{ik})_{k=1}^{M}$ with $0 \le q_{i1} \le \dots \le q_{iM} \le Q$ and $b_{i1} \ge \dots \ge b_{iM}$. The implied bid function is as in the multi-unit case; the only distinction is that the quantities at which bids are submitted are now a choice variable for the bidder.
Optimization over quantities in addition to bid levels allows the bidder to reduce loss below what is feasible in the multi-unit context. Nonetheless, the qualitative results of the multi-unit case remain intact.
Pay-as-bid auctions
As in the multi-unit case, the minimax-loss bid in the bidpoint-constrained pay-as-bid auction equates overbidding regret across all units. This leads immediately to an expression for minimax-loss bids.
Theorem 4 (Constrained minimax-loss bids in pay-as-bid). The unique minimax-loss bid in the constrained pay-as-bid auction minimizes total payment $\sum_{k=1}^{M} b_{ik}(q_{ik} - q_{ik-1})$ subject to equal conditional regret, $R^{PAB}_{q_k} = R^{PAB}_{q_M}$, for all $k = 1, 2, \ldots, M$.
Intuitively, the bidder minimizes their maximum payment subject to equal conditional regret across all outcomes. We illustrate Theorem 4 for the case in which the bidder has constant marginal values.
Example 3 (Pay-as-bid with constant marginal values). Suppose bidder $i$'s marginal value is constant, $v_i(q) = v$ for all $q$. The constrained loss optimization problem minimizes total payment subject to equal conditional regret across units. Equating conditional loss across units requires $Q\, b_{k+1} = (Q - q_k)\, v - \sum_{j=k+1}^{M} b_j\,(q_j - q_{j-1})$. Solving this equation recursively, backwards from $b_{M+1} = 0$, gives a closed-form expression for optimal bids conditional on quantities. Minimizing loss then implies $q_k = kQ/M$. Notably, minimax bidpoints are evenly spaced in the quantity space. Figure 4 plots these bids and compares them to minimax-loss bids in the bidpoint-constrained uniform-price auction.
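A small numerical sketch of this example (the even spacing is stated above; the bid-level recursion is our reconstruction of the backward solve, which under even spacing reduces to the multi-unit flat-value recursion and its endpoint $b_M = v/(M+1)$):

```python
def constrained_pab_flat(v: float, Q: float, M: int):
    """Bidpoint-constrained pay-as-bid sketch with constant marginal value v:
    evenly spaced bid points, with levels solved backwards from b_{M+1} = 0 via
    (M+1)*b_k = (M-k+1)*v - sum_{j=k+1}^{M} b_j (a reconstruction)."""
    qs = [k * Q / M for k in range(1, M + 1)]      # evenly spaced bid points
    bids = [0.0] * (M + 2)                         # bids[M+1] = 0 by convention
    for k in range(M, 0, -1):
        bids[k] = ((M - k + 1) * v - sum(bids[k + 1:M + 1])) / (M + 1)
    return list(zip(qs, bids[1:M + 1]))

# With M = 1 this gives a single bid of v/2 for the full quantity, matching
# the single-bidpoint discussion in Example 5 below.
print(constrained_pab_flat(1.0, 100.0, 4))
```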
Providing more qualitative insights on the optimal constrained bid, Example 8 in Appendix D shows that the location of a bid step need not change if the bidder's value for larger quantities changes, provided that preferences are over discrete units. Moreover, similar to the two-unit example in Section 3, the bid level can decrease in the bidder's value for larger quantities.
Theorem 4 provides a constrained optimization problem for computing minimax-loss bids in the pay-as-bid auction when the bidder may submit at most M bid points. The optimization problem is stated in terms of divisible goods, and in a multi-unit setting with constrained bids it is possible that the minimax-loss bid has bid points which are away from integer quantities. Practically, the bidder may approximate minimax loss in the bidpoint-constrained auction by rounding the quantities at which bids are submitted down to the nearest feasible unit; this approximation is especially tight when the number of units available is large (hence when the rounding has little effect).
Proposition 1 (Approximate minimax loss in bidpoint-constrained multi-unit pay-as-bid). Suppose that $(q_i, b_i)$ is a minimax-loss bid in the constrained pay-as-bid auction with $M_b$ bid points, and $L^\star$ is minimax loss in the constrained multi-unit pay-as-bid auction with $M_q$ units. Then the bid obtained by rounding each bid point $q_{ik}$ down to the nearest feasible quantity is feasible in the multi-unit auction, and $L^\star \le L(q_i, b_i; v_i) + \int_0^{Q/M_q} v_i(x)\,dx$.

Importantly, when $M_q$ is relatively large, that is, when the number of discrete units available is large, the right-hand integral will tend to be small relative to $L^\star$, as the integral is bounded above by $Q v_i(0)/M_q$.
Uniform-price auctions
In Section 5.2 we showed that there are typically many minimax-loss bids in the multi-unit uniform-price auction. We show below that this is in stark contrast to the bidpoint-constrained uniform-price auction, where there is a unique minimax-loss bid. As there is a unique minimax bid when the location of the bid steps can be chosen, nonuniqueness in the multi-unit case derives from the prespecified location of bid points in the multi-unit auction.
Theorem 5 (Minimax-loss bids in constrained uniform-price auction). In the bidpoint-constrained uniform-price auction with $M$ bid points, the unique minimax-loss bid solves $q_k b_k = \int_{q_{k-1}}^{Q} (v_i(x) - b_k)^+\,dx = L^\star$ for all $k$, where $q_0 = 0$ and $L^\star$ is minimax loss.

We provide some intuition for the uniqueness in the constrained case and contrast it with the multiplicity of the multi-unit case. Intuitively, when the bidder receives a small quantity they do not leave a lot of money on the table due to overbidding, because they received a small number of units and their total payment is low; they also do not miss out on significant utility from underbidding, because the market price will tend to be high and they will not desire many units at this price. Thus the main source of loss is bids on intermediate quantities, leaving bids on small (and very large) quantities only partially specified. This stands in contrast to the bidpoint-constrained case where the locations of the bid steps are choice variables. Given the choice, the bidder will submit relatively dense bids for intermediate quantities and relatively sparse bids for extreme quantities; the large gaps between bid points for small units work against the intuition arising from the multi-unit case, where bidpoint gaps are uniform, that bids for small quantities are not uniquely determined.
The chief distinction between the multi-unit and constrained-bid cases is that in the constrained-bid case the spacing of bid points is an additional tool for reducing ex post regret. The construction of the minimax-loss bid in the constrained uniform-price auction follows from observing that steps in the implied bid function extend between two iso-loss curves. Given loss $L$, the upper iso-loss curve is $\overline{c}(\cdot; L)$ such that $q\, \overline{c}(q; L) = L$, and the lower iso-loss curve is $\underline{c}(\cdot; L)$ such that $\int_q^Q \left(v_i(x) - \underline{c}(q; L)\right)^+ dx = L$. The bid $b(q) = \overline{c}(q; L)$ induces overbidding loss which is constant in quantity, and the bid $b(q) = \underline{c}(q; L)$ induces underbidding loss which is constant in quantity.26 Figure 3 illustrates the two iso-loss curves.

The upper iso-loss curve is always a hyperbola; the shape of the lower iso-loss curve depends on marginal values.
Bids above the upper iso-loss curve induce loss above $L$ by inducing overbidding regret above $L$, and bids below the lower iso-loss curve induce loss above $L$ by inducing underbidding regret above $L$. It follows that the minimax-loss bid must lie entirely between the upper and lower iso-loss curves. In particular, the minimax-loss bid in the constrained uniform-price auction extends from the lower iso-loss curve to the upper iso-loss curve, then jumps down to the lower iso-loss curve, and extends again to the upper iso-loss curve; this continues until a bid of zero is reached. Figure 3b illustrates this construction for $M = 4$. If the bid did not extend fully between the two iso-loss curves, with a slight perturbation the bid could be made to lie strictly between the two iso-loss curves, which would entail strictly lower loss.
Constructing bidpoint-constrained minimax-loss bids is straightforward. For loss $L$ such that $\overline{c}(\cdot; L) \ge \underline{c}(\cdot; L)$, let $q_0 = 0$ and for all $k \in \{1, \ldots, M\}$ let $b_k = \underline{c}(q_{k-1}; L)$ and let $q_k$ be such that $\overline{c}(q_k; L) = b_k$.27 If $\underline{c}(q_M; L) > 0$ constrained minimax loss is above $L$, and if $\underline{c}(q_M; L) < 0$ constrained minimax loss is below $L$. In either case, a new level of loss $L'$ may be proposed, and the procedure continues until $\underline{c}(q_M; L) = 0$ (or is within numerical tolerance). Figure 3a illustrates the case when the level of loss is above the minimax loss. In the figure, the final step $q_4'$ is too high, and loss can be decreased. The construction of minimax-loss bids between the upper and lower iso-loss curves provides an intuitive argument for the uniqueness of minimax-loss bids in the uniform-price auction. Given a level of loss and associated iso-loss curves, either there is no $M$-step step function between them, or there is a single $M$-step step function between them, or there are multiple such step functions between them. If there is no feasible step function between the iso-loss curves, this level of loss is not feasible and minimax loss is above the assumed loss.
On the other hand, if there are multiple feasible step functions between the iso-loss curves, the iso-loss curves can be brought closer together (by reducing assumed loss) while still allowing for a feasible step function between them. This improvement in loss is infeasible only when there is a unique step function between the iso-loss curves, and at that point maximum loss is minimized.

27 In the event that $\overline{c}(Q; L) > b_k$, we define $q_k = Q$.
The following example illustrates Theorem 5 for the case in which the bidder has constant marginal values. 28
Example 4 (Uniform-price with constant marginal values). Suppose that bidder $i$'s marginal value is constant, $v_i(q) = v$ for all $q$. The minimax-loss bid induces loss $C_M Q v$, and solves the recursion $b_k = \underline{c}(q_{k-1}; L) = v - \frac{L}{Q - q_{k-1}}$ and $q_k = \frac{L}{b_k}$, with $q_0 = 0$, endpoint condition $\underline{c}(q_M; L) = 0$, and $L = C_M Q v$. The solution to this expression is unique: the recursive equation for $q_k$ increases in $C_M$, while the endpoint condition for $q_M$ decreases in $C_M$.29 Figure 4 illustrates these bids and compares them to the unique minimax-loss bids in the pay-as-bid auction.
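The following sketch implements the search procedure described above for the constant-value case, bisecting on the loss level until the lower iso-loss curve reaches zero at the final bid point (variable names are ours; the iso-loss formulas specialize the definitions above to a constant marginal value):

```python
def constrained_up_flat(v: float, Q: float, M: int, iters: int = 80):
    """Minimax-loss bid in the bidpoint-constrained uniform-price auction
    with constant marginal value v: bisect on loss L until the lower
    iso-loss curve v - L/(Q - q) hits zero at the M-th bid point."""
    def final_lower(L: float) -> float:
        q = 0.0
        for _ in range(M):
            b = v - L / (Q - q)              # step starts on the lower curve
            if b <= 0.0:
                return -1.0                  # curve already at zero: L too large
            q = min(L / b, Q)                # step ends on the upper curve L/q
        return v - L / (Q - q) if q < Q else -1.0   # lower curve at q_M

    lo, hi = 1e-9, Q * v
    for _ in range(iters):
        L = (lo + hi) / 2
        if final_lower(L) > 0:
            lo = L                           # loss level infeasible: minimax loss above L
        else:
            hi = L
    # rebuild the bid at the solved loss level (q_M < Q holds at the solution)
    L, q, steps = (lo + hi) / 2, 0.0, []
    for _ in range(M):
        b = v - L / (Q - q)
        q = min(L / b, Q)
        steps.append((q, b))
    return L, steps

# Example: M = 1, Q = v = 1 gives q_1 = b_1 ~ 0.618 and loss ~ 0.382.
print(constrained_up_flat(1.0, 1.0, 1))
```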
Examples 3 and 4 suggest a new testable prediction. With constant marginal values, the bids in the bidpoint-constrained pay-as-bid auction are evenly spaced, while they are more clustered around intermediate quantities in the bidpoint-constrained uniform-price auction. More generally, the location of the bids in the pay-as-bid auction is more dispersed than in the uniform-price auction.
[Figure 5: Minimax-loss bids in the constrained uniform-price auction with Q = 1, v = 1, and varying numbers of bid points.30 Submitted bids lie on upper and lower iso-loss curves, and more-central iso-loss curves (a lower upper iso-loss curve and a higher lower iso-loss curve) correspond to lower loss.]

As is the case in the bidpoint-constrained pay-as-bid auction, the unique minimax-loss bid in the bidpoint-constrained uniform-price auction may be rounded to an approximate minimax-loss bid in the constrained multi-unit uniform-price auction. In the uniform-price auction this approximation is guaranteed by rounding bid points upward to the nearest feasible quantity, which differs from the pay-as-bid auction in which quantities are rounded down.
Proposition 2 (Approximate minimax loss in bidpoint-constrained multi-unit uniform-price). Suppose that $(q_i, b_i)$ is a minimax-loss bid in the constrained uniform-price auction with $M_b$ bid points, and $L^\star$ is minimax loss in the constrained multi-unit uniform-price auction with $M_q$ units. Then the bid obtained by rounding each bid point $q_{ik}$ up to the nearest feasible quantity is feasible in the constrained multi-unit auction, and $L^\star$ exceeds $L(q_i, b_i; v_i)$ by at most a term bounded by $Q\, v_i(0)/M_q$. As is the case in the pay-as-bid auction, the minimax loss approximation will be close when the number of available units is large.
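A tiny illustration of the two rounding directions from Propositions 1 and 2 (helper names are ours):

```python
import math

def round_down(q: float, Q: float, M_q: int) -> float:
    """Pay-as-bid: round a bid point down to the nearest feasible quantity."""
    return math.floor(q * M_q / Q) * Q / M_q

def round_up(q: float, Q: float, M_q: int) -> float:
    """Uniform-price: round a bid point up to the nearest feasible quantity."""
    return math.ceil(q * M_q / Q) * Q / M_q

# With Q = 100 and M_q = 100 discrete units, a bid point at q = 61.8 becomes
# 61 in the pay-as-bid approximation and 62 in the uniform-price approximation.
print(round_down(61.8, 100.0, 100), round_up(61.8, 100.0, 100))
```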
Comparison of auction formats
We now compare the constrained auction formats. The first comparison relates the two unique minimax-loss bidding functions. We give a condition under which they cannot be uniformly ranked. Note that unlike the multi-unit case it is not enough to compare the levels of the bids $b^{LAB}$ and $b^{PAB}$; the optimal bid function is a step function and the location of the steps matters for the comparison. However, if $q^{LAB}_M < q^{PAB}_M$, then the bid in the uniform-price auction is 0 for smaller quantities than in the pay-as-bid auction.
Comparison 6 (Sufficient conditions for the semi-comparability of optimal constrained bids). Let $b^{LAB}$ and $b^{PAB}$ be the minimax-loss bids in the constrained uniform-price and pay-as-bid auction, respectively. If the marginal values are sufficiently flat, then $b^{LAB}_1 > b^{PAB}_1$ while $q^{LAB}_M < q^{PAB}_M$, so neither bid function lies everywhere above the other.

The proof shows that $q^{LAB}_M$ is always below $Q$ and that $q^{PAB}_M = Q$ when the marginal values are sufficiently flat. Examples 8 and 9 in Appendix D show that the bids in the two auction formats can sometimes be ranked unambiguously. In particular, the bids in the uniform-price auction can be higher than in the pay-as-bid auction.
Comparison 6 implies that the minimax-loss bid in the constrained uniform-price auction drops to 0 at a quantity at which the minimax-loss bid in the constrained pay-as-bid auction is still positive. The ambiguous revenue comparison is immediate.
Comparison 7 (Ambiguous revenue). Depending on the joint value distribution, both ex post and expected revenues can be higher in either constrained auction format.
We illustrate the ambiguous revenue comparison by means of the following numerical example.
Example 5. We simulate bidpoint-constrained auction outcomes for different choices of the number of allowed bid points $M$. In the simulated auctions the available quantity is $Q = 100$, hence the locations of bid points correspond to percentages of aggregate supply. We vary the number of bidders from $n = 2$ to $n = 10$. Bidders' marginal values are constant, $v(q) = v_0$, where $v_0$ follows a truncated lognormal distribution with support $v_0 \in [0.5, 2]$ and mean 1.
For each number of allowed bid points, M, we first compute constrained minimax-loss bids in both the pay-as-bid and uniform-price auctions. In the pay-as-bid auction bids are obtained from the expressions in Example 3; in the uniform-price auction bids are obtained from the simple search procedure outlined in Section 6.2. Figure 6 plots average auction revenue as a function of the number of bid points M.
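A compact simulation sketch in the spirit of this example (the truncated-lognormal parameters, clearing logic, and bookkeeping are our own simplifications, and it reuses the `constrained_pab_flat` and `constrained_up_flat` sketches from above):

```python
import random

def draw_value(lo=0.5, hi=2.0, mu=-0.125, sigma=0.5) -> float:
    """Truncated-lognormal value draw via rejection sampling; mu and sigma
    are placeholder parameters, not calibrated to the paper's mean-1 spec."""
    while True:
        v = random.lognormvariate(mu, sigma)
        if lo <= v <= hi:
            return v

def simulate_once(n: int, M: int, Q: float = 100.0):
    """Ex post revenue under both formats for one simulated auction, using
    the flat-value bid constructions sketched in Examples 3 and 4."""
    offers_pab, offers_up = [], []
    for _ in range(n):
        v = draw_value()
        for _q, b in constrained_pab_flat(v, Q, M):   # even steps of width Q/M
            offers_pab.append((b, Q / M))
        q_prev = 0.0
        for q, b in constrained_up_flat(v, Q, M)[1]:
            offers_up.append((b, q - q_prev))
            q_prev = q

    def clear(offers, pay_as_bid: bool) -> float:
        filled, rev, price = 0.0, 0.0, 0.0
        for b, dq in sorted(offers, reverse=True):    # fill highest bids first
            take = min(dq, Q - filled)
            if take <= 0.0:
                break
            filled += take
            rev += b * take
            price = b                                 # last accepted bid
        return rev if pay_as_bid else price * filled

    return clear(offers_pab, True), clear(offers_up, False)

print(simulate_once(n=5, M=4))
```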
As expected, increasing the number of bidders increases the seller's expected revenue: the highest value of $n$ independent draws increases in $n$ in expectation. In general, revenue also depends on the number of allowed bid points. Following Examples 3 and 4, bidders in a pay-as-bid auction with a single bid point will bid half their value for the full market quantity, and bidders in a uniform-price auction with a single bid point will bid more than half their value for less than the full market quantity. Revenue in the pay-as-bid auction is therefore half the highest marginal value, while revenue in the uniform-price auction is more than half the second-highest marginal value. It follows that expected revenue will be higher in the pay-as-bid auction when both the number of bid points and the number of bidders are small.
Although average revenues may be ranked, reverse rankings can be observed ex post. Figure 6 also compares ex post revenues and depicts the share of simulated auctions in which uniform-price revenue is higher than pay-as-bid revenue. As the number of bidders increases, the share of auctions in which revenue is higher in the uniform-price auction increases. Low-revenue outcomes mainly appear in uniform-price auctions with two bidders, and these "collusive" outcomes are less likely when there are many bidders. The uniform-price auction dominates the pay-as-bid auction with ten bidders in terms of revenue in expectation and ex post in the majority of auctions. The ambiguous, setting-dependent revenue ranking is in line with empirical results on multi-unit auctions.31 Nonetheless, it is generically true that increasing the number of bidders increases the performance of the uniform-price auction relative to the pay-as-bid auction. Because initial bids are relatively high in the uniform-price auction and bids are relatively inelastic, increasing the number of bidders has a strong upward influence on the market-clearing price, and thus on revenue. As in multi-unit auctions, minimax loss is lower in the uniform-price auction.
Comparison 8 (Minimax loss). In constrained auctions with $M$ bid points, minimax loss is lower in the uniform-price auction than in the pay-as-bid auction, $L^{LAB\star} \le L^{PAB\star}$.

Example 5 (continued). In the setting of Example 5, Figure 7 reports the normalized minimax loss, computed as loss divided by the constant marginal value $v_0$. As predicted by Comparison 8, the level of minimax loss is higher in the pay-as-bid auction. The figure also shows decreasing gains from adding another bid point and a relatively fast convergence to the unconstrained level of minimax loss. Indeed, minimax loss with four bid points is less than 10% higher than with 25 bid points in both auction formats.
Conclusion
In this paper we have characterized optimal prior-free bids in the pay-as-bid and uniform-price auctions, the two leading auction formats for allocating homogeneous goods such as electricity and government debt. Our analysis considers two natural cases of bid selection: in the multi-unit case bidders may bid on M discrete units, and in the bidpoint-constrained case bidders may select up to M bid points at self-selected quantities. In each case the two pricing rules create different incentives for the bidders; our analysis shows that taking a worst-case loss approach to bid optimization enables a tractable analysis of the two formats. Remarkably, our analysis remains tractable even with multi-dimensional private information because we do not require the inversion of strategies as in the canonical Bayesian Nash equilibrium approach. Hence, we believe the worst-case loss approach may also be fruitfully applied to other complex strategic interactions.

A Uniform-price auctions with a first rejected bid price

The pricing rule of the uniform-price auction affects bidders' strategic incentives. In the main text, we consider the last accepted bid pricing rule, in which the market price is the highest possible market-clearing price; this is the uniform-pricing rule commonly used in practice.
In this appendix we analyze the multi-unit case of the uniform-price auction with a first rejected bid pricing rule, in which the market price is the lowest possible market-clearing price; this is the uniform-pricing rule commonly analyzed in the literature.
Our analysis of the first rejected bid pricing rule begins by defining overbidding and underbidding loss, as in our analyses in the main text. In the uniform-price auction with the first rejected bid pricing rule, winning bids below values can never be too high, as they do not determine the market-clearing price. In particular, the bid for the first unit, $b_{i1}$, can be too high only if it is above value. As such, in this analysis we constrain attention to bids which are below value, and later verify that this assumption is satisfied by the minimax-loss bids we obtain. Conditional on bidder $i$ winning $k$ units, $k \ge 1$, the only bid that may be too high is bidder $i$'s first rejected bid $b_{ik+1}$. For this case, we define overbidding regret as $\overline{R}^{FRB}_{q_k}(b_i; v_i) = k\, b_{ik+1}$. This is the additional utility the bidder could have received if they reduced their bid $b_{ik+1}$ to zero. The case occurs if the other bidders bid a strictly positive amount for only the $M - k$ units they received. The overbidding regret of winning 0 units is 0. Similar to the other pricing rules, a bid is too low if the bidder wants to win more units given the market-clearing price. Maximal regret arises if the opponents all bid just above $b_{ik+1}$ for the units they received. The resulting underbidding regret is $\underline{R}^{FRB}_{q_k}(b_i; v_i) = \sum_{k'=k+1}^{M} \left(v_{ik'} - b_{ik+1}\right)^+$. This is the additional utility the bidder could have received if they bid just above $b_{ik+1}$ for all units for which it is profitable to do so. Note that these regret terms correspond to those in the pay-as-bid auction, except that underbidding regret in the uniform-price auction does not consider bids for submarginal quantities. The conditional regret for unit $k$, $k \in \{0, 1, \ldots, M-1\}$, is the maximum of overbidding and underbidding regret, $R^{FRB}_{q_k} = \max\{\overline{R}^{FRB}_{q_k}, \underline{R}^{FRB}_{q_k}\}$.

Observation 4. Note that if $b_{ik} > v_{ik}$, then overbidding regret conditional on winning $k-1$ units is $(k-1)\, b_{ik} > 0$ and underbidding regret is $\sum_{k'=k}^{M} (v_{ik'} - b_{ik})^+ = 0$. That is, bidding above value equates overbidding and underbidding regret only when $k = 1$. Since the minimax-loss bid must equate overbidding and underbidding regret for some unit (see Lemma 4 below), there is a minimax-loss bid that is weakly below the bidder's value vector.
Lemma 4 (Maximal loss in first rejected bid uniform-price auction). In the first rejected bid uniform-price auction, the maximal loss given bid $b_i$ is the maximum of the conditional regrets $R^{FRB}_{q_k}(b_i; v_i)$.

Proof. We first consider the augmented problem in which the bidder receives $k$ units at market price $p^\star \in [b_{ik+1}, b_{ik}]$. In a uniform-price auction, a bidder facing a known residual supply curve should pick a point on the supply curve to maximize their own utility. The bidder's utility from this optimization increases as the residual supply curve falls, hence the loss-maximizing supply curve must be as low as possible. When receiving $k$ units, the bidder knows that either their opponents demanded $M - k$ units with bids weakly above $p^\star$ and the $(M-k+1)$th unit at $p^\star$, or their opponents demanded $M - k$ units with bids weakly above $p^\star$ and the market-clearing price is $p^\star = b_{ik+1}$. This pins down the loss-maximizing residual supply curve $S$ and the associated loss-maximization problem. Note that $S(M - \tilde{k} + 1; p^\star)$ is increasing and locally constant in $\tilde{k}$; loss conditional on market-clearing price $p^\star$ is obtained by plugging $\tilde{k} \in \{k-1, k, v^{-1}(p^\star)\}$ into Equation (7). By construction, $p^\star \le b_{ik}$; the right-hand inequality follows by the assumption that $b_i \le v_i$. Then the leftmost term in the maximization expression for $R^{FRB}_{q_k}$ is bounded above by the middle term in $R^{FRB}_{q_{k-1}}$, and the result follows.

Similar to the analysis of cross-conditional regret minimizing bids in the last accepted bid uniform-price auction, note that $\overline{R}^{FRB}_{q_k}$ is increasing in $b_{ik+1}$ while $\underline{R}^{FRB}_{q_k}$ is decreasing in $b_{ik+1}$, and both terms are independent of $b_{ik'}$ for $k' \ne k+1$. Then if maximum loss is determined by conditional regret for unit $k$, it must be that $\overline{R}^{FRB}_{q_k} = \underline{R}^{FRB}_{q_k}$. There is, however, no unique optimal bid.

Theorem 6 (No unique minimax-loss bid). If $M > 1$, then there is not a unique minimax-loss bid in the uniform-price auction with the first rejected bid pricing rule.
Proof. It is sufficient to consider $b_{i1}$. When the bidder receives 0 units, overbidding regret is 0 and underbidding regret $\underline{R}^{FRB}_{0}$ is non-negative but arbitrarily close to 0 when $b_{i1}$ is close to $v_{i1}$. As overall minimax loss $L^{FRB}$ is strictly positive, any choice of $b_{i1}$ for which $\underline{R}^{FRB}_{0}(b_i; v_i) \le L^{FRB}$ attains minimax loss, so the minimax-loss bid is not unique.

In the specific case of a single unit, $M = 1$, Lemma 4 gives maximum loss as $v_{i1} - b_{i1}$. Then the unique minimax-loss bid is $b_{i1} = v_{i1}$: with a single unit, the first rejected bid rule is a second-price auction, and bidding value is optimal.
In light of Theorem 6, minimax-loss bids are not uniquely defined in the uniform-price auction when $M > 1$ indivisible units are available. To obtain sharp predictions for minimax-loss bids, we define a conditional regret minimizing strategy as one which minimizes conditional regret for each unit. Because conditional regret is independent across units, and maximum loss is attained at the conditional regret of some unit, a conditional regret minimizing strategy is a minimax-loss strategy.
Definition 2. The bid vector $b^{FRB}_i$ is conditional regret minimizing if, for every unit $k$, it minimizes the conditional regret $R^{FRB}_{q_k}(b_i; v_i)$.
The following theorem characterizes the unique conditional regret-minimizing bid vector.
Theorem 7 (Conditional regret-minimizing bids). The unique conditional regret-minimizing bid sets $b^{FRB}_{i1} = v_{i1}$ and, for each $k \ge 1$, $b^{FRB}_{ik+1}$ solves $k\, b^{FRB}_{ik+1} = \sum_{k'=k+1}^{M} \left(v_{ik'} - b^{FRB}_{ik+1}\right)^+$.

Proof. The claim follows immediately from earlier arguments. It remains to be shown that $b^{FRB}_i$ is a valid bid (that is, monotone). Suppose to the contrary that there is $k$ such that $b^{FRB}_{ik} < b^{FRB}_{ik+1}$; comparing the defining equalities for the two units then implies $b^{FRB}_{ik} \ge b^{FRB}_{ik+1}$. This is a contradiction, so it cannot be that $b^{FRB}_{ik} < b^{FRB}_{ik+1}$.

Similar to minimax-loss bids in the auction formats analyzed in the main text, conditional regret minimizing strategies in the first rejected bid uniform-price auction are straightforward to compute but potentially infeasible to represent in closed form. In particular, determination of $b^{FRB}_{ik}$ still faces issues of potential nonlinearities in $R^{FRB}_{q_k}(\cdot; v_i)$. We consider two examples.

Example 6 (Two-unit demand in first rejected bid uniform-price). In the first rejected bid price auction with demand for two units, the conditional regret minimizing bid vector is $b^{FRB}_i = (v_{i1}, v_{i2}/2)$. This follows immediately from Theorem 7. The first bid $b_{i1}$ cannot be too high, provided that it is below value. Thus, the overbidding regret conditional on losing is 0. The bid is too low if one could win more units by marginally raising it, leading to a worst-case regret conditional on losing the auction of $\sum_{k'=1}^{2} (v_{ik'} - b_{i1})^+$. Bids are decreasing in quantity. Then following Theorem 7, potential nonlinearities are irrelevant when $b^{FRB}_{i2} \le v_{iM}$. When this is true, bids are as stated. Since $v_{ik'} \le v_{i2}$ for all $k' \ge 2$, the condition is immediate.
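A sketch of the computation (the per-unit condition is the reconstruction above, cross-checked against the two-unit example):

```python
def frb_conditional_regret_bids(values: list[float]) -> list[float]:
    """Conditional regret minimizing bids in the first-rejected-bid
    uniform-price auction; values are sorted v_i1 >= ... >= v_iM."""
    M = len(values)
    bids = [values[0]]               # b_i1 = v_i1: the first bid never sets the price
    for k in range(1, M):            # solve k*b_{k+1} = sum_{k'>k} (v_k' - b_{k+1})^+
        lo, hi = 0.0, values[k]
        for _ in range(60):
            b = (lo + hi) / 2
            if k * b < sum(max(v - b, 0.0) for v in values[k:]):
                lo = b
            else:
                hi = b
        bids.append((lo + hi) / 2)
    return bids

# Two-unit check from Example 6: values (1, 0.6) give bids (1, 0.3) = (v_1, v_2/2).
print(frb_conditional_regret_bids([1.0, 0.6]))
```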
B Unconstrained bidding
The minimax-loss bids derived in the main text are computationally tractable, but are not necessarily expressible in closed form. The apparent analytical complexity of optimal bids arises from the recursive structure of the loss-minimization problem (in pay-as-bid), and from the simultaneous optimization over bid levels and bid points in the constrained bidpoint model. In this appendix we show that minimax-loss bids may be tractable in an unconstrained divisible-good context, where there is no need to optimize over the location of bid points, and the recursive structure of minimax-loss bids can be expressed as a differential equation. We also show that loss in this unconstrained case is approximated by loss in the multi-unit and constrained bidpoint cases as the number of bid points becomes large.
In this appendix, we assume that the marginal value function v i is Lipschitz continuous.
B.1 Pay as bid
With unconstrained bids and divisible goods, the equal conditional regret condition from the multi-unit and constrained cases requires that the derivative of conditional regret be equal to zero. Regret conditional on receiving the maximum possible allocation is the bidder's total payment, $\int_0^Q b_i(x)\,dx$. The fundamental theorem of differential equations implies that solutions to the system cannot cross; hence the bid for quantity $Q$ must be minimal, and optimal unconstrained bids may be computed as the solution to a differential equation.
Proposition 3 (Unconstrained pay-as-bid bids). The unique minimax-loss bid in the unconstrained divisible-good pay-as-bid auction solves the zero-derivative condition above with boundary condition $b_i(Q) = 0$.

Proof. If $\tilde{b}_i$ is a candidate bid with $\tilde{b}_i(Q) > 0$, then maximum loss is lower under a suitably shifted bid $b_i$ than under $\tilde{b}_i$, and $\tilde{b}_i$ is not a minimax-loss bid. Then $b_i(Q) = 0$ for any minimax-loss bid, and uniqueness follows from the fundamental theorem of differential equations.
The differential equation defining minimax-loss bids in the pay-as-bid auction is similar to the first-order condition defining best responses in a standard Bayesian Nash equilibrium; see, e.g., Hortaçsu and McAdams [2010], Pycia and Woodward [2023], and Woodward [2021]. The distinction is that in Bayesian Nash equilibrium the first-order condition contains probabilistic effects (increasing the bid for a particular quantity increases the probability that this quantity is received), while the differential equation in Proposition 3 does not. Intuitively, this is because loss is maximized conditional on receiving any particular quantity, and hence the loss-maximizing probability a quantity is won is constant across all quantities.
Because any bid which is feasible when M bid points are allowed is also feasible when M ′ > M bid points are allowed, minimax loss decreases as the constraint on the number of bid points is increased. Since the unconstrained-optimal bid b i may be arbitrarily approximated by step functions with small step widths, it follows that minimax loss in the multi-unit and constrained pay-as-bid auctions converges to minimax loss in the unconstrained pay-as-bid auction. Because the minimax-loss bid is unique in the pay-as-bid auction, minimax-loss bids in the multi-unit and constrained pay-as-bid auctions converge to the minimax-loss bid in the unconstrained pay-as-bid auction.
Proposition 4 (Convergence to unconstrained minimax-loss bid). Let $L^{M_q}$ and $b^{M_q}$ be minimax loss and the minimax-loss bid (respectively) in the multi-unit pay-as-bid auction with $M_q$ units, and let $L^{M_b}$ and $b^{M_b}$ be minimax loss and the minimax-loss bid (respectively) in the constrained pay-as-bid auction with $M_b$ bid points. Let $L^\star$ and $b^\star$ be minimax loss and the minimax-loss bid (respectively) in the unconstrained pay-as-bid auction. Then $L^{M_q} \to L^\star$, $L^{M_b} \to L^\star$, $\|b^{M_q} - b^\star\|_1 \to 0$, and $\|b^{M_b} - b^\star\|_1 \to 0$, where $\|\cdot\|_1$ represents the $L^1$ norm.
Proof. Because $L^\star$ is minimax loss when bids are unconstrained, $L^\star \le L^{M_q}$ and $L^\star \le L^{M_b}$ for all $M_q$ and $M_b$. Since maximum loss is continuous in bid and the minimax-loss bid $b^\star$ can be arbitrarily approximated by a step function (when the number of steps grows large), it follows that $\lim_{M_q \nearrow \infty} L^{M_q} = L^\star$ and $\lim_{M_b \nearrow \infty} L^{M_b} = L^\star$. Now suppose that $\|b^{M_q} - b^\star\|_1$ does not converge to 0 as $M_q$ grows large. Then there is a sequence of minimax-loss bids $\tilde{b}^{M_{qk}}$ bounded away from $b^\star$, where $M_{qk} < M_{qk'}$ whenever $k < k'$. Bids are decreasing in quantity, hence by Helly's selection theorem it is without loss of generality to assume that $\tilde{b}^{M_{qk}} \to \tilde{b}^\star$ in the $L^1$ norm, and since minimax loss converges, the maximum loss associated with bid $\tilde{b}^\star$ is $L^\star$. It follows that $\tilde{b}^\star$ is a minimax-loss bid in the unconstrained pay-as-bid auction. Since there is a unique minimax-loss bid in the pay-as-bid auction (Proposition 3), $\tilde{b}^\star = b^\star$, contradicting the assumption that the sequence is bounded away from $b^\star$.
Showing that |b M b − b ⋆ | converges to 0 is essentially identical to the argument above, and is omitted.
B.2 Uniform price
When bids are completely unconstrained, cross-conditional regret minimization requires $q\, b(q) = \int_q^Q \left(v_i(x) - b(q)\right)^+ dx$ for all $q$. The cross-conditional regret minimizing bid is unique because overbidding regret increases in bid while underbidding regret decreases in bid. Note that the divisibility of the auctioned good turns cross-conditional regret into conditional regret.
Proposition 5 (Cross-conditional regret minimizing bid in unconstrained uniform-price auction). In the unconstrained uniform-price auction there is a unique cross-conditional regret minimizing bid, $b^{LAB}$, and this bid solves $q\, b^{LAB}(q) = \int_q^Q \left(v_i(x) - b^{LAB}(q)\right)^+ dx$ for every quantity $q$.

The proposition implies that $b^{LAB}(0) = v_i(0)$, i.e., it is optimal to bid value for the "first unit." Moreover, it is optimal to bid 0 for the last unit, $b^{LAB}(Q) = 0$. Figure 8 illustrates the upper and lower iso-loss curves for a loss level equal to minimax loss. In the unconstrained case the upper and lower iso-loss curves are tangent to each other. The bids at the points of tangency are uniquely determined and equal to the cross-conditional regret minimizing bids. In the example depicted in the figure, there is a single point of tangency $\hat{q}$. Other bids are only partially determined; any bid must be below the upper iso-loss curve and above the lower iso-loss curve. In the figure any decreasing bidding function in the shaded area is a minimax bid. All minimax bidding functions agree on $\hat{q}$.
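A pointwise numerical sketch of this condition (the display is our reconstruction; with constant marginal value $v$ and $Q = 1$ it solves in closed form to $b(q) = v(1-q)$, which the code can be checked against):

```python
def unconstrained_lab_bid(v, Q: float, q: float) -> float:
    """Solve q*b = integral_q^Q (v(x) - b)^+ dx for b by bisection,
    where v is a weakly decreasing marginal value function."""
    grid = [q + (Q - q) * i / 1000 for i in range(1001)]
    lo, hi = 0.0, v(0.0)
    for _ in range(60):
        b = (lo + hi) / 2
        integral = sum(max(v(x) - b, 0.0) for x in grid) * (Q - q) / 1000
        if q * b < integral:
            lo = b
        else:
            hi = b
    return (lo + hi) / 2

# Constant value v = 1, Q = 1: the solution is b(q) = 1 - q.
print(round(unconstrained_lab_bid(lambda x: 1.0, 1.0, 0.25), 3))  # ~0.75
```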
The figure illustrates the nonuniqueness of the minimax bid. Indeed, as in the multi-unit uniform-price auction, there is not a unique minimax-loss bid in the unconstrained uniform-price auction. If there were, overbidding regret would have to be equal across all quantities, giving $q\, b(q) = L$ for all quantities $q$. This would imply that high bids for small quantities give zero underbidding regret; these bids can be reduced without affecting maximum loss, and the minimax-loss bid is nonunique.
The following theorem formally states that any weakly decreasing bid below marginal values and between the upper and lower iso-loss curves minimizes worst-case loss.
Theorem 8. Any weakly decreasing bid $b$ with $\underline{c}(\cdot; L) \le b \le \overline{c}(\cdot; L)$ has maximum loss weakly below $L$.

Proof. Suppose that $\underline{c}(\cdot; L) \le b \le \overline{c}(\cdot; L)$. At any quantity $q$, overbidding regret is $q\, b(q) \le q\, \overline{c}(q; L) = L$; and at any quantity $q$, underbidding regret is $\int_q^Q (v_i(x) - b(q))^+\,dx \le \int_q^Q (v_i(x) - \underline{c}(q; L))^+\,dx = L$. The left-hand inequality follows from the fact that $\underline{c}(\cdot; L)$ is continuous. Then conditional regret at quantity $q$ is such that $R^{LAB}_q(b; v_i) \le L$, and it follows that the loss of bid $b$ is weakly below $L$.
Finally, as the number of available bid points becomes large, either because the commodity becomes divisible or because the limited-bid-step constraint is weakened, constrained bids can arbitrarily approximate an unconstrained minimax-loss bid. Since loss is continuous in bid, minimax loss will converge to unconstrained minimax loss; and, moreover, the limit of a sequence of constrained bids will be a minimax-loss bid in the unconstrained model.
Proposition 6 (Convergence to unconstrained minimax-loss bid). Let $L^{M_q}$ and $b^{M_q}$ be minimax loss and a minimax-loss bid (respectively) in the multi-unit uniform-price auction with $M_q$ units, and let $L^{M_b}$ and $b^{M_b}$ be minimax loss and a minimax-loss bid (respectively) in the constrained uniform-price auction with $M_b$ bid points. Let $L^\star$ be minimax loss in the unconstrained uniform-price auction. Then $L^{M_q} \to L^\star$ and $L^{M_b} \to L^\star$, and along any convergent sequence $b^{M_q} \to b^{q\star}$ and any convergent sequence $b^{M_b} \to b^{b\star}$, the limits are minimax-loss bids in the unconstrained auction.

Proof. The convergence of optimal loss follows from the fact that, as $M_q$ and $M_b$ tend toward infinity, multi-unit and constrained-bid bids can arbitrarily approximate the unconstrained cross-conditional regret minimizing bid $b^{LAB}$. Since loss is converging, in the limit bids must lie between the limiting upper and lower iso-loss curves, which are continuous in loss.
Theorem 8 implies the desired result.
C Omitted Proofs
Proof of Lemma 1. Consider the maximization of loss, written as a double supremum over counterfactual bids and opponent bid distributions, where we have swapped the order of the suprema. Observe that the inner maximization problem is linear in the choice variable $B_{-i}$. Winkler [1988] proves that the extreme points of $\mathcal{B}$ are distributions with a single point in the support. Since loss is linear in $B_{-i}$, maximum loss is attained at an extreme point.
C.1 Analysis of pay-as-bid auctions
Proof of Lemma 2. In a pay-as-bid auction, a bidder facing a known residual supply curve should bid a constant amount for all units they desire: because bids are paid, a bid above the resulting market-clearing price can be reduced to save payment without affecting allocation.
Since maximizing loss is equivalent to finding an ex post residual supply curve that maximizes regret, the loss-maximization problem reduces to an optimization over residual supply curves. To beat the opponent bid for unit $Q - q$ with certainty, bidder $i$ must bid strictly above $S(Q-q)$, or $b_i(q) = S(Q-q) + \varepsilon$ for any $\varepsilon > 0$. Since regret is defined by a supremum, we let $\varepsilon = 0$ while retaining the assumption that bidder $i$ wins unit $q$ for sure. The counterfactual optimal utility $U^\star(S; v_i)$ is attained at some quantity $q^\star$. Note that $U^\star$ is decreasing in $S$. Let $\tilde{S} < S$, and let $\tilde{q}^\star$ be the corresponding maximizer. If $q^\star = \tilde{q}^\star$, then $U^\star(S; v_i) \le U^\star(\tilde{S}; v_i)$, since the required bid under $\tilde{S}$ is lower than the required bid under $S$. If $q^\star \ne \tilde{q}^\star$, then $U^\star(S; v_i) \le U^\star(\tilde{S}; v_i)$, since loss is higher under $\tilde{S}$ with selected quantity $\tilde{q}^\star$ than with selected quantity $q^\star$, and the required bid is lower even with selected quantity $q^\star$.
Then when considering maximum loss, it is sufficient to consider residual supply curves which are as low as possible. Conditional on bidder $i$ receiving share $q$, the only constraint on the residual supply curve is $S(Q-q) \ge b_i^+(q)$; that is, bidder $i$'s opponents bid more for their aggregate $Q - q$ units than bidder $i$ bid for their "next" unit. Because bids are monotone, the lowest residual supply curve satisfying this constraint bids $b_i^+(q)$ for the first $Q - q$ units and zero thereafter. Given this residual supply curve, bidder $i$'s optimal bid will either win $q$ units at a price of 0, or will win as many units as desired at a price of $b_i^+(q)$. In light of Lemma 1, which shows that maximum loss is equivalent to maximum regret, the result follows from evaluating the ex post utility of this decision.
C.1.1 The multi-unit case
Proof of Theorem 1. We show that $R^{PAB}_{q_k}$ is equal across units at any minimax-loss bid. Importantly, loss is continuous in bid. Note that increasing all bids by $\varepsilon > 0$ will weakly decrease $\underline{R}_{q_k}(b_i; v_i)$ for all $k$ and strictly increase $\sum_{k=1}^{M} b_{ik}$; then if $b_i$ is loss-minimizing, it must be that maximum loss is attained at an overbidding regret term. Similarly, decreasing all bids by $\varepsilon > 0$ strictly decreases $\sum_{k=1}^{M} b_{ik}$, so maximum loss must also be attained at an underbidding regret term. Now suppose conditional regret is not equalized, and consider a unit $k$ with conditional regret below maximum loss; if $b_{ik+1} = 0$, the two observations above cannot both hold. This is a contradiction, and it must be that $b_{ik+1} > 0$. In this case, reducing $b_{ik+1}$ will weakly increase $\underline{R}^{PAB}$, and the arguments above show that increasing all bids by some small amount will strictly reduce loss.

It follows that $R^{PAB}_{q_k}$ is equal for all $k$, and the result is immediate.
Proof of Corollary 2. Following Theorem 1, conditional regret is equalized across all units.
Then for all units $k$, $1 \le k \le M$, equation (8) holds: $(v_{ik} - b_{ik}) + \sum_{k'=1}^{M} \left[(v_{ik'} - b_{ik})^+ - (v_{ik'} - b^{PAB}_{ik+1})^+\right] = 0$. From this, it immediately follows that $b^{PAB}_{iM} = v_{iM}/(M+1)$. Fixing $b^{PAB}_{ik+1}$, the left-hand side of (8) is strictly positive when $b_{ik} = b^{PAB}_{ik+1}$, strictly negative when $b_{ik} = v_{ik}$, and continuous and monotone in $b_{ik}$. Then there is a unique $b_{ik}$ that solves equation (8) conditional on $b^{PAB}_{ik+1}$.
C.1.2 The bidpoint-constrained case
Proof of Theorem 4. This proof is substantially similar to the proof of the equivalent result for the multi-unit pay-as-bid auction (Theorem 1). As in the proof of Theorem 1, Lemma 2 implies that the loss minimization problem is $\min_{(q_i, b_i)} \max_k R^{PAB}_{q_k}(q_i, b_i; v_i)$, so the loss optimization problem in the pay-as-bid auction can be written as a minimization of total payment subject to equal conditional regret.

Recall that $R^{PAB}_{q_k}$ takes the form given by Lemma 2. Note that $R^{PAB}_{q_k}$ decreases as $q_k$ increases while, for all $k' > k$, $R^{PAB}_{q_{k'}}$ increases as $q_k$ increases.
Proof of Proposition 1. Let $(q, b)$ attain minimax loss in the constrained pay-as-bid auction, with loss $L$. Now consider the set of bid points $q'$, where $q_k' = \lfloor q_k M_q / Q \rfloor \cdot Q / M_q$. That is, $q_k'$ is the feasible bid point nearest to (but below) $q_k$. Define the bid vector $b'$ so that $b_k' = b(q_k)$. By construction, $(q', b')$ is feasible in the multi-unit auction. Since $(q, b)$ is optimal, loss is higher under $(q', b')$, and since $b' \le b$ the loss is bounded above by $L + \int_0^{Q/M_q} v_i(x)\,dx$. Then there is a feasible bid in the multi-unit auction with loss no more than $\int_0^{Q/M_q} v_i(x)\,dx$ higher than the optimal bid in the constrained auction.
C.2 Analysis of uniform-price auctions
Proof of Lemma 3. The proof of this claim is substantially similar to the proof of the equivalent result for the pay-as-bid auction (Lemma 2) and is omitted.
C.2.1 The multi-unit case
Proof of Theorem 2. If there is a unique minimax-loss bid, then the overbidding and underbidding regret terms involving $b_{ik+1}$ must be equal at every unit; if these terms are nonequal, $b_{ik+1}$ can be adjusted without affecting loss, since $b_i$ is optimal. The same argument is sufficient to show that $\overline{R}^{LAB}_{q_k} = \underline{R}^{LAB}_{q_k}$ for each $k$. Then if there is a unique minimax-loss bid, these equalities must hold simultaneously for all $k$; except in special cases, this cannot occur.
Proof of Theorem 3. The claim follows immediately from earlier arguments. Bids are weakly below values, since underbidding regret is zero and overbidding regret is strictly positive when $b_{ik} > v_{ik}$. We now show that the proposed bid is monotone. Suppose to the contrary that there is $k$ such that $b^{LAB}_{ik} < b^{LAB}_{ik+1}$. Comparing the defining equalities for units $k$ and $k+1$ yields a chain of inequalities whose penultimate step uses the contradiction hypothesis $b^{LAB}_{ik} < b^{LAB}_{ik+1}$; but this chain implies $b^{LAB}_{ik} \ge b^{LAB}_{ik+1}$, a contradiction, thus it cannot be that $b^{LAB}_{ik} < b^{LAB}_{ik+1}$.
C.2.2 The bidpoint-constrained case
Proof of Theorem 5. We first prove that the minimax bid $(b_i, q_i)$ must solve $b_1 q_1 = b_k q_k$ for $k \in \{1, 2, \ldots, M\}$, together with equality of adjacent underbidding and overbidding regrets. Let $\bar{k}$ denote the largest index for which maximal loss is attained, i.e., either $\bar{k} = M + 1$ or $\bar{k} < M + 1$. Let $\bar{k} < M + 1$. We show that $\underline{R}^{LAB}_{q_{k-1}} = \overline{R}^{LAB}_{q_k}$. Suppose $\underline{R}^{LAB}_{q_{k-1}} > \overline{R}^{LAB}_{q_k}$. As $b_k$ appears in only these two expressions, raising $b_k$ decreases only $\underline{R}_{q_{k-1}}$ and increases only $\overline{R}^{LAB}_{q_k}$. Suppose $\underline{R}^{LAB}_{q_{k-1}} < \overline{R}_{q_k}$. Decreasing $b_k$ decreases $\overline{R}^{LAB}_{q_k}$ and increases $\underline{R}^{LAB}_{q_{k-1}}$. We do not have to worry about the effect on $\underline{R}^{LAB}_{q_{M-1}}$, as underbidding regret decreases in $b_k$ and $q_{k-1}$. As regret is maximized at $\bar{k} = M + 1$, the inequality must hold with equality. The argument of the previous paragraph implies $\underline{R}^{LAB}_{q_{M-1}} = \overline{R}^{LAB}_{q_M}$. The result follows. We now prove that a unique solution exists. To do so, note that we can express $b_k$ as a function of $q_{k-1}$ and $q_k$ by solving the equal-regret condition. The left-hand side increases in $b_k$ and is 0 at $b_k = 0$. The right-hand side decreases in $b_k$, is positive for $b_k = 0$, and tends to 0 as $b_k$ increases. Thus, there is a unique $b_k(q_{k-1}, q_k)$ that solves the equation. The bid $b_k(q_{k-1}, q_k)$ decreases in $q_{k-1}$ and $q_k$.
Proof of Proposition 2. Let $(q, b)$ be optimal in the constrained uniform-price auction, with loss $L$. Here, $b^+(q) = \lim_{\varepsilon \downarrow 0} b(q + \varepsilon)$. Now consider the set of bid points $q'$, where $q_k' = \lceil q_k M_q / Q \rceil \cdot Q / M_q$. That is, $q_k'$ is the feasible bid point nearest to (but above) $q_k$. Define the bid vector $b'$ so that $b_k' = b(q_k)$. By construction, $(q', b')$ is feasible in the multi-unit auction. Since $(q, b)$ is optimal, loss is higher under $(q', b')$, and the difference for a given quantity $q_k$ is at most $(q_k' - q_k)\, b_k \le (Q/M_q)\, v_i(0)$. Then there is a feasible bid in the multi-unit auction with loss no more than $Q\, v_i(0)/M_q$ higher than the optimal constrained bid in the divisible-good auction.
Proof of Comparison 2 (concluding step). Since, in the multi-unit case, the minimax-loss bid in the pay-as-bid auction is always positive (Corollary 2), there exists a minimax-loss bid in the multi-unit uniform-price auction which is not everywhere above the unique minimax-loss bid in the multi-unit pay-as-bid auction.
Proof of Comparison 4. When bidder i has unit demand, the ex post transfer to the auctioneer is identical in the last accepted bid uniform-price auction and the pay-as-bid auction. We therefore assume the bidder demands at least two units, M ≥ 2.
We now show that when bidder $i$ is awarded a small quantity, the ex post transfer to the auctioneer can be larger in the last accepted bid auction than in the pay-as-bid auction; and, when bidder $i$ is awarded a large quantity, the ex post transfer to the auctioneer can be smaller in the last accepted bid auction than in the pay-as-bid auction. The former claim follows from Comparison 1, which implies that $b^{LAB}_{i1} > b^{PAB}_{i1}$ whenever $v_{i2} > 0$. Then the transfer is higher in the last accepted bid auction when the market-clearing price (which is bounded above by $b^{LAB}_{i1}$) is relatively close to $b^{LAB}_{i1}$. The latter claim is immediate: since $b^{LAB}_{iM} = b^{PAB}_{iM}$ and bids are strictly decreasing in the pay-as-bid auction, $b^{PAB}_{i2} > 0$ implies that $\sum_{k=1}^{M} b^{PAB}_{ik} > M\, b^{PAB}_{iM}$. Because the comparison of ex post transfers is ambiguous and depends on the quantity allocated, expected transfers are also ambiguous: quantity distributions which place significant weight on quantities under which uniform-price revenue is higher will have higher expected revenue in the uniform-price auction, and quantity distributions which place significant weight on quantities under which pay-as-bid revenue is higher will have higher expected revenue in the pay-as-bid auction.
Proof of Comparison 5. Given a bid function $\tilde{b}$, for any quantity $q$ conditional underbidding regret is weakly higher in the pay-as-bid auction than in the uniform-price auction, $\underline{R}^{PAB}_q(\tilde{b}; v_i) \ge \underline{R}^{LAB}_q(\tilde{b}; v_i)$. Moreover, overbidding regret is weakly higher in the pay-as-bid auction, $\overline{R}^{PAB}_q(\tilde{b}; v_i) \ge \overline{R}^{LAB}_q(\tilde{b}; v_i)$. Since loss is the supremum of the higher of conditional overbidding and underbidding regrets, taken over all units, it follows that loss is weakly lower in the uniform-price auction.
In the multi-unit case with quantity $M > 1$ the comparison is strict. The proof of Comparison 1 shows that $b^{LAB}_{ik} > b^{PAB}_{ik}$ except at $k = M$. Let $q$ be the quantity for which worst-case loss equals conditional regret in the uniform-price auction, and let $b^{LAB}$ denote the cross-conditional regret minimizing bids of the uniform-price auction. Then maximum loss under $b^{LAB}$ in the uniform-price auction is strictly below maximum loss in the pay-as-bid auction, where we use that $b^{PAB} \le b^{LAB}$ (Comparison 1) and the fact that underbidding regret involves lowering the bids on $[0, q]$.
Proof of Comparison 6. The statement $b^{LAB}_1 > b^{PAB}_1$ follows essentially from the proof of Comparison 8; see also the proof of Comparison 2. The proof that $q^{LAB}_M < q^{PAB}_M$ rests on the observation that the constrained pay-as-bid bid remains positive up to $Q$ while the bid in the uniform-price auction does not. Since conditional regret is continuous in bid, it follows that a small deviation from $(q^{PAB}, b^{PAB})$, namely, increasing $b^{PAB}_{i1}$ slightly to $b^{LAB}_{i1} > b^{PAB}_{i1}$ and decreasing $q^{PAB}_1$ slightly to $q^{LAB}_1 < q^{PAB}_1$, will strictly lower conditional regret for quantities $q \in [0, q^{PAB}_1]$ while keeping conditional regret for higher quantities below $L^{PAB}$. Since minimax loss is the maximum of conditional regret, taken over all bid points, it follows that minimax loss in the constrained uniform-price auction with $M$ bid points is strictly below optimal loss in the constrained pay-as-bid auction with $M$ bid points.
Let $g_k \equiv q_k - q_{k-1}$ be the gap between the $(k-1)$th and $k$th bid points. Writing the equal-regret conditions in terms of these gaps, the above solution remains valid for all $\varepsilon \in [0, 1/6]$, as assumed.
The first term is overbidding regret when the bidder receives $q$ units (note that this is independent of whether bids are above or below value), the second term is underbidding regret when the bidder receives 0 units and $b > \varepsilon$ (we will constrain our analysis to ensure this assumption is valid), and the third term is underbidding regret when the bidder receives just above $q$ units. Together, the first two terms imply $b = \frac{1}{1 + q}$.
In particular, the bid is higher in the uniform-price auction and drops to 0 later, and the bids in the pay-as-bid auction are never uniformly higher than in the uniform-price auction. | 2021-12-22T02:15:35.217Z | 2021-12-21T00:00:00.000 | {
"year": 2021,
"sha1": "d35fcbfbc471d59c1b86422cc253a8c6e26b1a57",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "d35fcbfbc471d59c1b86422cc253a8c6e26b1a57",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
255969380 | pes2o/s2orc | v3-fos-license | Oncologic resection of pancreatic cancer with isolated liver metastasis: Favorable outcomes in select patients
Background: Patients with pancreatic ductal adenocarcinoma (PDAC) and liver metastasis are treated with palliative chemotherapy, whereas similar patients with metastatic colorectal cancer are considered for aggressive surgery. Methods: Using an institutional database, PDAC patients undergoing liver resection for isolated metastasis were identified. Their overall survival (OS), treatment factors, and clinicopathological variables associated with survival were also evaluated. Results: Forty-seven patients underwent curative-intent surgery for metastatic PDAC to the liver between 2000 and 2019. Median OS was 21.9 months from diagnosis. Fourteen patients underwent unplanned resection of radiographically occult liver metastasis during pancreatectomy with median OS of 8.7 months. On the other hand, 29 patients received systemic chemotherapy followed by planned resection; this cohort had the most favorable prognosis following aggressive surgery with median OS being 38.1 months from diagnosis and 24.1 months from surgery. Preoperative chemotherapy (HR = 7.1; p = .002) and moderate to well differentiation of the primary tumor (HR = 3.7; p = .003) were associated with prolonged survival in multivariate analysis, whereas lymph node metastases, response to preoperative therapy, number of liver metastases, and extent of liver surgery were not. Conclusions: In select patients with PDAC and isolated liver metastasis, curative-intent surgery can result in meaningful survival. This aggressive approach seems most beneficial in patients following induction chemotherapy.
| INTRODUCTION
Pancreatic ductal adenocarcinoma (PDAC) is one of the deadliest cancers, in which oncological resection provides the only chance for a cure. Using current selection criteria, only <20% of patients are candidates for curative-intent surgery at the time of diagnosis.1,2 This is mainly the result of systemic disease, with half of patients being diagnosed with distant metastases, mainly in the liver.1,3 Standard treatment in patients with liver-only metastatic pancreatic cancer is palliative-intent chemotherapy. In contrast, for similar patients with resectable liver-only metastatic colorectal cancer, resection of select patients is recommended by the NCCN guideline and is associated with improved median survival and potential cures.4 The recent application of multi-agent chemotherapy, such as fluorouracil, leucovorin, irinotecan, and oxaliplatin (FOLFIRINOX) and nab-paclitaxel with gemcitabine (GnP), in the treatment of pancreatic cancer has improved control of metastatic disease compared to previous standard therapy such as single-agent gemcitabine. The rapid integration of these more efficacious multi-agent therapies has resulted in a growing cohort of patients with stable or regressive liver-only disease.[10] However, little is known about the impact of this practice on long-term survival. To date no prospective randomized trial has been reported on this topic, and most reports consist of only a few cases. The aim of this study is to evaluate the long-term survival outcome of patients undergoing resection of liver-only metastatic pancreatic cancer based on a large single-institution experience. We report here that hepatic resection is associated with improved survival in a select subset of patients. We further clarify the features prognostic of favorable long-term survival.
| Patient inclusion criteria
Patients with PDAC undergoing pancreatectomy and liver resection for isolated metastases between January 2000 and December 2019 at the Johns Hopkins Hospital were identified using a prospectively maintained database. Only patients with pathologically confirmed metastatic PDAC to the liver were included, whereas patients with suspicious or biopsy-proven liver lesions that disappeared following preoperative chemotherapy who did not then undergo liver resection were excluded. Furthermore, patients with pathologically confirmed metastasis to an extrahepatic site identified at surgical exploration were also excluded.
| Data collection
Patient clinical characteristics including gender, age at diagnosis, performance status based on the Eastern Cooperative Oncology Group (ECOG) classification, disease-specific information including preoperative and maximum serum carbohydrate antigen 19-9 (CA19-9) levels, intraoperative findings, histopathological details, and type and duration of chemotherapy regimens were collected. Imaging data, including the size, location, and resectability of the primary pancreatic tumor, were determined based on pancreas-protocol computed tomography scans with arterial and venous phases. Similarly, the size, location, and number of liver metastases were identified on multi-phase computed tomography scans; liver-protocol magnetic resonance imaging and fluorodeoxyglucose-positron emission tomography were also used in select cases. Radiographic progression of liver metastasis was defined as an increasing number of metastases or enlarging tumor size in comparison to pre-treatment imaging. In addition, pathological features including primary tumor site, size, differentiation grade, therapeutic response, resection margin status, lymphovascular and perineural invasion, and nodal metastasis were extracted. Microscopic identification of invasive carcinoma within 1 mm of the surgical margin was characterized as R1, while a margin of 1 mm or greater was characterized as R0. The date of last follow-up was defined as the date of death or the date of last office visit or telephone contact.
| Preoperative treatment
The modality and duration of preoperative treatment were decided following multidisciplinary input from a team of medical oncologists, radiation oncologists, surgeons, radiologists, and pathologists. Systemic therapies were administered under the care of a medical oncologist.
Surgical procedure and definition
Surgical procedures were classified as "planned" or "unplanned" based on the intent of the surgeon given the available clinical information prior to operative exploration (Figure 1). As such, planned surgery was performed for patients with suspected or biopsy-proven liver metastasis on preoperative imaging and for patients with liver metastasis identified at an aborted surgical procedure who then received systemic therapy and were considered for curative-intent surgery. Unplanned surgery, on the other hand, was performed for patients with radiographically occult liver metastasis incidentally identified at surgical exploration, regardless of whether systemic therapy was administered. In cases where a liver metastasis was found at surgical exploration, the metastasis was removed via excisional "wedge-type" partial liver resection. Major hepatectomy was defined as removal of three or more liver segments; this included right hepatectomy, left hepatectomy, extended hepatectomy, or at least three wedge resections. Conversely, minor hepatectomy was defined as fewer than three wedge resections. The resectability of the primary tumor was classified according to the National Comprehensive Cancer Network (NCCN) guideline.11 Pathological stage was reported using the 8th edition of the AJCC/UICC TNM classification system. Therapeutic response to preoperative therapy was reported using the College of American Pathologists classification. Preoperative serum CA19-9 was the most recent measurement prior to pancreatectomy, whereas maximum serum CA19-9 was defined as the greatest value of CA19-9, except for instances in which biliary obstruction was felt to artificially influence the value. In previous studies, serum CA19-9 levels >100-200 U/mL have been associated with unresectability and poor survival.2,12,14 Therefore, we chose a threshold CA19-9 value of 200 U/mL.
Ethical consideration
This study was approved by the institutional review board in October 2019 (IRB002272784).
Statistical analysis
Overall survival from diagnosis was defined as the time from the date of the initial diagnosis to either death from any cause or last follow-up. In contrast, overall survival from surgery was defined as the time from the date that both liver and pancreatic resection were completed to either death from any cause or the last follow-up. Relapse-free survival was defined as the time from the date of the completion of pancreas and liver resection to the date of recurrence. Median survival was estimated using the Kaplan-Meier method, using a log-rank test to compare survival between groups. Quantitative variables were presented using the median value and interquartile range (IQR; 25th, 75th percentiles). Univariate and multivariate analyses were performed using the Cox proportional-hazards model with 95% confidence intervals (CI). Analytic tests with p < .05 were defined as statistically significant.
Clinical characteristics

Between January 2000 and December 2019, 3231 patients who underwent pancreatectomy for PDAC at the Johns Hopkins Hospital were included in a prospective database. We identified 47 patients who underwent surgery for pancreatic cancer with isolated liver metastasis. Patient demographics and clinical characteristics are shown in Table 1. Most patients were found to have liver metastasis on initial staging evaluation (60%), but a significant proportion (40%) had radiographically occult liver metastasis identified during exploratory surgery.
Cancer treatments
The patients included in this analysis had a range of cancer treatments based on their clinical presentation, goals of care, and provider recommendations. An overview of the treatment modalities and sequences is detailed in Figure 1. One patient had undergone curative-intent pancreatic resection but was found to have metachronous liver metastasis and received chemotherapy followed by liver resection (Group A); this patient was initially diagnosed with stage III pancreatic cancer. Twenty-three patients were diagnosed with localized pancreatic cancer but were found to have radiographically occult liver metastasis at the time of surgical exploration (Groups B-D).
Of these patients, nine underwent liver resection for radiographically occult liver metastasis diagnosed during aborted exploratory surgery, followed by systemic chemotherapy and interval pancreatic resection (Group B). Another 11 patients underwent "upfront" pancreatic resection combined with liver resection for occult liver metastasis (Group C), whereas three patients received "neoadjuvant" therapy followed by combined pancreatic and liver resection for occult liver metastasis (Group D). The remaining 23 patients were found to have liver metastasis on initial staging (Groups E-F), with four patients undergoing "upfront" combined pancreatic and liver resection (Group E) and 19 patients receiving chemotherapy followed by combined pancreatic and liver resection (Group F).
The pancreatectomy procedure performed was determined by the location of the primary tumor and the involvement of adjacent structures. Most patients (79%) underwent simultaneous pancreas and liver surgery (Groups C-F). The majority of liver metastases (83%) were confined to one lobe of the liver and amenable to minor hepatectomy (defined as fewer than three liver segments). All nine patients who underwent major hepatectomy had planned surgery for liver lesions identified on preoperative imaging. The maximum number of pathologically confirmed liver metastases was four, whereas the majority of cases had one (45%) or two (34%) (Table 2). Four patients had a pathological complete response in the liver metastases.
Surgical outcomes
The surgical procedures were performed safely. Twenty-one patients (45%) had at least one postoperative complication. Major complications (Clavien-Dindo classification III-IV) were seen in eight (17%) patients.15 There were no perioperative deaths within 30 days of surgery. However, seven patients (15%) were readmitted within 30 days after discharge. These surgical outcomes are in line with those of patients undergoing pancreatic surgery at our institution.
Cancer-specific outcomes
For the entire cohort of 47 patients, median follow-up was 18.1 months from diagnosis and 9.5 months following completion of surgery. The 1- and 2-year survival rates from diagnosis were 78% and 47%, respectively. Median OS was 21.9 months from diagnosis (Figure 2a) and 12.3 months from completion of surgery (Figure 2b). Among the 32 patients who received preoperative chemotherapy followed by surgery, median OS was 30.7 months from diagnosis and 24.1 months from surgery, whereas median OS was only 11.1 months from diagnosis and 10.6 months from surgery in the 15 patients who underwent "upfront" surgical resection. As expected, the administration of preoperative chemotherapy was associated with prolonged survival from diagnosis (p < .001) and improved survival following surgery (p = .01), indicating better patient selection.
Treatment intent and timing of surgery
Thirty-three patients underwent planned liver and pancreatic surgery, with a median OS of 26.3 months from diagnosis and 17.4 months from completion of surgery, whereas 14 patients underwent unplanned resection of radiographically occult liver metastases during pancreatectomy, with a median OS of only 10.5 months from diagnosis and 8.7 months from completion of surgery. The patients undergoing planned surgery had a better prognosis than those undergoing unplanned surgery (p < .001 and p = .009, respectively), again highlighting the importance of patient selection. Only four patients underwent planned liver and pancreas resection without receiving preoperative chemotherapy (Group E). In reviewing the rationale for "upfront" surgery in this small cohort, these patients had an uncertain preoperative diagnosis, including suspected pancreatic neuroendocrine tumor or colon cancer with extension to the pancreas; however, they were later found to have pancreatic cancer. The remaining 29 patients, who underwent planned resection after systemic chemotherapy, had the most favorable outcomes. As one might expect, this well-selected cohort of 29 patients had longer median overall survival than the other 18 patients who underwent unplanned resection or did not receive preoperative chemotherapy (38.1 vs. 11.1 months from diagnosis, p < .001; 24.1 vs. 9.8 months from completion of surgery, p < .010; Figure 3a,b). Median RFS for the patients undergoing planned surgery after systemic chemotherapy was 8.1 months, which was longer than the median RFS of 3.2 months for the other cohorts, although this difference was not statistically significant (p = .191; Figure 3c). When survival was compared across the study population with cohorts defined by surgical intent and treatment sequence (i.e., planned surgery after chemotherapy, planned upfront surgery, unplanned surgery after chemotherapy, and unplanned upfront surgery), survival was best among patients in the planned surgery after chemotherapy group (median OS from diagnosis: 38.1 vs. 13.1 vs. 8.1 vs. 11.1 months, p < .001 (Figure 4a); median OS from completion of surgery: 24.1 vs. 12.3 vs. 3.7 vs. 9.0 months, p < .05 (Figure 4b), respectively). We also compared survival in patients undergoing liver resection followed by interval pancreatic surgery (n = 9) to that of patients undergoing simultaneous liver and pancreas surgery (n = 37). The median OS from diagnosis was not statistically different between the two groups (35.6 months for patients with staged surgery vs. 19.2 months for patients with simultaneous surgery; p = .192).
Prognostic factors
We assessed the association between several cancer-specific clinical features and prognosis. First, we compared survival based on the number of metastatic liver lesions. Patients with one liver metastasis had a median OS from diagnosis of 23.5 months, whereas patients with two or more liver metastases had a median OS from diagnosis of 21.5 months (p = .596). In addition, there was no significant difference in survival between unilobar and bilobar liver metastases (21.5 vs. 30.7 months, p = .250) or by extent of liver surgery (30.7 months for major resection vs. 21.5 months for minor resection; p = .927). Therefore, the number of liver metastases, the location of liver metastases, and the extent of liver surgery were not associated with prolonged survival. We also assessed the prognostic value of the preoperative serum CA19-9 level. The median OS from diagnosis for patients with preoperative CA19-9 ≤ 200 was significantly better than for patients with preoperative CA19-9 > 200 (26.3 vs. 19.1 months, p < .05). However, the maximum serum CA19-9 value prior to surgery was not associated with survival.
We also evaluated the prognostic value of the pathological findings for the primary tumor. The eleven patients with T3/T4 tumors had a poorer prognosis than those with T0/T1/T2 tumors (14.0 vs. 25.7 months, p = .004). Median OS from diagnosis for patients with well-/moderately differentiated tumors was significantly better than for patients with poorly differentiated tumors (36.9 vs. 17.7 months, p = .003). Furthermore, patients with R0 resection had a significantly better prognosis than patients with R1 resection (25.9 vs. 17.7 months, p = .011). On the other hand, lymphovascular invasion, perineural invasion, lymph node metastases, and the therapeutic response of the primary tumor did not affect survival.
Univariate and multivariate analysis
Elevated preoperative serum CA19-9 level (>200), large (T3/T4) primary tumor size, and the presence of a microscopically positive pancreatectomy margin were associated with reduced survival in univariate analysis, but not in multivariate analysis (Table 3). Upfront surgery (HR = 7.1, p = .002) and poor differentiation of the primary tumor (HR = 3.7, p = .003) were each independently associated with poor prognosis in multivariate analysis.
DISCUSSION
In this report, we have described clinical outcomes for a diverse population of patients undergoing aggressive surgery for pancreatic cancer with limited, liver-only metastasis. Among these patients, the cohort treated with systemic chemotherapy followed by planned surgery had meaningful survival and acceptable morbidity. Our study strongly suggests that this approach may benefit select patients with "oligometastatic" pancreatic cancer. The current study builds on other recent reports that have demonstrated the feasibility and safety of hepatectomy for liver metastases of pancreatic cancer.17,18 This concept is well established in hepatectomy for colorectal liver metastases, but it is not considered standard treatment for pancreatic liver metastases.11 Our study differs from previous reports in several important aspects that advance our understanding of metastasectomy for liver-only PDAC. First of all, recent multi-agent regimens such as FOLFIRINOX or GnP were used for metastatic pancreatic cancer at a high rate (84%). To our knowledge, our study has the highest rate of patients receiving modern chemotherapy regimens reported on this topic to date.8,10,16,18,19 In previous reports, on the other hand, preoperative chemotherapy was not administered to many patients; those who had chemotherapy often received gemcitabine alone or fluorouracil, with only a small subset receiving modern chemotherapy.10,16,18,20 Modern chemotherapeutic regimens such as FOLFIRINOX or GnP have shown a greater therapeutic effect in metastatic pancreatic cancer and better survival than gemcitabine alone,5,6 so the prognosis of patients with metastatic pancreatic cancer can be expected to exceed that achieved with previous single-agent or multi-agent regimens. This study is the first to show the benefit of liver metastasectomy for liver-metastatic pancreatic cancer following preoperative modern chemotherapy. Secondly, we evaluated who benefitted from surgical resection. In this study, 32 patients received preoperative chemotherapy. Notably, of these 32 patients, the survival of the 29 who underwent planned surgery was significantly longer than that of the remaining patients who underwent unplanned surgery after preoperative chemotherapy (p = .019). Moreover, the survival in our operative cohort was longer than the median OS for non-surgical patients with metastatic pancreatic cancer treated with FOLFIRINOX (11.1 months) or GnP (8.5 months) in other studies.5,6 This may depend on the difference in the preoperative treatment period between planned and unplanned surgery. The median duration of preoperative chemotherapy for planned surgery tended to be longer than that for unplanned surgery (8 months (IQR 6-11) vs. 2.5 months (IQR 1.8-5.3), p = .063). This may indicate that not only systemic chemotherapy itself but also the duration of preoperative chemotherapy could contribute to prolonged survival. Tumor control during preoperative systemic treatment also underlies the selection of this cohort: in other words, patients who have received systemic chemotherapy for pancreatic cancer with liver metastases for a sustained period and have maintained stable disease appear to be the most suitable candidates for aggressive surgery. Therefore, if liver metastases are unexpectedly discovered during exploration, pancreatic resection is aborted and systemic treatment is given priority, with aggressive surgery considered later for patients who have a favorable treatment response. Third, this study does not include cases in which distant metastases other than liver metastases were resected; only cases of radical resection for purely isolated liver metastases were evaluated. Several prognostic factors have been reported, including multi-agent chemotherapy, surgical resection, the number of liver metastases, CA19-9 reduction, tumor differentiation, a negative resection margin of the liver tumor, preoperative chemotherapy, and adjuvant chemotherapy.10,19,22,23 However, in this study, adjuvant chemotherapy and a negative resection margin of the liver tumor were not significantly associated with OS. Because we could not collect information about adjuvant therapy for eight patients who were followed at outside hospitals after surgery, and two patients had not yet started scheduled adjuvant therapy at the time of analysis, these 10 patients were excluded from that analysis. Therefore, missing data and the small sample size may have affected the results. Poor differentiation of the primary tumor was one of the independent poor prognostic factors. This result may indicate that tumor biology strongly affects the survival of patients with liver-metastatic pancreatic cancer, and it suggests that the histopathological result of a preoperative biopsy may be one criterion in deciding between surgical resection and continued systemic treatment, even for selected patients with liver metastases.
This study has several limitations. First, it was a retrospective study at a single institution. Second, diagnostic evaluation of the liver was not uniform for all patients, especially for patients who underwent unplanned surgery. As such, the liver lesions found at exploration may be just the tip of the iceberg, with additional "occult" lesions remaining in the liver; in that case, recurrence may be detected early. In fact, of the seven patients who underwent unplanned surgery and developed a recurrence, five recurred in the liver, with a median time to recurrence of 2.3 months. Third, we may not have been able to evaluate the prognosis of all PDAC patients with isolated liver metastasis, because all cases without histopathological evidence of liver metastases in preoperative or resected specimens were excluded. This means that cases in which chemotherapy had been markedly effective and a complete response to preoperative chemotherapy had been achieved may have been excluded. However, this is not only a limitation but also one of the most important criteria in this study: we consider the presence of histopathological evidence to be important in assessing the accurate prognosis of patients with PDAC with isolated liver metastasis. Cases in which diagnostic imaging was highly suggestive of liver metastasis but a confirmatory biopsy was not performed, and the patient underwent pancreatectomy "only" following systemic chemotherapy (because of the disappearance of the liver metastases), were also excluded. These cases are similar to the cohort of 24 patients reported by Frigerio et al.,17 in which median survival was 56 months. Fourth, several new anticancer agents were introduced during the study period. This makes it difficult to assess the potential impact of specific chemotherapeutics on clinical outcomes in patients undergoing liver metastasectomy. The multi-agent chemotherapy regimens FOLFIRINOX and GnP, in particular, have been shown to improve survival in metastatic pancreatic cancer. Furthermore, it was also challenging to determine the optimal duration of preoperative treatment in this study. Fifth, advances in diagnostic imaging have improved the accuracy of cancer staging. Although the impact of imaging technology on the management of pancreatic cancer is beyond the scope of this work, it likely influences the detection of liver metastasis. As such, we acknowledge this as a limitation of the study and recognize that it may have affected treatment decisions and/or the clinical subgroups in the analysis.
Previous studies reported that survival of patients undergoing aggressive surgery was longer than that of patients without surgery,18,24,25 whereas Dunshcede et al.9 and Gleisner et al.26 reported no survival advantage for patients undergoing synchronous liver resection compared with patients treated without surgery. To date, there are no published randomized controlled trials evaluating surgical resection versus systemic treatment alone for pancreatic cancer with isolated liver metastasis. Therefore, there is no high-level evidence that aggressive surgery for oligometastatic pancreatic cancer contributes to prolonged survival. Even if patients are well selected, it is necessary to compare survival between surgical and non-surgical cases. A prospective trial to evaluate the role of complete macroscopic cytoreduction in such patients is warranted. Our study provides a foundation, but additional studies are necessary to determine the role of liver resection and pancreatic surgery for patients with pancreatic cancer and oligometastatic liver disease after induction chemotherapy, similar to the treatment paradigm for metastatic colorectal cancer.
CONCLUSION
In conclusion, our study shows that liver metastasectomy and pancreatic resection for patients with pancreatic cancer and isolated liver metastasis can result in meaningful survival, especially in selected patients undergoing planned surgery after treatment with modern, multi-agent chemotherapy. A prospective trial to evaluate the role of aggressive surgery for patients with isolated liver metastasis following induction chemotherapy is warranted.

FIGURE 1.
Overview of patient cohorts based on identification of liver lesion(s) on initial imaging, sequence of chemotherapy and surgical resection, and planned versus unplanned liver metastasectomy.
FIGURE 2.
Kaplan-Meier survival curves for patients with pancreatic cancer and isolated liver metastasis undergoing oncologic resection showing (a) overall survival from initial diagnosis (median overall survival: 21.9 months), (b) overall survival from completion of surgery (median overall survival: 12.3 months), and (c) recurrence-free survival (median recurrence-free survival: 6.1 months) for all patients in the study.
FIGURE 3.
Kaplan-Meier survival curves for patients with pancreatic cancer and isolated liver metastasis undergoing oncologic resection showing (a) overall survival from initial diagnosis, (b) overall survival from completion of surgery, and (c) recurrence-free survival for the cohort of patients undergoing planned surgery after preoperative chemotherapy (solid line) compared to all other patients in the study (dashed line).
FIGURE 4.
Kaplan-Meier survival curves for patients with pancreatic cancer and isolated liver metastasis undergoing oncologic resection showing (a) overall survival from initial diagnosis and (b) overall survival from completion of surgery for patient cohort groupings of planned surgery after preoperative chemotherapy (solid line), planned upfront surgery (dashed line), unplanned surgery after preoperative chemotherapy (dotted line) and unplanned upfront surgery (dashed/dotted line).
TABLE 3.
Univariate and multivariate Cox proportional hazards regression analysis of prognostic factors for overall survival from diagnosis.
Characterization of the degree sequences of (quasi) regular uniform hypergraphs
In hypergraph theory, determining a characterization of the degree sequence $d=(d_1,d_2,\ldots,d_n)$, where $d_1\ge d_2\ge\cdots\ge d_n$ are positive integers, of an $h$-uniform simple hypergraph $\cal H$, and deciding the complexity status of the reconstruction of $\cal H$ from $d$, are two challenging open problems. They can be formulated in the context of discrete tomography: one asks whether there is a matrix $A$ with positive projection vectors $H=(h,h,\ldots,h)$ and $V=(d_1,d_2,\ldots,d_n)$ and with distinct rows. In this paper we consider the two subcases where the vector $V$ is a homogeneous vector, and where $V$ is almost homogeneous, i.e., $d_1-d_n=1$. We give a simple characterization for these two subcases, and we show how to solve the related reconstruction problems in polynomial time. To reach our goal, we use the concepts of Lyndon words and necklaces of fixed density, and we apply some already known algorithms for their efficient generation.
Introduction
The degree sequence, also called graphic sequence, of a simple graph (a graph without loops or parallel edges) is the list of vertex degrees, usually written in nonincreasing order, as $d = (d_1, d_2, \ldots, d_n)$, $d_1 \ge d_2 \ge \cdots \ge d_n$. The problem of characterizing the graphic sequences of graphs was solved by Erdös and Gallai (see [4]):

Theorem 1. (Erdös, Gallai) A sequence $d = (d_1, d_2, \ldots, d_n)$ where $d_1 \ge d_2 \ge \cdots \ge d_n$ is graphic if and only if $\sum_{i=1}^{n} d_i$ is even and $\sum_{i=1}^{k} d_i \le k(k-1) + \sum_{i=k+1}^{n} \min\{k, d_i\}$, for $1 \le k \le n$.

A hypergraph $H = (Vert, E)$ is defined as follows (see [5]): $Vert = \{v_1, \ldots, v_n\}$ is a ground set of vertices and $E \subset 2^{Vert} \setminus \{\emptyset\}$ is the set of hyperedges, such that $e \not\subset e'$ for any pair $e, e'$ of $E$. The degree of a vertex $v \in Vert$ is the number of hyperedges $e \in E$ such that $v \in e$. A hypergraph $H = (Vert, E)$ is $h$-uniform if $|e| = h$ for all hyperedges $e \in E$. Moreover $H = (Vert, E)$ has no parallel hyperedges, i.e., $e \ne e'$ for any pair $e, e'$ of hyperedges. Thus a simple graph (loopless and without parallel edges) is a 2-uniform hypergraph.
The problem of the characterization of the degree sequences of $h$-uniform hypergraphs is one of the most relevant among the unsolved problems in the theory of hypergraphs [5], even for the case of 3-uniform hypergraphs. For this latter case, Kocay and Li showed that any two 3-uniform hypergraphs with the same degree sequence can be transformed into each other using a sequence of trades [9]. Furthermore, the complexity status of the reconstruction problem is still open.
This problem has been related to a class of problems that are of great relevance in the field of discrete tomography. More precisely, the aim of discrete tomography is the retrieval of geometrical information about a physical structure, regarded as a finite set of points in the integer lattice, from measurements, generically known as projections, of the number of atoms in the structure that lie on lines with fixed slopes. A common simplification is to represent a finite physical structure as a binary matrix, where an entry is 1 or 0 according to the presence or absence of an atom in the structure at the corresponding point of the lattice. One of the challenging problems in the field is then to reconstruct the structure, or, at least, to detect some of its geometrical properties from a small number of projections. One can refer to the books of G.T. Herman and A. Kuba [15,16] for further information on the theory, algorithms and applications of this classical problem in discrete tomography.
Here we recall the seminal result in the field of discrete tomography due to Ryser [20]. Let $H = (h_1, \ldots, h_m)$, $h_1 \ge h_2 \ge \cdots \ge h_m$, and $V = (v_1, \ldots, v_n)$, $v_1 \ge v_2 \ge \cdots \ge v_n$, be two nonnegative integral vectors, and let $U(H, V)$ be the class of binary matrices $A = (a_{ij})$ satisfying $\sum_{j=1}^{n} a_{ij} = h_i$ for $1 \le i \le m$ and $\sum_{i=1}^{m} a_{ij} = v_j$ for $1 \le j \le n$. In this context $H$ and $V$ are called the row, respectively column, projection of $A$, as depicted in Fig. 1. Denoting by $\bar V = (\bar v_1, \bar v_2, \ldots)$ the conjugate sequence, also called the Ferrer sequence, of $V$, where $\bar v_i = |\{j : v_j \ge i\}|$, Ryser gave the following [20]: the class $U(H, V)$ is nonempty if and only if $\sum_{i=1}^{m} h_i = \sum_{j=1}^{n} v_j$ and $\sum_{i=1}^{k} h_i \le \sum_{i=1}^{k} \bar v_i$ for each $1 \le k \le m$. Moreover this characterization, and the reconstruction of $A$ from its two projections $H$ and $V$, can be done in polynomial time (see [15]). Some applications in discrete tomography requiring additional constraints can be found in [1,7,2,12,17,18,19,24].
As shown in [4], this problem is equivalent to the reconstruction of a bipartite graph $G = (H, V, E)$ from its degree sequences $H = (h_1, \ldots, h_m)$ and $V = (v_1, \ldots, v_n)$. Numerous papers give generalizations of this problem for graphs with colored edges (see [3,6,10,11,14]).
So, in this context, the problem of the characterization of the degree sequence $(d_1, d_2, \ldots, d_n)$ of an $h$-uniform hypergraph $H$ (without parallel edges) asks whether there is a binary matrix $A \in U(H, V)$ with nonnegative projection vectors $H = (h, h, \ldots, h)$ and $V = (d_1, d_2, \ldots, d_n)$ and with distinct rows, i.e., $A$ is the incidence matrix of $H$, where rows and columns correspond to hyperedges and vertices, respectively. To our knowledge the problem of the reconstruction of a binary matrix with distinct rows has not been studied in discrete tomography.
In this paper, we carry on our analysis in the special case where the $h$-uniform hypergraph to reconstruct is also $d$-regular, i.e., each vertex $v$ has the same degree $d$; in other words, the vector of the vertical projections is homogeneous, i.e., $V = (d, \ldots, d)$. We also study the problem where the $h$-uniform hypergraph to reconstruct is almost $d$-regular, i.e., $V = (d, \ldots, d, d-1, \ldots, d-1)$; in other words, the hypergraph has span one.
We focus both on the decision problem and on the related reconstruction problem, i.e., the problem of determining the existence of an element of $U(H, V)$ consistent with $H$ and $V$ and, in the affirmative case, how to quickly reconstruct it. To accomplish these tasks, we will design an algorithm that runs in polynomial time with respect to the dimensions $m$ and $n$ of the matrix to reconstruct. The algorithm relies on the concepts of Lyndon words and necklaces of fixed density, and uses an already known algorithm for their efficient generation.
Definitions and introduction of the problems
Let $A$ be a binary matrix having $m$ rows and $n$ columns, and let us consider the two integer vectors $H = (h_1, \ldots, h_m)$ and $V = (v_1, \ldots, v_n)$ of its horizontal and vertical projections, respectively, as defined in Section 1 (see Fig. 1). In this paper we will consider some specialized versions of the following general problems:

Consistency(H, V, C)
Input: two integer vectors $H$ and $V$, and a class of discrete sets $C$.
Question: does there exist an element of $C$ whose horizontal and vertical projections are $H$ and $V$, respectively?

Reconstruction(H, V, C)
Input: two integer vectors $H$ and $V$, and a class of discrete sets $C$.
Task: reconstruct a matrix $A \in C$ whose horizontal and vertical projections are $H$ and $V$, respectively, if it exists, otherwise give failure.
In [21], Ryser gave a characterization of the instances of Consistency(H, V, C), with $C$ being the class of the binary matrices, that admit a positive answer. He moved from the following trivial conditions that are necessary for the existence of a matrix consistent with two generic vectors $H$ and $V$ of projections:
Condition 1: for each $1 \le i \le m$ and $1 \le j \le n$, it holds $h_i \le n$ and $v_j \le m$;
Condition 2: $\sum_{i=1}^{m} h_i = \sum_{j=1}^{n} v_j$;
and then he added a third one to obtain the characterization, as recalled in the Introduction.
The authors of [8] pointed out that these two conditions are also sufficient in the case of homogeneous horizontal and vertical projections, by showing their maximality w.r.t. the cardinality of the related sets of solutions.
Ryser defined a well-known greedy algorithm to solve Reconstruction(H, V, C) that does not compare the obtained rows, and it does not admit an easy generalization to perform this further task.
In the sequel, we are going to consider the class of binary matrices having no equal rows and homogeneous horizontal projections, due to its connections, as mentioned in the Introduction, with the characterization of the degree sequences of $h$-uniform hypergraphs. Among them, we restrict our analysis to those matrices that are also, first, $d$-regular, i.e., whose vertical projections are also homogeneous: $H = (h, \ldots, h)$ and $V = (v, \ldots, v)$; we denote this class by $E$; and, second, almost $d$-regular, i.e., whose vertical projections are almost homogeneous: $H = (h, \ldots, h)$ and $V = (v, \ldots, v, v-1, \ldots, v-1)$; we denote this class by $E_1$. Now, we state a third necessary condition for answering Consistency(H, V, E) (and also Consistency(H, V, E_1), as we will see in Section 5): Condition 3: $v \le \frac{h}{n}\binom{n}{h}$. Condition 3 can be rephrased, in our setting, as follows: there does not exist a matrix having $H = (h, \ldots, h)$ and $V = (v, \ldots, v)$ as homogeneous projections and more than $\binom{n}{h}$ different rows; otherwise at least two rows will be identical. We will prove that the three Conditions 1, 2, and 3 are also sufficient to solve (in linear time) the problem Consistency(H, V, E).
To this aim, we use an approach different from those standardly used in Discrete Tomography: we consider each row of a matrix in E as a binary word, and we group them into equivalence classes according to their cyclic shifts, as defined in the next section.
The problem Consistency(H, V, E)
Let us consider each row of a binary matrix as a binary finite word $u = u_1 u_2 \ldots u_n$, whose length $n$ is the number of columns of the matrix, and whose number $h$ of 1-elements is the value of the horizontal projection.
We note that applying a cyclic shift to the word $u$, denoted by $s(u)$, we obtain a different word $s(u) = u_2 u_3 \ldots u_n u_1$, except in the cases $u = (1)^n$ or $u = (0)^n$, of the same length and having the same number of 1-elements inside. Iterating the shift of a word $u$, we obtain a sequence of different words that, row wise arranged as a matrix, belong to $E$. We indicate with $s^k(u)$, where $k \ge 0$, the application of $k$ shifts to the word $u$.
Unfortunately the words repeat after at most $n$ shifts, and consequently the vertical projections of the obtained matrix are upper bounded by $n$, so, in general, only a submatrix of a solution of Reconstruction(H, V, E) is achieved (see Fig. 2). The following trivial result holds:

Proposition 1. Let $u$ be a binary word of length $n$ having $h \le n$ 1-elements inside. Let us consider the $n \times n$ matrix $A$ obtained by row wise arranging the $n$ cyclic shifts of $u$. Then, $A$ has horizontal and vertical projections equal to $h$.
As already noticed, the rows of the matrix A may not all be different. Throughout the paper we will denote by M (u) the matrix obtained by row wise arranging all the different cyclic shifts of a word u. To establish how many different rows can be obtained by shifting a given binary word, we need to recall the definitions and main properties of necklaces and Lyndon words.
Following the notation in [22], a binary necklace (briefly, necklace) is an equivalence class of binary words under cyclic shift. We identify a necklace with the lexicographically least representative $u$ in its equivalence class, denoted by $[u]$. The set of all (the words representative of) the necklaces of length $n$ is denoted $N(n)$. For example, $N(4) = \{0000, 0001, 0011, 0101, 0111, 1111\}$. An important class of necklaces are those that are aperiodic. An aperiodic (i.e., of period exactly $n$) necklace is called a Lyndon word. Let $L(n)$ denote the set of all Lyndon words of length $n$. For example, $L(4) = \{0001, 0011, 0111\}$.
We denote fixed-density necklaces and Lyndon words in a similar manner by adding the parameter $d$ to represent the number of 1-elements in the words. We refer to the number $d$ as the density of the word. Thus the set of necklaces with density $d$ is represented by $N(n, d)$, and the set of Lyndon words with density $d$ is represented by $L(n, d)$. For example, $N(4, 2) = \{0011, 0101\}$, and $L(4, 2) = \{0011\}$.
It is known from Gilbert and Riordan [13] that the numbers of fixed-density necklaces and Lyndon words are, respectively,
$$|N(n,d)| = \frac{1}{n} \sum_{j \mid \gcd(n,d)} \varphi(j) \binom{n/j}{d/j} \qquad \text{and} \qquad |L(n,d)| = \frac{1}{n} \sum_{j \mid \gcd(n,d)} \mu(j) \binom{n/j}{d/j},$$
where the symbols $\varphi$ and $\mu$ refer to the Euler and Möbius functions. Now we enlighten the connection between these objects and our problem, refining Proposition 1: if $u$ is a word of length $n$ and density $h \le n$, then the cardinality of $[u]$ (i.e., the number of rows of $M(u)$) is a divisor of $n$.
As a consequence, we have
$$\binom{n}{h} = \sum_{d \mid \gcd(n,h)} \frac{n}{d}\, \left|L\!\left(\frac{n}{d}, \frac{h}{d}\right)\right|.$$
This equation is an immediate consequence of the fact that each word of length $n$ and density $h$ belongs to exactly one necklace.
Proof. Let us proceed by contradiction, assuming that there does not exist a Lyndon word whose length is $n/d < m$, for each $d \in \{d_1, \ldots, d_t\}$. Since $H$ and $V$ are homogeneous and satisfy Conditions 1 and 2, there exists a matrix $A$ having $H$ and $V$ as projections (a consequence of Ryser's characterization of solvable instances, as stated in [8], Theorem 3). Let us assume that $d = d_t = \gcd\{n, h\}$, $h' = h/d$, and $n' = n/d$; from Condition 2, it holds $h'm = vn'$ with $n'$ and $h'$ coprime, so $v = h'(m/n')$, and $n'$ divides $m$. The hypothesis $n' > m$ leads to a contradiction.
Fig. 2. The first 12 rows of the matrix are obtained by row wise arranging the 12 different cyclic shifts of the Lyndon word $u = (0)^6(1)^6$. Such a submatrix $M(u)$ has horizontal and vertical projections equal to 6, that is, the density of $u$.
Let us assume that d = d t = gcd{n, h}, h ′ = h/d, and n ′ = n/d; from Condition 1, it holds h ′ m = vn ′ with n ′ and h ′ coprime, so v = h ′ (m/n ′ ), and n ′ divides m. The hypothesis n ′ > m leads to a contradiction. Theorem 3 can be rephrased saying that if H and V are homogeneous consistent vectors of projections, then there exists a solution that contains all the elements of a necklace [u]. The solution in linear time of Consistency(H, V, E) is a neat consequence: Corollary 2. Let H and V be two homogeneous vectors satisfying Conditions 1, 2, and 3. There always exists a matrix having different rows, and H and V as projections.
The result of Theorem 3, together with the following proposition that points out a property of the necklace whose representative is $u = (0)^{n-h}(1)^h$, will be used in the next section to solve Reconstruction(H, V, E).

Proposition 4. Let $u'$ be an element of $[u]$. The elements $u', s^h(u'), s^{2h}(u'), \ldots, s^{(k-1)h}(u')$, with $k = n/\gcd\{n, h\}$, form a subclass of $[u]$, and they can be arranged in a matrix $A'$ such that:
1. $A'$ has homogeneous horizontal and vertical projections;
2. $A'$ is minimal with respect to the number of rows among the matrices having $H$ as horizontal projections, and homogeneous vertical projections.
The proof directly follows from the properties of the greatest common divisor. Let us denote with $M_0(u)$ the matrix $A'$ defined in Proposition 4, and with $M_i(u)$ the matrix defined in the same way starting from the word $u = (1)^i (0)^{n-h} (1)^{h-i}$, with $0 \le i < \gcd\{n, h\} (= n/k)$.
An algorithm to solve Reconstruction(H, V, E)
We start by recalling that in [23] a constant amortized time (CAT) algorithm FastFixedContent for the exhaustive generation of the necklaces $N(n, h)$ of fixed length and density is presented. The author then shows that a slight modification of his algorithm can also be applied for the CAT generation of the Lyndon words $L(n, h)$. In particular, his algorithm (here denoted GenLyndon(n, h)) constructs a generating tree of the words, and since the tree has height $h$, the computational cost of generating $k$ words of $L(n, h)$ is $O(k \cdot h \cdot n)$.

Algorithm Rec(H, V, E)
Output: An element of the class $E$ having $H$ and $V$ as horizontal and vertical projections, respectively.
Step 1: compute the sequence $d_0 = 1 < d_1 < d_2 < \cdots < d_t$ of the common divisors of $n$ and $h$, and initialize the matrix $A_{-1} = \emptyset$.
Step 2.2: create the matrix $A_i$, obtained by row wise arranging the matrices $A_{i-1}$ and $M((u_j)^{d_i})$, for $j = 1, \ldots, q$.
If $v = 0$ then output $A_i$; else, if $q = |L(n, h)|$, create the matrix $A$ obtained by row wise arranging the matrix $A_i$ with the column wise arrangement of $d_i$ copies of the matrices $M(u)_j$, with $u = (0)^{n-h}(1)^h$, $j = 0, \ldots, q'-1$, and $q' = v \cdot \gcd\{n, h\}/h$; else update $n = n/d_{i+1}$ and $h = h/d_{i+1}$.
A brief explanation of Step 2.2 is needed: for each common divisor $d_i$ of $n$ and $h$, the algorithm considers all the Lyndon words of $L(n/d_i, h/d_i)$; if the matrices obtained from them can be stuffed inside the solution matrix, then the algorithm performs this action and starts the step again with $i = i + 1$; otherwise the algorithm sets aside the word $u = (0)^{(n-h)/d_i}(1)^{h/d_i}$ and stuffs the matrices obtained from the other Lyndon words into the solution matrix. Since the remaining vertical projections $v'$ are less than $h/d_i$, the matrices $M(u)_j$, with $1 \le j \le q'$ as defined in Proposition 4, can be used to fill the gap, without continuing to generate the elements of $L(n/d_{i+1}, h/d_{i+1})$.
The second run of Step 2 starts, and GenLyndon(3, 1) generates the Lyndon word 001. The final matrix $A_1$ is created by row wise arranging $A_0$ with the matrix $M((001)^2)$, as shown in Fig. 3, on the right.
A second example concerns the use of the word $(0)^{n-h}(1)^h$ that in certain cases is set aside from the sequence of Lyndon words generated in Step 2: the instance we consider is $H = (3, \ldots, 3)$ of length $m = 15$, and $V = (5, \ldots, 5)$ of length $n = 9$. In Step 1 the values $d_0 = 1$ and $d_1 = 3$ are set.
Note that without the use of the Lyndon word 000000111, the procedure is not able to reach the solution since, in the second run of Step 2, GenLyndon(3, 1) generates only one Lyndon word, i.e., 001, whose matrix $M((001)^3)$ has homogeneous vertical projections equal to 1, not enough to reach the desired value 2.
The validity of Rec(H, V, E) is a simple consequence of Theorem 3. Clearly, the obtained matrix has homogeneous horizontal and vertical projections, equal to $h$ and $v$, respectively, and, by construction, all the rows are distinct. Moreover, the algorithm always terminates since at each iteration we add as many rows as possible to the final solution. Concerning the complexity analysis, we need to generate $O(m)$ different Lyndon words and shift each of them $O(n)$ times. So, since the algorithm GenLyndon(n, h) requires $O(k \cdot h \cdot n)$ steps to generate $k$ words of $L(n, h)$, the whole process takes polynomial time.
Remark: Let us consider the special case where $H = V = (h, \ldots, h)$ with $0 < h < n$. Step 2.1 of Rec gives $q = 1$, and Rec(H, H, E) returns the matrix $A_0$ with first row $(0)^{n-h}(1)^h$. Hence we remark that $^t\!A_0 = A_0$, and so any two columns are different.
In a graph $G$, a twin is a pair of vertices $\{u, v\}$ such that $u$ and $v$ have the same neighborhood.

Corollary 3. Given two positive integers $n$ and $k$, the construction of a $k$-regular bipartite graph $G = (X, Y, E)$, $|X| = |Y| = n$, without twins, if any, can be done in polynomial time. Moreover the following condition characterizes the degree sequences of $k$-regular bipartite graphs without twins:
Proof. It directly follows from the remark just above.
Reconstruction of an h-uniform hypergraph with span one
Let us consider the case where the $h$-uniform hypergraph to reconstruct is almost $d$-regular; in other words, its degree sequence has span one, i.e., its vertical projections are $V = (v, \ldots, v, v-1, \ldots, v-1)$. So, let us indicate with $E_1$ the set of matrices having different rows, homogeneous horizontal projections, and vertical projections of span one. In order to solve this problem we will use the algorithm Rec(H, V, E) designed in the previous section. Again in [8], it has been proved that also for span one projection vectors, Conditions 1 and 2 are sufficient to ensure the existence of a compatible matrix; again Condition 3, formulated with $E_1$ instead of $E$, succeeds in forcing that matrix to belong to the set $E_1$. So, let us consider the following algorithm that relies on Rec(H, V, E):

Algorithm RecSpan1(H, V, E_1)
Output: An element $A_1$ of the class $E_1$ having $H$ and $V$ as horizontal and vertical projections, respectively.
Step 1: let $n_0$ and $n_1$ be the numbers of elements $v$ and $v-1$ of $V$, respectively, and set $k$ to be the least integer that is a multiple of both $h$ and $n$ and greater than $h \cdot m$. Create the homogeneous vectors of projections $H'$ and $V'$ such that $H' = (h, \ldots, h)$ has length $m' = k/h > m$, and $V' = (v', \ldots, v')$ has length $n$ and $v' = k/n$.
Step 2: run Rec($H'$, $V'$, $E$), and let $A$ be its output matrix.
Step 3: act on the submatrix $M_0(u)$ of $A$, as defined in Proposition 4, by deleting the rows $s^{ih}(u)$, with $0 \le i < t$, and $t = (n(v' - v) + n_1)/h$. Give the obtained matrix $A_1$ as output, after rearranging the columns in order to obtain the desired sequence of vertical projections.
Finally a rearrangement of the columns is needed in order to make the matrix compatible with the starting vector $V$, as in Fig. 5, on the right.
Fig. 5. The matrix obtained from the matrix of Fig. 4, which is the output of Rec($H'$, $V'$, $E$); on the right, a rearrangement of its columns makes it compatible with the initial sequence $V$.
More precisely, columns 4, 5, and 6 are shifted to the first three positions, preserving their order.
Remark: a rearrangement of the columns of a matrix causes a related rearrangement of the elements of the vector of vertical projections, without modifying the values of its elements. Furthermore, it is straightforward that such a rearrangement also preserves the pairwise distinctness of the rows.
The correctness of RecSpan1(H, V, E_1) follows after observing that: i) by definition of $k$, it holds $k/n - v < h/\gcd\{n, h\}$.
In words, this means that the reconstructed matrix $A$ compatible with the homogeneous vectors $H'$ and $V'$ is a minimal one, w.r.t. the dimensions, including $A_1$. Furthermore, the difference between the numbers of rows of $A_1$ and $A$ is less than the number of rows of $M_0(u)$; ii) in Step 2, the vectors $H'$ and $V'$ satisfy Condition 3 by definition of $k$, and since $H$ and $V$ do. As a consequence, the call of Rec($H'$, $V'$, $E$) always reconstructs a matrix $A$; iii) it is straightforward that the algorithm Rec($H'$, $V'$, $E$) always inserts the submatrix $M_0(u)$, with $u = (0)^{n-h}(1)^h$, in $A$. So, the deletion of the rows $s^{ih}(u)$, according to the consecutive values of $i$ ranging from 1 to $t$, forces the vertical projections to maintain two different consecutive values at each step, reaching the desired values.
It is straightforward that the complexity of RecSpan1(H, V, E_1) is the same as that of Rec(H, V, E).
Conclusion
The question of necessary and sufficient conditions for the existence of a simple hypergraph $H = (Vert, E)$, $|Vert| = n$, $|E| = m$, with a given degree sequence is a long outstanding open question, even in the case of a 3-uniform hypergraph ($|e| = 3$ for each $e \in E$). In this paper, we answered this question in the special case where $H$ is $h$-uniform and $d$-regular or $H$ is $h$-uniform and almost $d$-regular, i.e., the degree sequence of $Vert$ is $(d_1 = v, d_2 = v, \ldots, d_n = v)$ or $(d_1 = v, \ldots, d_{n_0} = v, d_{n_0+1} = v-1, \ldots, d_{n_0+n_1} = v-1)$, respectively. Merging the results of the three previous sections, we can state that such degree sequences are characterized by Conditions 1 and 2 together with: 3. $v \le \frac{h}{n}\binom{n}{h}$. Moreover, given a degree sequence satisfying these conditions, we give two linear time (in the size of the incidence matrix) algorithms that construct an $h$-uniform $d$-regular hypergraph or an $h$-uniform almost $d$-regular hypergraph. A next step towards the characterization of the degree sequences of simple hypergraphs would be their study for the subclass of uniform hypergraphs (in particular 3-uniform hypergraphs) with span $k$, i.e., where the degree of any vertex ranges over $\{v-k, v-k+1, \ldots, v\}$, a set of $k+1$ successive values, where $k \ge 2$ is a fixed integer.
A next step to the characterization of the degree sequence of a simple hypergraph would be its study for the subclass of uniform hypergraphs (in particular three uniform hypergraphs) with span k, i.e. the degree of any vertex ranges from {v − k, v − k + 1, . . . , v} a set of k successive values, where k ≥ 2 is a fixed integer. | 2013-09-30T13:09:16.000Z | 2013-09-30T00:00:00.000 | {
"year": 2013,
"sha1": "a85dfe129742d3156e2d9ea39688d794ca910a81",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "981286c385764d0182cc59a4e7d5006ca2505bbd",
"s2fieldsofstudy": [
"Mathematics",
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Large-scale multivariate dataset on the characterization of microbiota diversity, microbial growth dynamics, metabolic spoilage volatilome and sensorial profiles of two industrially produced meat products subjected to changes in lactate concentration and packaging atmosphere
Data in this article provide detailed information on the diversity of bacterial communities present on 576 samples of raw pork or poultry sausages produced industrially in 2017. Bacterial growth dynamics and diversity were monitored throughout the refrigerated storage period to estimate the impact of the packaging atmosphere and the use of potassium lactate as chemical preservative. The data include several types of analysis aiming at providing a comprehensive microbial ecology of spoilage during storage and at showing how the process parameters influence this phenomenon. The analyses include: the gas content in packaging, pH, chromametric measurements, plate counts (total mesophilic aerobic flora and lactic acid bacteria), sensorial properties of the products, meta-metabolomic quantification of volatile organic compounds, and bacterial community metagenetic analysis. Bacterial diversity was monitored using two types of amplicon sequencing (16S rRNA and gyrB encoding genes) at different time points for the different conditions (576 samples for gyrB and 436 samples for 16S rDNA). Sequencing data were generated using Illumina MiSeq. The sequencing data have been deposited in the bioproject PRJNA522361. Sample accession numbers vary from SAMN10964863 to SAMN10965438 for the gyrB amplicon and from SAMN10970131 to SAMN10970566 for 16S.
© 2020 The Author(s). Published by Elsevier Inc. This is an open access article under the CC BY license.
Specifications table

Subject area: Applied Microbiology and Biotechnology
More specific subject area: Microbial ecology of food spoilage during industrial production
Data format: Raw, analyzed
Parameters for data collection: The type of meat used for the production of sausages was the first parameter analyzed (raw pork sausages and raw poultry sausages). The second parameter was the time of storage, from the raw materials (primary cuts, gut casing, spices and fat) up to sausages at the end of storage. The third parameter was meat batch variability along the ten sampling campaigns conducted for each meat product. Finally, we analyzed two process parameters: the influence of three doses of potassium lactate (complete dose, half dose or zero dose) used as preservative, as well as three packaging atmospheres (air and two modified atmospheres, i.e., 70%O2/30%CO2 and 50%N2/50%CO2).
Description of data collection: Meat samples were collected directly in two independent factories in France, just out of the production line. The large-scale sampling strategy was organized over a period of six months from July to
Value of the data
• The data provide a link between meat spoilage (bacterial counts, pH, color, non-targeted quantification of volatile metabolic spoilage compounds, further referred to as the "volatilome", and sensorial profiles), the packaging atmosphere, the use of lactate, and microbiota composition.
• Sequencing data can be used to understand the variation of bacterial community dynamics, abundance and diversity in these two meat products according to the type of meat, the packaging atmosphere, and the use of lactate.
• Sequencing data can be used to identify biomarkers of spoilage. Accessibility of 16S rDNA and gyrB OTU (Operational Taxonomic Unit) data and of the detailed associated metadata allows researchers to perform new analyses with their own research purposes.
• Ten independent sampling campaigns were conducted for two meat products, and a wide number of conditions were tested (three lactate concentrations and three packaging atmospheres). This large-scale sampling strategy, in which over 550 samples were sequenced at four different time points (from day 0 to day 22 of storage), provides a unique dataset for powerful statistical analysis of the process parameters influencing raw meat sausage spoilage.
Data description
In the EU, 20% of the initial meat production is lost, more than half occurring at the animal production, slaughtering, processing and distribution steps [1,2]. This crucial economic and environmental issue in the food industry is in part attributed to spoilage during storage, which is the consequence of bacterial growth and subsequent metabolic activities causing organoleptic changes that render the final product unacceptable for human consumption (defects in texture, color, odor, taste or aspect) [3].
For industrial meat producers, predicting accurate use-by dates for their products based on spoilage occurrence remains a major challenge. The lack of large-scale multivariate data associated with specific meat productions is one of the gaps that needs to be filled before such a challenge can be overcome. The objective of this project was thus to provide such a dataset, based on a large collaborative project between academic partners, technical centers, and two industrial producers. We collected, over a six-month production period, comprehensive information on the dynamics of bacterial communities and physico-chemical variables associated with meat spoilage along the processing steps. Processed pork and poultry meat (sausages) were chosen as two examples of spoilage-sensitive meat products that are among the most consumed in France. Products included all sausage ingredients (meat cuts, gut casing, spice mixes, and fat) to cover the whole meat process. The experimental work consisted in determining the diversity and the dynamics of bacterial communities during food processing and storage, and in correlating them with spoilage occurrence. In the frame of this project, sensory and non-targeted metabolic volatilome analyses were performed in order to characterize meat product spoilage in relation with food processing factors such as the concentration of chemical preservatives (potassium lactate) and the gas composition of the storage packaging. The originality of this project was to integrate biotic (microbial ecology), abiotic (sensory attributes and storage conditions) and temporal factors (monitoring along the processing steps) to generate heterogeneous multivariate data for use-by-date predictive mathematical models. Fig. 1 illustrates the global experimental design of this study. Links to the supplementary data (available as spreadsheets) presented in Table 1 include all the metadata characterizing each sample, as well as the samples that were selected for the different analyses (16S rDNA sequencing, gyrB sequencing, volatilome and sensorial profile).

Table 1. List of supplementary tables (available as spreadsheets at https://doi.org/10.15454/UDQLGE) making up the dataset.
Table S1: Sample nomenclature and metadata associated with each sample.
Table S2: pH dynamics over time for the different meat products.
Table S3: Chromametric measures for the different meat products.
Table S4: Gas composition (%) of the packaging atmosphere over time for the different meat products.
Table S5: Total mesophilic aerobic flora (log10 CFU.g-1) over time for the different meat products.
Table S6: Lactic acid bacteria (log10 CFU.g-1) over time for the different meat products.
Table S7: Non-targeted metabolic volatilome composition (reduced centered normalization of peak areas) over time for the different meat products.
Table S8: Sensorial profiles over time for the different meat products.
Table S9: Microbial diversity analysis based on 16S rDNA V3-V4 region amplicon sequencing, including OTU abundance table, OTU taxonomic assignment table and samples metadata table usable for phyloseq R package analysis [4].
Table S10: Microbial diversity analysis based on gyrB amplicon sequencing, including OTU abundance

Experimental design and sampling

The project focused on the monitoring of two food matrices: pork sausages and poultry sausages. For pork sausages, two types of meat pieces were used: normal mid-shoulder meat and defatted/boneless/derinded shoulder meat (also named shoulder 4D). For poultry sausages, the meat was from turkey. In both sausages, pork fat was added to a final content of 20% and 11% in pork and poultry sausages, respectively. The following additives were added in pork sausages: potassium lactate (1.13% w/w, corresponding to the full normal dose), sodium acetate (0.27% w/w) and sodium ascorbate (0.06% w/w). In poultry sausages, the following additive was added: potassium lactate (2.0% w/w, corresponding to the full normal dose). Furthermore, both sausages were battered with a spice mix added as an ingredient at a concentration of ∼2.5% and containing sodium salt (∼40%), dextrose (∼10%), spices (∼15%) and aromas (∼22%). Ten sampling campaigns were conducted during 6 months on the production chains of two sausage producers in order to get a sufficient number of independent biological replicates. For each campaign and each food matrix, an identical sampling strategy was applied. Four types of samples corresponding to the ingredients constituting sausages were collected: primal cuts, gut casing, fat and spices. Three meat batters were separately prepared with different doses of lactate (the normal dose routinely used by the producers, half of this dose, and no lactate). The three meat batters were sampled before their embossing into the tubular gut casing. Pork sausages were packed immediately after embossing, whereas poultry sausages were packed 2 days later. Sausages were packaged by five into trays under three different atmospheres (normal air; 50%CO2-50%N2; and 70%O2-30%CO2), sealed with a thin high-barrier polyester-based film PET/EVOH (copolymer of ethylene and vinyl alcohol)/PE (polyethylene), and stored during the first 5 days at 4 °C and then until the end of the incubation at 8 °C. The physical properties of the packaging were as follows: oxygen transmission rate < 5 cm3/(m2·24 h·bar), CO2 transmission rate < 25 cm3/m2. During storage, sausages were sampled at three different dates: day 7, day 15, and day 22 (which was considered as an abused storage time).
Trays were frozen at −20 °C prior to color, sensory, and VOC analyses. All other treatments and analyses were performed directly.
Physico-chemical analyses
For all samples, pH measurements were performed on three sausages per tray using a FiveGo FG2 meter with the LE427-S7 electrode (Mettler Toledo, USA) inserted into the sausages. For pork sausages, the gas composition of each tray was assessed with a CheckMate3 analyzer (Dansensor, France). The same procedure was applied for poultry sausages with a digital O2/CO2 Oxybaby analyzer (WITT, Germany). Visual color changes were evaluated in triplicate for each condition, each sampling time and each production batch using a Minolta CR400 ChromaMeter (Grosseron, France) in the CIE-Lab scale. The measurements determined the chromatic coordinates L* (brightness), a* (green-red balance) and b* (blue-yellow balance).
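Color change between sampling times can be summarized from these L*, a*, b* coordinates with a color-difference metric; the sketch below uses the simple CIE76 ΔE*ab formula, which is an assumption on our part, as the text does not state that any ΔE metric was computed.

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference between two (L*, a*, b*) triplets."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Hypothetical day-0 vs day-22 readings for one tray
print(round(delta_e_cie76((52.1, 8.4, 10.2), (48.7, 5.1, 12.9)), 1))  # 5.5
```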
Bacterial collection
Ten to fifty grams of each food-product batch were mixed in a 400 mL, 280 μm stomacher bag (Interscience BagPage, France) with 4 volumes (40 to 200 mL) of BK018HA peptone water (Biokar Diagnostics, France) supplemented with 1% v/v Tween 80 (VWR Chemicals, France). Mixes were treated for 3 min (pork) or 2 min (poultry) in a Masticator homogenizer (IUL, Spain). Then, 32 mL of the filtrate were collected and centrifuged at 500 × g for 3 min at 4 °C to spin down the food matrix fibers and debris. The still-turbid supernatant (∼25 mL) was collected and centrifuged at 3000 × g (pork) or 10,000 × g (poultry) for 5 min at 4 °C to spin down the bacterial cells. The bacterial pellet thus obtained was washed in 1 mL of sterile ultrapure water and collected after centrifugation, as above, for 5 min at 4 °C, to serve directly for DNA extraction or for plating.
Plating
Bacterial counts were estimated from the filtrates obtained after the stomaching step, following the ISO 4833-1:2013 and ISO 15214:1998 methods. Serial dilutions in peptone water were performed and plated on Plate Count Agar (PCA; Oxoid, France, for pork; Biomerieux, France, for poultry) to estimate the total aerobic mesophilic population, and on de Man, Rogosa, and Sharpe (MRS) agar (Oxoid, France, for pork; Biomerieux, France, for poultry) to estimate the mesophilic lactic acid bacteria population. Both were incubated aerobically for 48 h at 30 °C, and population sizes were expressed in CFU g−1 of meat.
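For clarity, the sketch below shows how a plate count from this workflow maps to the log10 CFU·g−1 values reported in Tables S5 and S6. The 0.1 mL plated volume and the way the initial 1:5 stomacher dilution is folded into the dilution factor are assumptions for illustration, not details stated in the text.

```python
import math

def log10_cfu_per_g(colonies, dilution_factor, plated_volume_ml=0.1):
    """Convert one plate's colony count to log10 CFU per gram of meat.

    dilution_factor: total dilution of the original sample on that plate,
    e.g. 5e3 for the 1:5 stomacher dilution followed by two 1:10 steps.
    """
    cfu_per_g = colonies / plated_volume_ml * dilution_factor
    return math.log10(cfu_per_g)

# Example: 42 colonies from 0.1 mL of a 1:5000 overall dilution
print(round(log10_cfu_per_g(42, 5e3), 2))  # 6.32
```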
Sequence read processing, OTU clustering and taxonomic assignments
Paired-end sequences were merged and trimmed as described previously [5]. Data were subsequently imported into the FROGS (Find Rapidly OTUs with Galaxy Solution) pipeline [6] to be cleaned, filtered, clustered into Operational Taxonomic Units (OTUs) and taxonomically assigned using the Silva 128 SSU database [7] and a homemade gyrB database [5].
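Tables S9 and S10 are formatted for analysis with the phyloseq R package [4]; a roughly equivalent Python sketch for loading and summarizing the three components is shown below. File and column names here are hypothetical, chosen only to mirror the described table structure.

```python
import pandas as pd

# Hypothetical file names mirroring the three components bundled in Table S9
otu = pd.read_csv("otu_abundance.tsv", sep="\t", index_col=0)     # OTUs x samples
tax = pd.read_csv("otu_taxonomy.tsv", sep="\t", index_col=0)      # OTU -> lineage
meta = pd.read_csv("sample_metadata.tsv", sep="\t", index_col=0)  # sample -> factors

# Per-sample relative abundance, then aggregation at genus level
rel = otu.div(otu.sum(axis=0), axis=1)
genus = rel.join(tax["Genus"]).groupby("Genus").sum()

# e.g. restrict to pork samples via a hypothetical 'matrix' metadata column
pork_samples = meta.index[meta["matrix"] == "pork"]
print(genus[pork_samples].head())
```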
Sensory descriptive analysis
Samples of poultry and pork sausages, previously stored at −20 °C, were defrosted overnight in a cold room. Just before analysis, about 50 g of sample were transferred into a 250 mL opaque glass vial and sealed with a glass stopper. A sensory panel of experts in profiling techniques (n = 15; at least five previous experiences in food sensory analysis with profiling techniques) was trained for 8 months (minimum 15 h) on altered pork and poultry sausages. During training, the panelists developed a consensus vocabulary in order to separate important attributes describing the off-odor of altered sausages compared with non-altered ones. The same attributes were chosen for pork and poultry sausages. Quantitative descriptive analysis (QDA) [8] was performed using these seven olfactory attributes. The attributes used for the evaluation of the samples were global alteration odor, rancid, eggy/sulfurous, ethereal/fermented fruit, fermented/old dry sausage, old cheese, and sour/pungent. The QDA was performed in ten sessions. In each session, each panelist received 12 randomly coded samples. Samples were presented monadically, according to a balanced design, in vials containing sausage for sniffing. Sensory evaluation was carried out at room temperature (25 ± 2 °C) in isolated booths in a sensory lab. Each panelist rated the global alteration odor intensity and the intensities of the six other odor attributes of each sample on a 10 cm unstructured line scale (from "absent" to "very intense") with sensory evaluation software (Fizz Biosystems, France).
Volatile organic compound (VOC) analysis
Samples of poultry and pork sausages, previously stored at −20 °C, were cut into small pieces in a cold room to minimize the volatilization of VOC. Around 3-4 g of sample were transferred into a 20 mL headspace vial and sealed with a screw cap with a silicone rubber septum. The vial was weighed before and after sampling to determine the exact weight of sausage. Analyses were carried out in triplicate. VOC were determined by HS-GC-MS. All analyses were performed on a Varian 450 gas chromatograph (Varian, USA) coupled to a Varian 225-IT mass spectrometer (Varian), equipped with a CTC Combi PAL autosampler (CTC Analytics AG, Switzerland). Samples were equilibrated by agitation at 60 °C for 20 min prior to injection, and 1 mL was drawn from the headspace for injection into the GC. The HS-GC-MS conditions were as follows: capillary column: DB-624 UI (30 m × 0.25 mm I.D. × 1.4 μm film thickness) (Agilent Technologies, USA); carrier gas: helium at a flow rate of 1.4 mL·min−1; injection port mode: splitless; needle temperature: 60 °C; injection temperature: 220 °C. The oven temperature was programmed from an initial temperature of 40 °C (held 7 min), rising to 50 °C at 4 °C/min (held 1 min), to 70 °C at 4 °C/min (held 1 min), to 120 °C at 3 °C/min (held 2 min) and to 245 °C at 30 °C/min (held 4 min). Transfer line temperature: 250 °C. The temperatures of the manifold and the ion trap were kept constant at 150 °C and 40 °C, respectively. Data were obtained in scan mode at four scans/s over the mass range (m/z) of 35-350 atomic mass units. VOC were identified by comparison of GC retention times and mass spectra with those of standard compounds. Peak area (in arbitrary units) was used as quantitative data to monitor the relative changes of VOC over storage time, potassium lactate concentration and packaging atmosphere. Data were initially subjected to pre-processing using the standard normal variate transformation to facilitate comparison of peaks with different magnitudes. | 2020-03-26T10:17:46.269Z | 2020-03-20T00:00:00.000 | {
"year": 2020,
"sha1": "c92b1520aa4a916d3ab21cd2cc1781296b6a87fd",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.dib.2020.105453",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e12ac8ead9bb602e567193bc459be6da7306b8b0",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
214541644 | pes2o/s2orc | v3-fos-license | A Road Condition Classification Algorithm for a Tire Acceleration Sensor using an Artificial Neural Network
: The automotive industry is experiencing a period of innovation, represented by the term CASE (connected, autonomous, shared, and electric). Among the innovative new technologies for automobiles, the intelligent tire (iTire) collects road surface information through sensors installed inside a tire and informs the driver of the road conditions. iTire can promote safe driving. Various kinds of research on iTire are ongoing, and this paper proposes an algorithm to determine the road surface conditions while driving. Specifically, we have proposed a method for extracting the feature points of a frequency band, by converting acceleration data collected by sensors through fast Fourier transform (FFT) and determining road surface conditions via an artificial neural network. Lastly, the applicability of the algorithm was verified. Nine values obtained through pre-processing (at 0 Hz, and then at each 50 Hz interval from 100-500 Hz, where these data were summed) and a bias value were selected as input variables for the input layer. The hidden layer had six nodes, and the final layer output one of three road conditions (dry, wet, or rough). To enable the edges connecting the nodes to learn optimal weights, the learning rate was set to 0.05 and the target error to 0.01.
Introduction
The automotive industry is experiencing a period of innovation, represented by the term CASE (connected, autonomous, shared, and electric). This term symbolizes the application of technologies from various fields to automobiles [1][2][3]. Considerable effort has also been directed toward improving driving stability and convenience by utilizing various vehicle technologies, such as sensors and wired/wireless communications [4][5][6]. Tires, which are in direct contact with the road surface, greatly affect the stability of the vehicle. As a technology that improves vehicle stability and driver convenience, tire-pressure monitoring systems (TPMSs) measure tire pressure and relay that information to the driver [7]. TPMS has been mandatory for automobiles sold in the United States since 2007, and in Korea since 2015 [8,9].
TPMS can prevent tire-related accidents [10,11]. However, it is also important that the driver is aware of road conditions, such as whether the surface is normal, unpaved, wet, snow-covered, or icy [12,13]. Given that tires are in direct contact with the road surface, information can be obtained not only on tire-related variables, such as pressure, temperature, wear condition, and tread depth, but also on the degree of road surface friction and the pavement condition [14]. Tires that collect such information through an internal sensor and relay data on the road surface conditions to the driver are called intelligent tires (iTires). Cars equipped with iTire technology can determine, on behalf of the driver, road conditions that the driver may not perceive, and ensure adequate steering and braking, thereby promoting safer driving [15,16].
Tire-based methods for sensing and classifying road conditions include utilizing the slip ratio (given by the speed ratio between the driving and driven wheels of a vehicle), applying information acquired visually, and using an ultrasonic sensor. This study used data on vibrations sensed by the tires. Kanwar et al. determined road surface conditions through a fuzzy logic-based method, in which the maximum frictional force was calculated using tire load and slip data [17]. However, that study used simulation data rather than data obtained in a real vehicle environment, so further studies are required for validation. Niskanen et al. conducted a study to distinguish between two road surfaces with different friction conditions by attaching an acceleration sensor to the inside of a tire [18]. Their road condition classification method measures acceleration within the leading-edge section of the tire, i.e., the section before the sensor contacts the ground, in the frequency domain. However, it only classifies the road surface as either concrete or ice, two surface types that show large differences. Hanatsuka et al. classified road surface conditions using a support vector machine (SVM) learning technique [19]. Their method provided high road condition classification accuracy when used with tires of various sizes, but it is difficult to apply in real time due to the large computational load. In a subsequent study, a road condition classification method that can be applied in real time was introduced, based on high-speed kernel computation [20]. However, a high-frequency acceleration sensor (5 kHz or higher) is required, which leads to high energy consumption and costs.
In this paper, we describe the iTire system and propose an algorithm for determining road conditions using data from an acceleration sensor installed inside a tire. Specifically, our algorithm involves extracting the feature points of the frequency band, by converting the collected acceleration data through fast Fourier transform (FFT) and then applying an artificial neural network (ANN) analysis. Road conditions were classified as dry, wet, or rough. Finally, we verified the applicability of the algorithm using acceleration data accrued by the sensor for an actual vehicle.
This paper consists of five sections. Section 2 describes the iTire system. Section 3 provides the method for extracting feature points and an algorithm for determining road surface conditions using ANN analysis. Section 4 describes the experimental environment and the verification results. The final section reports the conclusions and future research plans.
iTire System
In general, iTires utilize the output of an acceleration sensor attached inside the tire to determine road conditions, which are relayed to the driver. The communication modality for iTires is usually wireless, because the sensor attached inside the rotating tire cannot be wired to the processing unit installed in the main body of the vehicle. Among the various wireless communication methods, Bluetooth is the most widely used because of its reliability and convenience. Figure 1 shows a schematic of the iTire system, including the accelerometer, the pathway for transmission of the acceleration data, and the road condition classification module that receives the data.
The acceleration sensor can measure up to 200 g on three axes and outputs data at a 1 kHz sampling rate; it is equipped with a Bluetooth transmitter. The sensor converts the measured analog values into digital values and transmits them in Bluetooth (wireless) format. Bluetooth 4.2 can transmit up to a distance of 300 ft (91 m), at a transmission rate of up to 1 Mbps. Bluetooth 5.0, which was recently developed, can achieve a transmission range and rate of 1200 ft (366 m) and 2 Mbps, respectively.
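A quick arithmetic check shows why this link budget works; the 16-bit sample size below is a hypothetical assumption, since the sensor's resolution is not stated.

```python
# Required raw throughput for streaming the accelerometer (assumed 16-bit samples)
sampling_rate_hz = 1_000
axes = 3
bits_per_sample = 16  # assumption; resolution not stated in the text

required_bps = sampling_rate_hz * axes * bits_per_sample
print(required_bps)        # 48000 bps
print(required_bps / 1e6)  # 0.048 Mbps, well under Bluetooth 4.2's 1 Mbps
```

Even allowing for protocol framing overhead, the raw data stream therefore fits comfortably within the quoted Bluetooth rates.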
The road condition classification system consists of a Bluetooth receiver (which receives the sensor information), a function that processes the received acceleration sensor values through FFT, and a function to determine the road conditions using ANN analysis. The data periodically transmitted from the acceleration sensor via Bluetooth are received by the Bluetooth receiver and transferred to a signal pre-processing unit that performs feature extraction (by converting the measured tire acceleration sensor values via FFT). Finally, ANN analysis determines the road surface conditions based on the features generated by the signal pre-processing unit. Figure 2 illustrates the algorithm used by the iTire system, which involves data acquisition via the acceleration sensor, processing of the signal using FFT, and determining the road conditions using ANN analysis. After initialization, the road condition classification algorithm acquires K signal values from the acceleration sensor. Determining the road condition is made more difficult by vibrations that increase with the shock applied to the tire, and by the noise generated in a real road environment. K was determined through trial and error; 2000 values were acquired over 2 s at 1 kHz. Once K acceleration sensor values are acquired, they are processed via FFT. The values are divided into frequency bands, with segmentation at 50 Hz intervals from 100-500 Hz. The 0 Hz band is a direct-current component providing velocity information about the vehicle. Values at 1-100 Hz were considered unsuitable for determining the road condition and were thus excluded from the analysis. The power spectrum values of each 100-500 Hz segment are summed. Figure 3 shows the results of summing each band, in terms of the road conditions (dry, wet, or rough) and velocity (40, 60 or 80 kph). In the figure, the value of the 0 Hz band tracks the velocity of the vehicle well, although it also varies somewhat with the road conditions. The nine values obtained at 0 Hz, and at each interval in the range 100-500 Hz, were normalized so that they had a value between 0 and 1. These data were then used as input for the ANN, which determined the road condition (dry, wet, or rough). The back-propagation algorithm [21], an optimization technique, was used as the ANN learning algorithm. The hidden layer of the ANN consisted of six hidden nodes. The final output value of the ANN corresponded to the road conditions (dry, wet, or rough); the ANN was trained to converge to 1 for the likely road condition and 0 for the other road conditions. In total, 40% of the driving data were used for learning, and the remaining 60% for testing the results of the learning. A sketch of this pre-processing step follows.
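As a minimal illustration of this pre-processing, the numpy sketch below computes the nine band features from one 2 s window. The min-max normalization shown is an assumption, since the paper does not state its exact normalization scheme.

```python
import numpy as np

FS = 1_000   # sampling rate (Hz)
K = 2_000    # samples per window (2 s)

def band_features(accel, fs=FS):
    """Nine features: the 0 Hz component plus power summed in eight
    50 Hz bands spanning 100-500 Hz."""
    power = np.abs(np.fft.rfft(accel)) ** 2
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
    feats = [power[0]]  # 0 Hz (DC) component: velocity information
    for lo in range(100, 500, 50):
        mask = (freqs >= lo) & (freqs < lo + 50)
        feats.append(power[mask].sum())
    return np.array(feats)

def normalize(x):
    """Scale the nine features to [0, 1]; min-max is assumed here."""
    return (x - x.min()) / (x.max() - x.min())

feats = normalize(band_features(np.random.default_rng(0).normal(size=K)))
```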
Performance Evaluation of the Road Condition Classification Algorithm
Acceleration sensor values were collected from a dedicated test site (proving ground, PG) that simulated various road conditions. These values were used to evaluate the learning performance and the road condition classification algorithm. Figure 4 shows the experimental setup of the iTire system for collecting acceleration data. Figure 4a shows the acceleration sensor board, consisting of a sensor module, an analog-digital converter, and a Bluetooth transmitter. Figure 4b shows the road condition classification module, consisting of a Bluetooth receiver, a signal pre-processing unit, and an ANN. Figure 4c shows the acceleration sensor board attached to the inside of a tire, and Figure 4d shows the power supply module (center of the wheel). The power supply provides the power needed to operate the acceleration sensor, for which a 7.4 V, 2000 mAh Li-ion battery was used.
To acquire sensor data, three conditions (dry, wet, and rough road surfaces) were set up in the dedicated PG. Here, dry refers to a typical asphalt road surface, wet to a road surface on which a water film about 8 mm deep was maintained, and rough to a section paved with rough asphalt. In total, 426,000 data points were collected through repeated driving on straight road surfaces under various surface (dry, wet, rough) and velocity (40, 60, 80 kph) conditions. Table 1 shows the amount of data collected for evaluating the performance of the road condition classification algorithm. Because the dedicated PG had short wet and rough sections, fewer data were collected per pass under those conditions than in the dry section. Also, as speed increased, the passing time was shorter, so fewer data were collected per pass.

Table 1. Number of acceleration data points measured while driving in the proving ground (PG). [Columns: road condition; velocity (kph); number of data points. Data rows not recoverable in this copy.]

Figure 5 shows example acceleration sensor data for the dry road, before they were subjected to FFT processing. In the figure, "Leading edge" indicates the point where the acceleration sensor comes into contact with the road surface, "Contact" denotes the section maintaining road contact, and "Trailing edge" indicates the point where road contact ends. Here, the x-axis indicates the front and rear parts of the vehicle, and the y-axis the left and right parts. The z-axis is the vertical dimension. In Figure 5a, the acceleration on the x-axis shows that the output decreased at the leading edge and returned to the original output waveform after passing the trailing edge. Acceleration on the y-axis in Figure 5b, and on the z-axis in Figure 5c, shows that the output was lowest at the leading edge and highest at the trailing edge. Acceleration on the x- and y-axes showed similar characteristics, while acceleration on the z-axis showed different values. These acceleration sensor values were used as input for determining the road surface condition via ANN analysis after FFT processing.

In this paper, the relationship between the acceleration data measured through iTire and the road surface condition was modeled using a multilayer perceptron (MLP). Nine values obtained through pre-processing (at 0 Hz, and then at each 50 Hz interval from 100-500 Hz, where these data were summed) and a bias value were selected as input variables for the input layer. The hidden layer had six nodes, and the final layer output one of three road conditions (dry, wet, or rough). To enable the edges connecting the nodes to learn optimal weights, the learning rate was set to 0.05 and the target error to 0.01.
A sigmoid was chosen as the activation function. About 40% of the collected data were used for learning, and the remaining 60% were used as input to verify performance. Table 2 shows the performance evaluation results of the road condition classification algorithm. As the table shows, performance was evaluated based on at least 32,000, and up to 160,000, acceleration sensor values, depending on the road conditions. The target value representing the road surface condition was set to 1. The target error, which was the basis of the road classification, was defined as the value obtained after subtracting the output value of the ANN from the target value. In one dataset for the dry road condition (2000 data points), the target error was calculated as 0.3062 (= 1 − 0.6938), which we determined to be a recognition error (in this paper, the acceptance criterion for classifying a road surface condition was an output value of more than 0.9). The reason for this large error value is not clear, but it is presumed that some of the driving occurred on another road condition during the course of the experiment, or that a mistake was made during data processing. Since an error occurred in 1 of 80 datasets (160,000 data points) under the dry road condition, the road condition classification accuracy was 98.75%. No errors occurred under the wet or rough road conditions; therefore, their accuracies were 100%.
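For concreteness, the numpy sketch below matches the stated configuration (nine inputs plus a bias, six hidden nodes, three sigmoid outputs, learning rate 0.05). The weight initialization and the squared-error delta rule are assumptions, as the paper does not specify these details.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# 9 inputs (bias folded into b1 and b2), 6 hidden nodes, 3 outputs
W1 = rng.normal(0.0, 0.1, (9, 6)); b1 = np.zeros(6)
W2 = rng.normal(0.0, 0.1, (6, 3)); b2 = np.zeros(3)
LR = 0.05  # learning rate stated in the paper

def train_step(x, target):
    """One back-propagation update; x: (9,) features, target: one-hot (3,)."""
    global W1, b1, W2, b2
    h = sigmoid(x @ W1 + b1)
    y = sigmoid(h @ W2 + b2)                 # outputs for dry / wet / rough
    dy = (y - target) * y * (1.0 - y)        # output-layer delta (squared error)
    dh = (dy @ W2.T) * h * (1.0 - h)         # hidden-layer delta
    W2 = W2 - LR * np.outer(h, dy); b2 = b2 - LR * dy
    W1 = W1 - LR * np.outer(x, dh); b1 = b1 - LR * dh
    return y

# Decision rule from the paper: accept a road condition if its output > 0.9.
```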
Overall, the road condition classification algorithm of the iTire system proposed herein showed excellent performance. Therefore, it is expected that the proposed road condition classification algorithm could be applied to a real vehicle, thereby improving driving safety.
Conclusions
This paper proposed an algorithm that can determine the road surface condition of a driving vehicle using an acceleration sensor attached to the inside of a tire. Specifically, a method was proposed for generating an input to an ANN after applying FFT. The algorithm determines the road surface condition using the acceleration sensor values extracted from the vehicle environment. Its applicability was verified, based on which the following conclusions can be drawn.
First, it was confirmed that the iTire system can be functionally implemented using a sensor board, a classification module, and a power supply module. Bluetooth wireless communication was confirmed as being suitable for collecting sufficient acceleration sensor data to determine the road conditions.
Second, it was confirmed that the acceleration sensor data could be used to determine the road surface conditions via signal processing. Specifically, the acceleration sensor output was divided into signals representing velocity and road surface condition.
Third, the proposed method for determining road surface conditions based on ANN analysis was highly accurate, suggesting its practical applicability to a real vehicle.
However, in this paper, only three kinds of road conditions (dry, wet, rough) were considered; therefore, further studies are needed to classify more diverse environments, such as icy or snow-covered roads. Thus, more data are required, as well as research on more advanced learning processes, such as deep learning. Finally, further research on the proposed iTire system is necessary to achieve sustainability, via power supply methods such as energy harvesting. | 2020-03-05T11:03:11.192Z | 2020-02-28T00:00:00.000 | {
"year": 2020,
"sha1": "75faef54364e1915a50a95f8a11b00565bcaf71d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-9292/9/3/404/pdf?version=1584583538",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "ebf0182aacb10f06d935caf1f750799419850359",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
252091557 | pes2o/s2orc | v3-fos-license | Methodologic attributes of quality improvement studies in neonatology: a systematic survey
Introduction Quality improvement (QI) is a growing field of inquiry in healthcare, including neonatology. However, there is limited information on the study setting, and the methodologic approaches used to develop, implement and evaluate QI interventions in neonatology studies. In this study, we describe these intervention characteristics and approaches. Methods Articles were taken from a previous publication. There, we searched MEDLINE for publications of QI studies from 2016 to 16 April 2020. We retrieved all relevant full-text publications and sampled 100 of these articles for data abstraction, stratified by the year of publication. For each QI study, we described several methodological characteristics that included: the clinical topic of QI, setting, whether the study was multicentre, stakeholder engagement, root cause analysis and related problem identification methods, implementation techniques for QI interventions, types of outcomes and statistical analysis methods used. Results We assessed 100 studies; most were conducted in the USA (56%). Academic settings and multicentre settings comprised 44% and 24% of studies, respectively. Most studies reported stakeholder engagement (81%), but infrequently reported engagement with leadership (32%) and caregivers (10%). Frequently used techniques for implementing interventions include provider education (82%), formal QI methods (42%) and audit, feedback and benchmarking (40%). Both patient-important clinical outcomes (78%) and process outcomes (89%) were frequently reported. P values were frequently reported (80%), but other statistical techniques were infrequently used. Conclusion QI studies in neonatology use diverse multicomponent interventions. Reporting of these methodologic details can be useful in designing, implementing and evaluating QI studies in clinical practice.
INTRODUCTION
Quality improvement (QI) in healthcare, defined broadly as systematic activities conducted to achieve sustained improvements in patient outcomes and health systems performance, has become an increasingly active field of inquiry. 1 There is a wide variation in the definition, objectives and methodologic features of studies that can be termed as a 'quality improvement' study. The Agency for Healthcare Research and Quality largely sees QI as activities that bridge the gap between the 'ideal' evidence-based standards and local clinical practice. 2 The SQUIRE V.2.0 QI reporting guideline considers QI to be any 'systematic effort to improve the quality, safety and value of healthcare', 3 which can also include cohort studies, randomised controlled trials and comparison studies. QI is also interwoven with implementation science, health services research and many other study types. 4 Hence, it may take on research methods and publication practices of these fields of inquiry.
Understanding how QI work is defined and conducted amid this diversity can inform efforts to improve the reporting, effectiveness and adaptation of QI activities and evidence in clinical practice. Previous literature reviews have examined the reporting of methodologic attributes for specific types of QI studies, 5 6 summarised the various QI methodologies that are used and examined QI reporting among the general healthcare literature, 7 and in the specialties of diabetes, 8 antimicrobial selection 9 and perioperative care. 10 However, to our knowledge, a methodologic review of QI studies has not been performed in neonatology.
Furthermore, numerous reporting guidelines and QI primers have been published to guide clinicians on how to contextualise, develop, implement and evaluate QI studies in hospital and community settings. These methodologic attributes include identifying change ideas through stakeholder engagement and problem-framing methods, implementing interventions using QI methodologies such as plan-do-study-act cycles or total quality management, and evaluating the project using outcome, process and balancing measures and statistical analysis using statistical process controls. [11][12][13][14] Understanding how recently published QI work has incorporated these characteristics can inform future directions in QI reporting and effectiveness efforts.
The objective of our study is to describe the characteristics of the setting, intervention development, implementation and evaluation approaches of QI studies in neonatology. Herein, we will quantify how often various techniques for developing, implementing and evaluating QI activities are used or reported in published articles.
METHODS
This study uses articles from a previous systematic survey of the literature. 15 The literature search, full-text screening and random selection of articles are described in a previous publication. 8 In summary, we searched the Medline database for publications from 2016 to 16 April 2020, as defined by the 'year of publication' field using the search strategy shown in online supplemental appendix A1. Subsequently, for the title/abstract and full-text screening, we sought to include all QI articles. Here, we defined QI as any study that described a systematic effort to improve the quality, safety or value of healthcare, in line with the definition stated in SQUIRE V.2.0, the current guideline for reporting quality improvement studies. 3(p0) After obtaining all relevant full-text articles, we selected a random sample of 100 articles to assess, stratified by the year of publication. The random selection process involved first determining the number of articles that were published each year (2016-2020), and then assigning a probability of selecting each stratum. Second, within each year-of-publication stratum, we sorted the articles randomly using Excel. Finally, we sampled the articles in their sorted order, based on the probability that the article belonged to a specific stratum. A flow diagram of our selection process, adapted from our previous study, is shown in figure 1.
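As a rough illustration, the sketch below implements a proportional, stratified draw of 100 articles by publication year; the authors' actual procedure used random sorting in Excel, and their handling of rounding is not described.

```python
import random

def stratified_sample(articles_by_year, n_total=100, seed=0):
    """Sample ~n_total articles, stratified proportionally by publication year.

    articles_by_year: dict mapping year -> list of article IDs.
    Rounding may leave the total a little off n_total.
    """
    random.seed(seed)
    pool = sum(len(v) for v in articles_by_year.values())
    sample = []
    for year, articles in articles_by_year.items():
        k = round(n_total * len(articles) / pool)
        sample.extend(random.sample(articles, min(k, len(articles))))
    return sample
```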
Determining attributes to evaluate
The methodologic attributes to assess, such as stakeholder engagement or statistical methods, and the categories within each attribute, such as the type of statistical method used, were selected by ZJH. These attributes and their categories were adopted from published QI primers by Silver et al 14 (quality improvement in emergency medicine), Chartier et al [11][12][13] (quality improvement in haemodialysis), and Shojana et al 16 (Closing the Quality Gap reports). After selecting these attributes and their categories, ZJH consulted with LT, SeH and GF to determine whether the attributes were important features of quality improvement reports and whether they were feasible to evaluate. The categories for each methodologic attribute were further adjusted as the review progressed.
Some of the attributes that describe the study were chosen to examine specific questions of interest. Most attributes were chosen as they represented the design/framing, implementation and evaluation of QI interventions. These attributes, our rationale for selecting them and our questions of interest are shown in table 1.
A posteriori attributes
Following comments from peer reviewers requesting deeper analysis, we added additional categories for 'intervention implementation', based on intervention categories derived from Chartier et al and Shojana et al. 12 16 The categories of intervention implementation, and their definitions, are shown in table 2.
Data abstraction
We provided training to the student reviewers, who then each pilot tested two articles to assess their agreement with ZJH and with each other. Each reviewer was provided a codebook that briefly defined each methodologic attribute. Following the pilot testing, we updated the codebook based on feedback, to ensure a shared understanding. Finally, the remaining articles were divided between three student reviewers and assessed independently, in parallel with ZJH. Disagreements were resolved by ZJH acting as arbiter, or by discussion for more subjective cases.
Analysis/synthesis
We summarised the characteristics of the included studies using descriptive statistics, with categorical variables reported as frequencies. ZJH noted the clinical topic of quality improvement of each study using free text, and generated word clouds to describe them.
Study setting and context
The frequency with which each item was reported is shown in table 3. Most studies were conducted in the USA (56%) and considered the individual patient as the unit of intervention (70%). A large number (44%) of publications reported that their activities were conducted in an academic or teaching hospital setting, though we note that this proportion may be higher, as studies may not report this detail. Only 24% of studies were multicentre; these included QI interventions implemented at multiple centres, QI programmes implemented in hospitals participating in a statewide quality improvement network, or programmes implemented in multiple communities. The clinical topics of QI studies were very diverse, and there were no dominating clinical topics (see figure 2). Nearly half (48%) of studies were waived for ethics approval and, hence, were viewed by the institutional review board as quality improvement studies. More studies may have been waived but did not declare it.

Table 1 Methodologic attributes assessed, the question each addresses, and the rationale for assessing it
Clinical topic of QI. Rationale: understand which problems are being addressed and provide insights into which problems may not be frequently addressed. [Question text not recoverable in this copy.]
Healthcare setting. Question: was the study conducted in an academic/teaching hospital? Rationale: are quality improvement activities mostly implemented in an academic hospital setting, which has teaching and research mandates?
Multicentre studies. Question: are studies being conducted at multiple hospitals/communities/centres? 2 9 Rationale: how often do published QI activities involve, or report on, collaboration between multiple hospitals?
Ethics approval. Question: did a review board approve, or waive, the ethics requirement? Rationale: how often did QI studies receive an ethics waiver and, hence, get considered a QI study by the review board?
Root cause analysis (problem identification method). Question: did the study use root cause analysis or related methods (Fishbone/Ishikawa diagram, Pareto charts, or process mapping) when identifying the specific QI problem to tackle in their local setting? 11 Rationale: root cause analysis, or the three related methods, is a technique for understanding the problem and adapting a locally tailored solution; it is emphasised in QI primers.
Stakeholder engagement. Question: were healthcare providers, caregivers or healthcare leaders involved in the design and adaptation of the QI activities? Rationale: QI projects succeed when clinical expertise is used, the activities are backed by the hospital management, and when parents or caregivers are involved in shaping the QI programme. 11 14
Components of QI interventions. Question: which modalities of implementing interventions were used? (See table 2 for definitions.) Rationale: develop an understanding of the approaches to implementing QI interventions, and how often they are used.
Measures. Question: what types of outcomes were evaluated and how often were they reported? (1) Patient-important outcome measures: outcomes that directly affect a patient's health and well-being. 12 [Remaining measure types not recoverable in this copy.]
Intervention development, implementation and evaluation
The frequency with which each intervention implementation technique was used or reported is shown in figure 3. Few studies used root cause analysis and its related techniques to explore and identify problems requiring QI. Most studies reported stakeholder engagement (81%). However, few reported engagement with hospital leadership (32%), and even fewer with parents and caregivers (10%). The majority of studies included healthcare provider education and training (76%) in implementing their intervention. Other implementation strategies that were used often (by more than one-third of studies) include clinical guidelines, standardisation of practices, audit, feedback and benchmarking, communication (including teamwork and simulation), and QI techniques such as Plan-Do-Study-Act cycles and Total Quality Management.
Most studies incorporated patient-relevant outcome measures (78%) and measures that examine or monitor the processes of care and implementation (89%). Most studies also assessed the statistical significance of their interventions (80%). However, the use of other statistical methods, including CIs, statistical process control and adjustment for confounders through statistical modelling, was seldom mentioned. Therefore, most studies did not examine the variability of their intervention effect estimates, assess for special cause variation over time, or adjust for confounders when estimating intervention effects.
DISCUSSION
This systematic survey investigated various methodologic approaches in the development, implementation and evaluation of quality improvement reports in the neonatology literature. Studies were conducted mostly in the USA, targeted patients as the unit of intervention, mostly engaged healthcare providers as stakeholders, used education and training as an implementation technique, frequently included patient-important outcome and process-of-care measures, and frequently reported p values in their statistical analysis.

Table 2 Components of quality improvement interventions and their definitions 12 16
Methods for implementing QI interventions: Definition
Checklist: a checklist is used as part of the quality improvement intervention.
Clinical guideline: the word 'guideline' is mentioned in the intervention section, and the context implies that guidelines are used to implement the intervention.
Standardisation of care processes: the word 'standardisation' or similar wording (such as 'standard', 'standardised') is mentioned in the intervention section, and the context implies that standardisation is used to implement the intervention.
A key strength of this study is that we assessed methodologies of design, implementation and evaluation all together. This approach allows researchers to better understand the methodologies employed in QI studies across their entire lifecycle, which can inform future directions for QI education efforts and publication practices. In comparison, previous QI literature syntheses did not examine these attributes all together, or only focused on specific types of QI methodologies. 5 7 8 10 A key limitation in accurately assessing the frequency of numerous attributes is the way they are reported. On study setting, we may have undercounted the number of studies that occurred in academic or teaching hospitals, because the manuscript did not always mention this detail in the study setting. Furthermore, we were unable to assess how many studies took place in community non-academic hospitals, because many studies did not report whether their studies occurred in community hospitals or not. Thus, we could not claim whether QI studies were dominated by academic/teaching hospitals or not. The same issue arose for ethics approval, where we cannot ascertain whether all studies reported their waiver of consent. Likewise, many studies may not have reported their engagement with leadership or parental stakeholders, as such information is not required directly by the SQUIRE V.2.0 reporting guidelines. 3(p2)

Herein, we have identified several aspects of QI study design, implementation and evaluation that can be targeted for future improvement efforts. First, studies can engage in more stakeholder engagement with leadership and caregiver stakeholders. Successful engagement with hospital leadership may ensure that leadership will use their influence, and devote resources, towards conducting a rigorous, successful and sustainable QI project. 14 17 Likewise, engagement with caregivers is central to ensuring patient-centred healthcare delivery in the neonatal context. 1 Reporting these details will allow researchers to share information and appraise engagement strategies.
Second, efforts for improving QI in neonatology can focus on improving the reporting, implementation and effectiveness of the intervention strategies we identified above: clinical practice guidelines, standardisation of care, provider education and training, audit, feedback and benchmarking, and communication. Each of these strategies has its own methodologic nuances and frameworks. Education efforts can focus on these techniques to better support improving QI.
Finally, the infrequent use of other statistical techniques among publications indicates that improvements are needed in the statistical analysis of QI studies. Studies can enhance their statistical analysis by employing statistical process control, as doing so would imply that studies both monitored outcomes continuously and used statistical techniques to infer whether the intervention achieved a sustained change in outcome. 18 Interventions focused on pre-post change can also benefit from employing CIs, as these indicate the magnitude of the change seen, 19 which is more in line with the goals of improvement. A minimal sketch of one such control chart follows.
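As one concrete example of the statistical process control recommended here, the sketch below computes limits for an individuals (XmR) chart from a hypothetical monthly outcome series; the 2.66 factor is the standard XmR constant (3/d2 with d2 = 1.128).

```python
import statistics

def xmr_limits(values):
    """Lower limit, centre line and upper limit for an individuals (XmR) chart.

    Points outside the limits suggest special-cause variation, i.e. a real
    shift rather than routine month-to-month noise.
    """
    center = statistics.mean(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = statistics.mean(moving_ranges)
    return center - 2.66 * mr_bar, center, center + 2.66 * mr_bar

# Hypothetical monthly rates before a QI intervention; post-intervention
# months would be judged against these baseline limits.
baseline = [12.1, 11.4, 12.8, 11.9, 12.3]
print(xmr_limits(baseline))
```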
CONCLUSION
Overall, QI studies in neonatology are characterised by diverse forms of interventions. Additionally, this study has identified areas for further methodologic work, such as the reporting of stakeholder engagement with leaders and caregivers, targeting specific intervention techniques and approaches to statistical analysis. These improvements can strengthen the effectiveness of quality improvement activities and contribute to advanced healthcare practices for providers and improved health outcomes for patients.
Contributors ZJH drafted the manuscript, designed the study, reviewed articles and made the final decision on data abstraction. LT mentored ZJH and provided expertise on manuscript writing and items to include in data abstraction. SeH and GF were consulted in formulating items for the data abstraction form. CH, JYW and MM acted as second reviewers for ZJH. All authors read, provided feedback and approved the final manuscript.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Patient consent for publication Not applicable.
Ethics approval Not applicable.
Provenance and peer review Not commissioned; externally peer reviewed.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, | 2022-09-07T02:28:51.652Z | 2022-09-01T00:00:00.000 | {
"year": 2022,
"sha1": "530b91785881fc5022fce2f52363daa41eff78ab",
"oa_license": "CCBYNC",
"oa_url": "https://bmjopenquality.bmj.com/content/bmjqir/11/3/e001898.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "049148f7297a395a2fe7ff2e60e45cbb18765513",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
263787361 | pes2o/s2orc | v3-fos-license | Ask, and shall you receive?: Understanding Desire Fulfillment in Natural Language Text
The ability to comprehend wishes or desires and their fulfillment is important to Natural Language Understanding. This paper introduces the task of identifying if a desire expressed by a subject in a given short piece of text was fulfilled. We propose various unstructured and structured models that capture fulfillment cues such as the subject's emotional state and actions. Our experiments with two different datasets demonstrate the importance of understanding the narrative and discourse structure to address this task.
Introduction
Understanding expressions of desire is a fundamental aspect of understanding intentional human behavior. The strong connection between desires and the ability to plan and execute appropriate actions was studied extensively in the contexts of rational agent behavior [16] and modeling human dialog interactions [19].
In this paper we recognize the significant role that expressions of desire play in natural language understanding. Such expressions can be used to provide rationale for character behaviors when analyzing narrative text [18,10], extract information about human wishes [17], explain positive and negative sentiment in reviews, and support automatic curation of community forums by identifying unresolved issues raised by users.
We follow the intuition that at the heart of the applications mentioned above is the ability to recognize whether the expressed desire was fulfilled or not, and suggest a novel reading comprehension task: given a text, denoted the Desire-expression (e.g., "Before Lenin died, he said he wished to be buried beside his mother."), containing a desire ("be buried beside his mother") expressed by the Desire-subject ("he"), and the subsequent text (denoted Evidence fragments, or simply Evidences) appearing after the Desire-expression in the paragraph, we predict whether the Desire-subject was successful in fulfilling their desire. Fig. 1 illustrates our setting.
Similar to many other natural language understanding tasks [8,28,2], performance is evaluated using prediction accuracy. However, unlike tasks such as text categorization or sentiment classification, which rely on lexical information, understanding desire fulfillment requires complex inferences connecting the expression of desire, actions affecting the Desire-subject, and the extent to which these actions contribute to fulfilling the subject's goals. For example, in Fig. 1 the action of 'preserving' Lenin's body led to non-fulfillment of his desire.
We address these complexities by representing the narrative flow of Evidence fragments, and assessing whether the events (and emotional states) mentioned in this flow contribute to (or provide indication of) fulfilling the desire expressed in the preceding Desire-expression. Following previous work on narrative representation [4], we track the events and states associated with the narrative's central character (the Desire-subject).
While this representation captures important properties required by the desire-fulfillment prediction task, such as the actions taken by the Desire-subject, it does not provide us with an indication of the outcome of these actions. Recent attempts to support supervised learning of such detailed narrative structures by annotating data [11] result in highly complex structures, even for restricted domains. Instead, we model this information by associating a state indicating whether the outcome of an action (or the mention of an emotional state) provides evidence of making progress towards achieving the desired goal. We model the transitions between states as a latent sequence model, and use it to predict whether the value of the final latent state in this sequence is indicative of a positive or negative prediction for our task.
We demonstrate the strength of our approach by comparing it against two strong baselines. First, we demonstrate the importance of analyzing the complete text by comparing with a textual-entailment-based model that analyzes individual Evidence fragments independently. We then compare our latent structured model, which incorporates the narrative structure, with an unstructured model, and show improvements in prediction performance. Our key contributions are:
Problem Setting
Our problem consists of instances of short texts (called Desire-expressions), collected so that each contains an indication of a desire (characterized using a Desire-verb) by a Desire-subject(s). The Desire-verb is identified by one of the following verb phrases: 'wanted to', 'wished to' or 'hoped to' 1. The three Desire-verbs were identified using lexical matches, while the Desire-subject(s) was marked manually. Each Desire-expression is followed by five or fewer Evidence fragments (or simply Evidences). The Desire-expression and the Evidences (in order) consist of individual sentences that appeared contiguously in a paragraph. We address the binary classification task of predicting the Desire Fulfillment status, i.e., whether the indicated desire was fulfilled in the text, given the Evidences and the Desire-expression with the Desire-verb and Subject identified. Fig. 1 shows an example of the problem.
Inference Models for Understanding Desire Fulfillment in Narrative Text
In this section we present three textual inference approaches, each following different assumptions when approaching the desire-fulfillment task, thus allowing a principled discussion about which aspects of the narrative text should be modeled.
Our first approach assumes that the indication of desire fulfillment will be contained in a single Evidence fragment. We test this assumption by adapting the well-known Textual Entailment task to our setting, generating entailment candidates from the Desire-expression and Evidence fragments.
Our second approach assumes the decision depends on the Evidence text as a whole, rather than on a single Evidence fragment. We test this assumption by representing relevant information extracted from the entire Evidence text. This representation (depicted in Fig. 3) connects the central character in the narrative, the Desire-subject, with their actions and emotional states exhibited in the Evidence text. This representation is then used for feature extraction when training a binary classifier for the desire-fulfillment task.
Our final model provides a stronger structure for the actions and emotional states expressed in the Evidence text. The model treats individual Evidence fragments as parts of a plan carried out by the Desire-subject to achieve the desired goal, and makes judgments about the contribution of each step towards achieving the desired goal.
Textual Entailment (TE) Model
Recognizing Textual Entailment (RTE) is the task of recognizing the existence of an entailment relationship between two text fragments [8]. From this perspective, a textual-entailment-based method might be a natural way to address the desire fulfillment task. RTE systems often rely on aligning the entities appearing in the text fragments. Hence, we reduce the desire fulfillment task to several RTE instances consisting of text-hypothesis pairs, by pairing the Desire-expression (hypothesis) with each of the Evidence fragments (text) in that example. However, we "normalized" the Desire-expression so that it would be directly applicable to the RTE task. For example, the Desire-expression "One day Jerry wanted to paint his barn." gets converted to "Jerry painted his barn.". This process followed several steps (a naive sketch follows the list):
• If the Desire-subject is pronominal, replace it with the appropriate named entity when possible (we used the Stanford CoreNLP coreference resolution system) [23].
• Ignore the content of the Desire-expression appearing before the Desire-subject.
• Remove the clause containing the Desire-verb ('wanted to', 'wished to', etc.), and convert the succeeding verb to its past tense.
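The toy sketch below illustrates the last two normalization steps; the tiny past-tense lexicon and the regular expression are stand-ins for the coreference resolution and morphological tools actually used.

```python
import re

PAST = {"paint": "painted", "be": "was", "help": "helped"}  # toy lexicon

def normalize(desire_expression, subject):
    """Naive normalization of a Desire-expression into an RTE hypothesis."""
    # Ignore content before the Desire-subject
    text = desire_expression[desire_expression.index(subject):]
    # Remove the Desire-verb clause and past-tense the following verb
    m = re.search(r"(wanted|wished|hoped) to (\w+)", text)
    verb = m.group(2)
    text = text.replace(m.group(0), PAST.get(verb, verb + "ed"), 1)
    return text

print(normalize("One day Jerry wanted to paint his barn.", "Jerry"))
# -> "Jerry painted his barn."
```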
The desire was considered 'fulfilled' if the RTE model predicted entailment for at least one of the text-hypothesis pairs of the example. E.g., the model could infer that the normalized Desire-expression mentioned above would be entailed by the Evidence fragment "It took Jerry six days to paint his barn that way.", and hence it would conclude that the desire was fulfilled. Table 1 shows the performance of BIUTEE [30,21], an RTE system, on the two datasets (see Sec. 4) used in our experiments 2. Our results show that the RTE model performs better with normalization. We use this model (with normalization) as a baseline in Sec. 5.
Unstructured Model
The Textual Entailment model described above assumes that the Desire-expression will be entailed by one of the individual Evidences. This assumption may not hold in all cases. First, the indication of desire fulfillment (or its negation) can be subtle and expressed using indirect cues. More commonly, multiple Evidence fragments can collectively provide the cues needed to identify desire fulfillment. This suggests a need to treat the entire text as a whole when identifying cues about desire fulfillment.
We begin by identifying the Desire-subject and the desire expressed (using the 'focal-word' described in Sec. 3) in the Desire-expression. Thereafter, we design several semantic features to model coreferent mentions of the Desire-subject, actions taken (and the respective semantic roles of the Desire-subject), and the emotional state of the Desire-subject in the Evidences. We enhance this representation using several knowledge resources identifying word connotations [15] and relations. Fig. 3 presents a visual representation of this process, and Sec. 3 presents further details.
Based on these features, extracted from the collection of all Evidences instead of individual Evidence fragments, we train supervised binary classifiers (Unstructured models).
Latent Structure Narrative Model (LSNM)
The Unstructured Model described above captures nuanced indications of desire fulfillment by associating the Desire-subject with actions, events and mental states. However, it ignores the narrative structure, as it fails to model the 'flow of events' depicted in the transitions between the Evidences. Our principal hypothesis is that the input text presents a story. The events in the story describe the evolving attempts of the story's main character (the Desire-subject) to fulfill its desire. Therefore, it is essential to understand the flow of the story to make better judgments about its outcome.
We propose to model the evolution of the narrative using latent variables. We associate a latent state (denoted h_j) with each Evidence fragment (denoted e_j). The latent states take discrete values (out of H possible values, where H is a parameter of the model), which abstractly represent various degrees of optimism or pessimism with respect to the fulfillment, f, of the desire expressed in the Desire-expression, d. These latent states are arranged sequentially, in the order of occurrence of the corresponding Evidence fragments, and hence capture the evolution of the story (see Fig. 2).
The linear process assumed by our model can be summarized as follows: the model starts by predicting the latent state, h_0, based on the first Evidence, e_0. Thereafter, depending on the current latent state and the content of the following Evidence fragment, the model transitions to another latent state. This process is repeated until all the Evidence fragments are associated with a latent state. We formulate the transition between narrative states as sequence prediction. We associate a set of Content features with each latent state, and Evolution features with the transitions between states.
Note that the desire fulfillment status, f, is viewed as an outcome of this inference process and is modeled as the last step of this chain, using a discriminative classifier that makes its prediction based on the final latent state and a Structure-independent feature set, φ(d). This feature set can be handcrafted to include information that could not be modeled by the latent states, such as long-range dependencies and other cumulative features based on the Desire-expression, d, and the Evidence fragments, e_j.
We quantify these predictions using a linear model that depends on the various features, φ, and corresponding weights, w. Using the Viterbi algorithm we can compute the score associated with the optimal state sequence for a given input story as:
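The formula itself appears to have dropped out of this copy. A plausible reconstruction, consistent with the feature templates described in Sec. 3 (a start score, per-Evidence content scores, transition scores, and a final term combining the last state with the fulfillment label and the structure-independent features), is:

```latex
\mathrm{score}(f \mid d, \mathbf{e}) = \max_{h_0,\ldots,h_{N-1}}
\Big[\, \mathbf{w} \cdot \phi(h_0)
 + \sum_{j=0}^{N-1} \mathbf{w} \cdot \phi(h_j, e_j)
 + \sum_{j=1}^{N-1} \mathbf{w} \cdot \phi(h_{j-1}, h_j, e_j)
 + \mathbf{w} \cdot \phi(h_{N-1}, f)
 + \mathbf{w} \cdot \phi(d, f) \Big]
```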
Learning and Inference
During training, we maximize the cumulative scores of all data instances using an iterative process (Alg. 1). Each iteration of this algorithm consists of two steps. In the first step, for every instance, it uses the Viterbi algorithm (and the weights from the previous iteration, w_{t-1}) to find the highest-scoring latent state sequence, h, that agrees with the provided label (the fulfillment state), f. In the following step, it uses the state sequence determined above to obtain refined weights for the t-th iteration, w_t, using the structured perceptron [7]. The algorithm is similar to an EM algorithm with 'hard' assignments, albeit with a different objective. During testing, we use the learned weights and Viterbi decoding to compute the fulfillment state and the best-scoring state sequence. Our approach is related to the latent structured perceptron, though we only use the last state (and the structure-independent features) for prediction. (In Fig. 2, E_i refers to the i-th Evidence out of a total of N Evidences.) A compact sketch of this two-step loop follows.
We now describe our features and how they are used by the models. Table 2 defines our features, and Fig. 3 describes their extraction for an example. They capture different semantic aspects of the Desire-expression and Evidences, such as entities, their actions and connotations, and their emotive states, using lexical resources like the Connotation Lexicon [15], WordNet, and our lexicon of conforming and dissenting phrases. Before extracting features, we pre-processed the text 3 and extracted all adjectives and verbs (with their negation statuses and connotations) associated with the Desire-subject.
isConforming, isDissenting: Binary features indicating whether the Evidence starts with a conforming or dissenting phrase (respectively). See Table 3 for example phrases.
Emotional State (F10-F11): Signals about the fulfillment status could also emanate from the emotional state of the Subject. A happy or content Desire-subject can be indicative of a fulfilled desire (e.g., in Evidence e3 in Fig. 4), and vice versa. We quantify the emotional state of the Subject(s) using connotations of the adjectives modifying their mentions.
Action features (F12-F15):
These features analyze the intended action and the actions taken by various entities. We first identify the intended action, i.e., the verb immediately following the Desire-verb in the Desire-expression; e.g., in Fig. 4 the intended action is to 'help'. Thereafter, we design features that capture the connotative agreement between the intended action and the actions taken by the Desire-subject(s) in the Evidences. We also include features that describe connotations of actions (verbs) affecting the Desire-subject(s). E.g., in e1 of Fig. 4, the action by the Desire-subject (marked in blue), 'offered', is in connotative agreement with the intended action, 'help' (both have positive connotations according to [15]). Also, the actions affecting the subject ('thanked', 'gifted') have positive connotations, indicating desire fulfillment.

Sustenance Features (F16-F17): LSNM uses a chain of latent states to abstractly represent the content of the Evidences with respect to the Desire Fulfillment Status. At any point in the chain, the model has an expectation of the fulfillment status. The sustenance features indicate if the expectation should intensify, remain the same, or be reversed by the incoming Evidence fragment. This is achieved by designing features indicating if the Evidence fragment starts with a 'conforming' or a 'dissenting' phrase. E.g., e3 in Fig. 4 starts with a conforming phrase, 'Overall', indicating that the fulfillment status expectation (positive in e2) should not change. Table 3 presents some examples of the two categories. These phrases were chosen using various discourse senses mentioned in [27]. The complete list is available on the first author's webpage.
Unstructured Models
For the unstructured models, we directly used the Entailment and Discourse features (F1 to F3 in Table 2). For features F4 to F15, we summed their values across all Evidences of an instance. This ensured a constant size of the feature set in spite of a variable number of Evidence fragments per instance.
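A minimal sketch of this aggregation, assuming each Evidence has already been mapped to a dict of per-evidence feature values (the feature names F4..F15 follow the paper's indices; the helper itself is illustrative):

    def unstructured_feature_vector(instance_features, evidence_feature_dicts):
        """Build a fixed-size feature dict for the unstructured models.

        instance_features: dict with the instance-level features F1-F3.
        evidence_feature_dicts: one dict of F4-F15 values per Evidence fragment.
        """
        fv = dict(instance_features)  # F1-F3 used directly
        for ev in evidence_feature_dicts:
            for name, value in ev.items():  # F4-F15
                fv[name] = fv.get(name, 0.0) + value  # sum across Evidences
        return fv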
Latent Structure Narrative Model
Our structured model requires three types of features: (a) Content features that help the model assign latent states to Evidence fragments based on their content, (b) Evolution features that help in modeling the evolution of the story expressed by the Evidence fragments, and (c) Structure-independent features used while making the final prediction.

Content features: These features depend on the latent state of the model, h_j, and the content of the corresponding Evidence, e_j (expressed using features F4 to F15 in Table 2).
1. φ(h_j, e_j) = α if the current state is h_j; 0 otherwise, where α ∈ F4 to F15.

Evolution features: These features depend on the current and previous latent states, h_j and h_{j-1}, and/or the current Evidence fragment, e_j:
1. φ(h_{j-1}, h_j) = 1 if the previous state is h_{j-1} and the current state is h_j; 0 otherwise.
2. φ(h_{j-1}, h_j, e_j) = α if the previous state is h_{j-1} and the current state is h_j; 0 otherwise, where α ∈ F16 and F17.
3. φ(h_0) = 1 if the start state is h_0; 0 otherwise.

Structure-independent features φ(d): This feature set is exactly the same as that used by the unstructured models.
Datasets
We have used two real-world datasets for our experiments: MCTest and SimpleWiki, consisting of 174 and 1004 manually annotated instances respectively. Both datasets (available on the first author's webpage) were collected and annotated in a similar fashion.
Collection and annotation: The MCTest data originated from the Machine Comprehension Test dataset [28], which contained a set of 660 stories and associated questions. The vocabulary and concepts are limited to the extent that the stories would be understandable by 7-year-olds. We discard the questions and only consider the free text of the stories.
The SimpleWiki dataset was created from the textual content of an October 2014 dump of the Simple English Wikipedia. We discarded all lists, tables, and titles in the wiki pages. We chose Simple English Wikipedia instead of Wikipedia articles to limit the complexity of the vocabulary and world knowledge required to comprehend the content, thus making the task simpler and more manageable.
The Desire-subject(s) and the Desire Fulfillment Status were manually annotated on CrowdFlower. Each instance was annotated by 3 or more annotators, as determined by CrowdFlower using expected annotation accuracy. Annotators were also required to demonstrate proficiency on an initial set of 5 test instances. To avoid annotator fatigue, each annotator was presented only 3 instances per session. The mean CrowdFlower confidence (inter-annotator agreement weighted by their trust scores) of the annotations was 0.92.
Training and Test Sets: The SimpleWiki and MCTest data consisted of about 1000 and 175 instances respectively, 20% of which were held out as test sets. In the test sets of SimpleWiki and MCTest, 28% and 56% of the data belonged to the positive (desire fulfilled) class respectively.
Empirical Evaluation
For evaluation, we compared test set performances using the F1 score of the positive (desire fulfilled) class. We also included a simple Logistic Regression baseline based on Bag-of-Words (BoW) features. Table 4 shows the results; for the unstructured approach we report the best two models, LR (Logistic Regression) and DT (Decision Trees). We report median performance values over 100 random restarts of our model, since its performance depends on the initialization of the weights. Also, our model requires the number of latent states, H, as input, which was set to 2 and 15 for the MCTest and SimpleWiki datasets respectively using cross-validation. The difference in optimal H values (and F1 scores) for the two datasets could be attributed to the difference in complexity of the language and concepts used in them. The MCTest dataset consists of children's stories, focusing on simple concepts and goals (e.g., 'wanting to go skating'), whose fulfillment is indicated explicitly, in simple and focused language (e.g., 'They went to the skating rink together.'). On the other hand, SimpleWiki describes real-life desires (e.g., 'wanting to conquer a country'), which require sophisticated planning over multiple steps and may provide only indirect indication of the desire fulfillment status. This added complexity resulted in a harder classification problem and increased the complexity of inference over several latent states.
The table shows that LSNM outperforms the unstructured models, indicating the benefit of modeling narrative structure. Also, the unstructured models perform better than the TE model, emphasizing the need for simultaneous analysis of all the Evidence text. We obtained similar results during cross-validation. For instance, the TE model, the unstructured models (best), and LSNM yielded F1 scores of 56.9, 67.9, and 70.2 respectively on the MCTest data. This shows that modeling the narrative presented by the Evidences results in better prediction of the desire fulfillment status.
Related Work
Expressions of desires and wishes have attracted psycholinguists [29] and linguists [1] alike. [17] detect wishes from text. Analyzing desires adds a new dimension to more general tasks like opinion mining [26], where manufacturers and advertisers want to discover users' desires or needs from online reviews. Another use-case would be in resolving issues for community forum users. For instance, the number of posts in Massive Open Online Course forums often overwhelms the instructional staff [6]. Identifying posts containing unresolved issues can help focus the efforts of the instructional staff.
Our problem is related to Machine Comprehension [28]. However, unlike most systems, which are designed for understanding large textual collections (macro-reading) [12,3,13], this work focuses on micro-reading: understanding short pieces of text. [2] also address micro-reading, but with a different goal: answering domain-specific questions about entities in a paragraph.
Our task is also related to Recognizing Textual Entailment (RTE) [8,9]. However, we show that solving it additionally requires modeling the narrative structure of the text.
There have been several attempts at modeling narrative structures, which include narrative schemas [5,4], plot units [20], and Story Intention Graphs [11]. Previous work has also studied connotations and word effects on narrative modeling [15,18]. Our approach is closely related to these methods: while focusing on a specific classification task, our structured model and features share a similar motivation.
The AI task of recognizing the plans of characters in a narrative, viewing them as intentional agents [25,32,22], is also relevant. However, the focused nature of our task lets us employ latent variables to model the transitions between expectations and plans.
Latent structured models have been used previously for solving various problems in computer vision and NLP [31,33,14], though their problem settings and goals are different.
Conclusion
In this paper we have addressed the novel task of analyzing small pieces of text containing the expression of a desire to identify if the desire was fulfilled in the given text. For solving this problem, we adopt three approaches based on different assumptions. We first use a textual entailment model to analyze small fragments of text independently. Our second approach, an unstructured model, assumes that it is not sufficient to analyze different pieces of text independently; instead, the complete text should be analyzed as a whole to identify desire fulfillment. Our third approach, a structured model, is based on the hypothesis that identifying desire fulfillment requires an understanding of the narrative structure, and models it using latent variables. We compare the performance of these models on two different datasets that we have annotated and released. Our experiments establish the need to incorporate the narrative structure of the storyline offered by the text to better understand desire fulfillment.
Figure 1: Example of a Desire-expression (d), Evidence fragments (e1...e5), and a binary Desire Fulfillment Status (f). The Desire-subject and Desire-verb are marked in blue and bold fonts respectively in the Desire-expression.
Figure 3: Framework for feature extraction for an example.
Table 2: Feature definitions.
Entailment (F1): Binary prediction of the Textual Entailment model [30].
Discourse (F2, F3) ButPresent, SoPresent: Binary features indicating if a 'but' or 'so' (respectively) followed the Desire-verb ('wanted to', 'wished to', etc.) in the Desire-expression.
Focal Word (F4, F5, F6) focal count, focal syn count, focal ant count: Counts of occurrences of the focal word(s), their WordNet [24] synonyms, and antonyms (respectively) in the Evidence. Occurrences of synonyms or antonyms were identified only when they had the same POS tag as the focal word(s). (F7) focal+syn count: Sum of F4 and F5. (F8) focal lemm count: Count of occurrences of lemmatized forms of the focal word(s) in the Evidence.
Desire-subject mentions (F9) sub count: Count of all mentions (direct and co-referent) of the Desire-subject in the Evidence.
Emotional State (F10, F11) +adj, -adj count: Counts of occurrences of 'positive' and 'negative' adjectives (respectively) modifying the direct and co-referent mentions of the Desire-subject in the Evidence.
Action (F12, F13) +Agent, -Agent count: Number of times the connotation of verbs appearing in the Evidence agreed and disagreed (respectively) with that of the intended action. (F14, F15) +Patient, -Patient count: Counts of occurrences of 'positive' and 'negative' verbs (respectively) in the Evidence which had the Desire-subject as the patient.
Sustenance (F16, F17) isConforming, isDissenting: Binary features indicating if the Evidence starts with a conforming or dissenting phrase (respectively).
Figure 4: Artificial example indicating feature utility. The Desire-subject mentions are marked in blue, actions in bold, and emotions in italics. The Discourse feature is underlined.
Table 1: Normalizing the Desire-expression helps the TE model.
Table 3: Some examples of conforming and dissenting phrases.
Table 4 reports the performances of these models. For training the unstructured model, we experimented with different algorithms.
Table 4: Test set performances. Our structured model, LSNM, outperforms the unstructured, TE, and BoW models. | 2015-11-30T20:37:03.000Z | 2015-11-30T00:00:00.000 | {
"year": 2015,
"sha1": "d47efe6d595588f72213496ab5813b4906d082f3",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "d47efe6d595588f72213496ab5813b4906d082f3",
"s2fieldsofstudy": [
"Computer Science",
"Linguistics",
"Psychology"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
92990867 | pes2o/s2orc | v3-fos-license | Learning to Denoise Distantly-Labeled Data for Entity Typing
Distantly-labeled data can be used to scale up training of statistical models, but it is typically noisy and that noise can vary with the distant labeling technique. In this work, we propose a two-stage procedure for handling this type of data: denoise it with a learned model, then train our final model on clean and denoised distant data with standard supervised training. Our denoising approach consists of two parts. First, a filtering function discards examples from the distantly labeled data that are wholly unusable. Second, a relabeling function repairs noisy labels for the retained examples. Each of these components is a model trained on synthetically-noised examples generated from a small manually-labeled set. We investigate this approach on the ultra-fine entity typing task of Choi et al. (2018). Our baseline model is an extension of their model with pre-trained ELMo representations, which already achieves state-of-the-art performance. Adding distant data that has been denoised with our learned models gives further performance gains over this base model, outperforming models trained on raw distant data or heuristically-denoised distant data.
Introduction
With the rise of data-hungry neural network models, system designers have turned increasingly to unlabeled and weakly-labeled data in order to scale up model training. For information extraction tasks such as relation extraction and entity typing, distant supervision (Mintz et al., 2009) is a powerful approach for adding more data, using a knowledge base (Del Corro et al., 2015; Rabinovich and Klein, 2017) or heuristics (Ratner et al., 2016; Hancock et al., 2018) to automatically label instances. One can treat this data just like any other supervised data, but it is noisy; more effective approaches employ specialized probabilistic models (Riedel et al., 2010; Ratner et al., 2018a), capturing its interaction with other supervision (Wang and Poon, 2018) or breaking down aspects of a task on which it is reliable (Ratner et al., 2018b). However, these approaches often require sophisticated probabilistic inference for training of the final model. Ideally, we want a technique that handles distant data just like supervised data, so we can treat our final model and its training procedure as black boxes.
This paper tackles the problem of exploiting weakly-labeled data in a structured setting with a two-stage denoising approach. We can view a distant instance's label as a noisy version of a true underlying label. We therefore learn a model to turn a noisy label into a more accurate label, then apply it to each distant example and add the resulting denoised examples to the supervised training set. Critically, the denoising model can condition on both the example and its noisy label, allowing it to fully leverage the noisy labels, the structure of the label space, and easily learnable correspondences between the instance and the label.
Concretely, we implement our approach for the task of fine-grained entity typing, where a single entity may be assigned many labels. We learn two denoising functions: a relabeling function takes an entity mention with a noisy set of types and returns a cleaner set of types, closer to what manually labeled data has, and a filtering function discards examples which are deemed too noisy to be useful. These functions are learned by taking manually-labeled training data, synthetically adding noise to it, and learning to denoise, similar to a conditional variant of a denoising autoencoder (Vincent et al., 2008). Our denoising models embed both entities and labels to make their predictions, mirroring the structure of the final entity typing model itself.
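A high-level sketch of the two-stage procedure, with the filter f, relabeler g, and typing model as hypothetical stand-ins for the components described later:

    def denoise_and_train(clean_data, distant_data, f, g, train_typing_model):
        """Two-stage pipeline: denoise distant data, then standard training.

        f(example) -> True if the distant example is predicted to be unusable.
        g(example) -> a repaired label set for a retained example.
        """
        denoised = []
        for sentence, mention, noisy_types in distant_data:
            if f((sentence, mention, noisy_types)):
                continue  # filtering: discard wholly mislabeled examples
            repaired = g((sentence, mention, noisy_types))  # relabeling
            denoised.append((sentence, mention, repaired))
        # The final model treats clean + denoised data as ordinary supervision
        return train_typing_model(clean_data + denoised)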
We evaluate our model following Choi et al. (2018). We chiefly focus on their ultra-fine entity typing scenario and use the same two distant supervision sources as them, based on entity linking and head words. On top of an adapted model from Choi et al. (2018) incorporating ELMo (Peters et al., 2018), naïvely adding distant data actually hurts performance. However, when our learned denoising model is applied to the data, performance improves, and it improves more than with heuristic denoising approaches tailored to this dataset. Our strongest denoising model gives a gain of 3 F1 absolute over the ELMo baseline, and a 4.4 F1 improvement over naive incorporation of distant data. This establishes a new state-of-the-art on the test set, outperforming concurrently published work (Xiong et al., 2019) and matching the performance of a BERT model (Devlin et al., 2018) on this task. Finally, we show that denoising helps even when the label set is projected onto the OntoNotes label set (Hovy et al., 2006; Gillick et al., 2014), outperforming the method of Choi et al. (2018) in that setting as well.
Setup
We consider the task of predicting a structured target y associated with an input x. Suppose we have high-quality labeled data of n (input, target) pairs D = {(x^(1), y^(1)), ..., (x^(n), y^(n))}, and noisily labeled data of n' (input, target) pairs D' = {(x^(1), y^(1)_noisy), ..., (x^(n'), y^(n')_noisy)}. For our tasks, D is collected through manual annotation and D' is collected by distant supervision. We use two models to denoise data from D': a filtering function f disposes of unusable data (e.g., mislabeled examples) and a relabeling function g transforms the noisy target labels y_noisy to look more like true labels. This transformation improves the noisy data so that we can use it to augment D without introducing damaging amounts of noise. In the second stage, a classification model is trained on the augmented data (D combined with denoised D') and predicts y given x in the inference phase.
Case Study: Ultra-Fine Entity Typing
The primary task we address here is the fine-grained entity typing task of Choi et al. (2018). Instances in the corpus are assigned types from a vocabulary of more than 10,000 types, which are divided into three classes: 9 general types, 121 fine-grained types, and 10,201 ultra-fine types. This dataset consists of 6K manually annotated examples and approximately 25M distantly-labeled examples. 5M examples are collected using entity linking (EL) to link mentions to Wikipedia and gather types from information on the linked pages. 20M examples (HEAD) are generated by extracting nominal head words from raw text and treating these as singular type labels.
Figure 1 shows examples from these datasets which illustrate the challenges in automatic annotation using distant supervision. The manually-annotated example in (a) shows how numerous the gold-standard labeled types are. By contrast, the HEAD example (b) shows that simply treating the head word as the type label, while correct in this case, misses many valid types, including more general types. The EL example (c) is incorrectly annotated as region, whereas the correct coarse type is actually person. This error is characteristic of entity linking-based distant supervision, since identifying the correct link is a challenging problem in and of itself (Milne and Witten, 2008): in this case, Gascoyne is also the name of a region in Western Australia. The EL example in (d) has reasonable types; however, human annotators could choose more types (grayed out) to describe the mention more precisely. The average number of types annotated by humans is 5.4 per example, while the two distant supervision techniques combined yield 1.5 types per example on average.
In summary, distant supervision can (1) produce completely incorrect types, and (2) systematically miss certain types.
Denoising Model
To handle the noisy data, we propose to learn a denoising model as shown in Figure 2. This denoising model consists of filtering and relabeling functions to discard and relabel examples, respectively; these rely on a shared mention encoder and type encoder, which we describe in the following sections. The filtering function is a binary classifier that takes these encoded representations and predicts whether the example is good or bad. The relabeling function predicts a new set of labels for the given example.
We learn these functions in a supervised fashion. Training data for each is created through synthetic noising processes applied to the manually-labeled data, as described in Sections 3.3 and 3.4.
For the entity typing task, each example (x, y) takes the form ((s, m), t), where s is the sentence, m is the mention span, and t is the set of types (either clean or noisy).
Mention Encoder
This encoder is a function Φ_m(s, m) which maps a sentence s and mention m to a real-valued vector v_m. This allows the filtering and relabeling functions to recognize inconsistencies between the given example and the provided types. Note that these inputs s and m are the same as the inputs for the supervised version of this task; we can therefore share an encoder architecture between our denoising model and our final typing model. We use an encoder following Choi et al. (2018) with a few key differences, which are described in Section 4.
Type Encoder
The second component of our model is a module which produces a vector v_t = Φ_t(t). This is an encoder of an unordered bag of types. Our basic type encoder uses trainable vectors as embeddings for each type and combines these with summing. That is, the noisy types t_1, ..., t_m are embedded into type vectors {t_1, ..., t_m}. The final embedding of the type set is t = Σ_j t_j.
Type Definition Encoder: Using trainable type embeddings exposes the denoising model to potential data sparsity issues, as some types appear only a few or zero times in the training data. Therefore, we also assign each type a vector based on its definition in WordNet (Miller, 1995). Even low-frequency types are therefore assigned a plausible embedding. Let w^j_i denote the i-th word of the j-th type's most common WordNet definition. Each w^j_i is embedded using GloVe (Pennington et al., 2014). The resulting word embedding vectors w^j_i are fed into a bi-LSTM (Hochreiter and Schmidhuber, 1997; Graves and Schmidhuber, 2005), and a concatenation of the last hidden states in both directions is used as the definition representation w^j. The final representation of the definitions is the sum over these vectors for each type: w = Σ_k w^k. Our final v_t = [t; w], the concatenation of the type and definition embedding vectors.
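A minimal PyTorch sketch of this bag-of-types encoder, assuming definitions have already been converted to GloVe vectors; dimensions and module names are illustrative, not the authors' exact implementation:

    import torch
    import torch.nn as nn

    class TypeEncoder(nn.Module):
        def __init__(self, num_types, type_dim=1024, glove_dim=300, lstm_dim=512):
            super().__init__()
            self.type_emb = nn.Embedding(num_types, type_dim)  # trainable vectors
            self.def_lstm = nn.LSTM(glove_dim, lstm_dim, bidirectional=True,
                                    batch_first=True)

        def forward(self, type_ids, def_glove_vecs):
            # type_ids: (k,) indices of the k types in the noisy set
            # def_glove_vecs: list of (def_len, glove_dim) tensors, one per type
            t = self.type_emb(type_ids).sum(dim=0)  # sum of type embeddings
            defs = []
            for vecs in def_glove_vecs:
                _, (h_n, _) = self.def_lstm(vecs.unsqueeze(0))
                # concatenate last hidden states of both directions
                defs.append(torch.cat([h_n[0, 0], h_n[1, 0]], dim=-1))
            w = torch.stack(defs).sum(dim=0)  # sum over per-type definitions
            return torch.cat([t, w], dim=-1)  # v_t = [t; w]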
Filtering Function
The filtering function f is a binary classifier designed to detect examples that are completely mislabeled. Formally, f is a function mapping a labeled example (s, m, t) to a binary indicator z of whether this example should be discarded or not.
In the forward computation, the feature vectors v_m and v_t are computed using the mention and type encoders. The model prediction is then defined as P(error) = σ(u · Highway([v_m; v_t])), where σ is a sigmoid function, u is a parameter vector, and Highway(·) is a 1-layer highway network (Srivastava et al., 2015). We can apply f to each distant pair in our distant dataset D' and discard any example predicted to be erroneous (P(error) > 0.5).
Training data
We do not know a priori which examples in the distant data should be discarded, and labeling these is expensive. We therefore construct synthetic training data D_error for f based on the manually labeled data D. For 30% of the examples in D, we replace the gold types for that example with non-overlapping types taken from another example. The intuition for this procedure follows Figure 1: we want to learn to detect examples in the distant data like Gascoyne, where heuristics like entity resolution have misfired and given a totally wrong label set.
Formally, for each selected example ((s, m), t), we repeatedly draw another example ((s', m'), t') from D until we find t_error that does not have any common types with t. We then create a positive training example ((s, m, t_error), z = 1). We create a negative training example ((s, m, t), z = 0) using the remaining 70% of examples. f is trained on D_error using binary cross-entropy loss.
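A minimal sketch of this synthetic noising, assuming each clean example is a ((sentence, mention), types) pair and that a disjoint type set exists for every selected example; the 30%/swap scheme follows the description above:

    import random

    def make_filter_training_data(clean_data, noise_rate=0.3):
        """Create (example, z) pairs for the filtering function f.

        z = 1 marks a synthetically mislabeled example to be discarded.
        """
        noised = []
        for (s, m), t in clean_data:
            if random.random() < noise_rate:
                # Draw another example until its types share nothing with t
                while True:
                    (_, _), t_error = random.choice(clean_data)
                    if not (set(t) & set(t_error)):
                        break
                noised.append(((s, m, t_error), 1))  # positive (erroneous)
            else:
                noised.append(((s, m, t), 0))        # negative (clean)
        return noised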
Relabeling Function
The relabeling function g is designed to repair examples that make it through the filter but which still have errors in their type sets, such as missing types as shown in Figures 1b and 1d. g is a function from a labeled example (s, m, t) to an improved type set t' for the example.
Our model computes feature vectors v_m and v_t by the same procedure as the filtering function f. The decoder is a linear layer producing per-type probabilities e = σ(E[v_m; v_t]), where σ is an element-wise sigmoid operation designed to give binary probabilities for each type.
Once g is trained, we make a prediction t' for each (s, m, t) ∈ D' and replace t by t' to create the denoised data D_denoise = {(s, m, t'), ...}. For the final prediction, we choose all types t' where e > 0.5, requiring at least two types to be present or else we discard the example.

Training data: We train the relabeling function g on another synthetically-noised dataset D_drop generated from the manually-labeled data D. To mimic the type distribution of the distantly-labeled examples, we take each example (s, m, t) and randomly drop each type with a fixed rate of 0.7, independent of other types, to produce a new type set t'. We perform this process for all examples in D and create a noised training set D_drop, where a single training example is ((s, m, t'), t). g is trained on D_drop with the binary classification loss function over types used in Choi et al. (2018), described in the next section.
One can think of g as a type of denoising autoencoder (Vincent et al., 2008) whose reconstructed types t' are conditioned on v as well as t.
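The corresponding drop-noising procedure, sketched under the same assumptions as the filter's training data above:

    import random

    def make_relabel_training_data(clean_data, drop_rate=0.7):
        """Create ((example with dropped types), gold types) pairs for g."""
        noised = []
        for (s, m), t in clean_data:
            # Drop each type independently with probability drop_rate
            t_dropped = [ty for ty in t if random.random() >= drop_rate]
            noised.append(((s, m, t_dropped), t))  # g must reconstruct t
        return noised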
Typing Model
In this section, we define the sentence and mention encoder Φ_m, which is used both in the denoising model and in the final prediction task. We extend previous attention-based models for this task (Shimaoka et al., 2017; Choi et al., 2018). At a high level, we have an instance encoder Φ_m that returns a vector v_m ∈ R^{d_Φ}; we then multiply the output of this encoding by a matrix and apply a sigmoid to get a binary prediction for each type as a probability of that type applying.
Figure 3 outlines the overall architecture of our typing model. The encoder Φ_m consists of four vectors: a sentence representation s, a word-level mention representation m_word, a character-level mention representation m_char, and a headword mention vector m_head. The first three of these were employed by Choi et al. (2018). We have modified the mention encoder with an additional bi-LSTM to better encode long mentions, and additionally used the headword embedding directly in order to focus on the most critical word. These pieces use pretrained contextualized word embeddings (ELMo) (Peters et al., 2018) as input.
Pretrained Embeddings: Tokens in the sentence s are converted into contextualized word vectors using ELMo. Following Peters et al. (2018), we learn task-specific parameters γ_task ∈ R and s_task ∈ R^3 governing these embeddings. We do not fine-tune the parameters of the ELMo LSTMs themselves.
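For reference, these task-specific parameters implement the standard ELMo scalar mixing from Peters et al. (2018), with softmax-normalized layer weights s^task:

    e_k^task = γ^task Σ_{j=0..2} s_j^task h_{k,j}

where h_{k,j} is the j-th ELMo layer's representation of token k.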
Sentence Encoder: Following Choi et al. (2018), we concatenate the m-th word vector s_m in the sentence with a corresponding location embedding ℓ_m ∈ R^{d_loc}. Each word is assigned one of four location tokens, based on whether (1) the word is in the left context, (2) the word is the first word of the mention span, (3) the word is in the mention span (but not first), or (4) the word is in the right context. The input vectors [s; ℓ] are fed into a bi-LSTM encoder with hidden dimension d_hid, followed by a span attention layer (Lee et al., 2017; Choi et al., 2018): s = Attention(bi-LSTM([s; ℓ])), where s is the final representation of the sentence.
Mention Encoder: To obtain a mention representation, we use both word and character information. For the word-level representation, the mention's contextualized word vectors m' are fed into a bi-LSTM with hidden dimension d_hid. The concatenated hidden states of both directions are summed by a span attention layer to form the word-level mention representation: m_word = Attention(bi-LSTM(m')).
Second, a character-level representation is computed for the mention. Each character is embedded and then a 1-D convolution (Collobert et al., 2011) is applied over the characters of the mention. This gives a character vector m_char.
Finally, we take the contextualized word vector of the headword m_head as a third component of our representation. This can be seen as a residual connection (He et al., 2016) specific to the mention head word. We find the headwords in the mention spans by parsing those spans in isolation using the spaCy dependency parser (Honnibal and Johnson, 2015). Empirically, we found this to be useful on long spans, where the span attention would often focus on incorrect tokens.
The final representation of the input x is a concatenation of the sentence, the word- and character-level mention, and the mention headword representations: v = [s; m_word; m_char; m_head] ∈ R^{d_Φ}.
Decoder: We treat each label prediction as an independent binary classification problem. Thus, we compute a score for each type in the type vocabulary V_t. Similar to the decoder of the relabeling function g, we compute e = σ(Ev), where E ∈ R^{|V_t| × d_Φ} and e ∈ R^{|V_t|}. For the final prediction, we choose all types t where e > 0.5. If no element of e is greater than 0.5, we choose t = arg max e (the single most probable type).
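A minimal sketch of this decoding rule, assuming scores holds the per-type probabilities e as a 1-D tensor:

    import torch

    def decode_types(scores: torch.Tensor, threshold: float = 0.5):
        """Return the indices of predicted types given per-type probabilities."""
        chosen = (scores > threshold).nonzero(as_tuple=True)[0]
        if chosen.numel() == 0:
            # Back off to the single most probable type
            chosen = scores.argmax().unsqueeze(0)
        return chosen.tolist()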
Loss Function: We use the same loss function as Choi et al. (2018) for training. This loss partitions the labels into general, fine, and ultra-fine classes, and only treats an instance as an example for types of the class in question if it contains a label for that class. More precisely, J = 1_gen(t) L_gen + 1_fine(t) L_fine + 1_ultra(t) L_ultra, where L_... is a loss function for a specific type class (general, fine-grained, or ultra-fine), and 1_...(t) is an indicator function that is active when one of the types t is in the type class. Each L_... is a sum of binary cross-entropy losses over all types in that category. That is, the typing problem is viewed as independent classification for each type.
Note that this loss function already partially repairs the noise in distant examples from missing labels: for example, it means that examples from HEAD do not count as negative examples for general types when these are not present. However, we show in the next section that this is not sufficient for denoising.
Implementation Details
The settings of hyperparameters in our model largely follow Choi et al. (2018) and recommendations for using the pre-trained ELMo-Small model. The word embedding size d_ELMo is 1024. The type embedding size and the type definition embedding size are set to 1024. For most other model hyperparameters, we use the same settings as Choi et al. (2018): d_loc = 50, d_hid = 100, d_char = 100. The number of filters in the 1-D convolutional layer is 50. Dropout is applied with p = 0.2 for the pretrained embeddings, and p = 0.5 for the mention representations. We limit sentences to 50 words and mention spans to 20 words for computational reasons. The character CNN input is limited to 25 characters; most mentions are short, so this still captures subword information in most cases. The batch size is set to 100. For all experiments, we use the Adam optimizer (Kingma and Ba, 2014). The initial learning rate is set to 2e-03. We implement all models using PyTorch. To use ELMo, we consult the AllenNLP source code.
Experiments
Ultra-Fine Entity Typing: We evaluate our approach on the ultra-fine entity typing dataset from Choi et al. (2018). The 6K manually-annotated English examples are equally split into training, development, and test examples by the authors of the dataset. We generate synthetically-noised data, D_error and D_drop, using the 2K training set to train the filtering and relabeling functions, f and g. We randomly select 1M EL and 1M HEAD examples and use them as the noisy data D'. Our augmented training data is a combination of the manually-annotated data D and D_denoised.
OntoNotes: In addition, we investigate if denoising leads to better performance on another dataset. We use the English OntoNotes dataset (Gillick et al., 2014), which is a widely used benchmark for fine-grained entity typing systems. The original training, development, and test splits contain 250K, 2K, and 9K examples respectively. Choi et al. (2018) created an augmented training set that has 3.4M examples. We also construct our own augmented training sets with/without denoising using our noisy data D', using the same label mapping from ultra-fine types to OntoNotes types described in Choi et al. (2018).
Ultra-Fine Typing Results
We first compare the performance of our approach to several benchmark systems, then break down the improvements in more detail. We use the model architecture described in Section 4 and train it on different amounts of data: manually labeled only, naive augmentation (adding in the raw distant data), and denoised augmentation. We compare our model to Choi et al. (2018) as well as to BERT (Devlin et al., 2018), which we fine-tuned for this task. We adapt our task to BERT by forming an input sequence "[CLS] sentence [SEP] mention [SEP]" and assigning the segment embedding A to the sentence and B to the mention span. Then, we take the output vector at the position of the [CLS] token (i.e., the first token) as the feature vector v, analogous to the usage for sentence pair classification tasks. The BERT model is fine-tuned on the 2K manually annotated examples. We use the pretrained BERT-Base, uncased model with a step size of 2e-05 and batch size 32.
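A minimal sketch of this input construction using the Hugging Face tokenizer, which inserts the [CLS]/[SEP] markers and segment ids for a text pair; this is an illustrative reconstruction, not the authors' code:

    from transformers import BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

    def build_bert_input(sentence: str, mention: str):
        # Encodes "[CLS] sentence [SEP] mention [SEP]"; token_type_ids assign
        # segment A to the sentence and segment B to the mention span.
        return tokenizer(sentence, mention, return_tensors="pt",
                         truncation=True, max_length=128)

    enc = build_bert_input("Gascoyne released his first album in 1999.", "Gascoyne")
    # enc["input_ids"], enc["token_type_ids"], enc["attention_mask"]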
Results: Table 1 compares the performance of these systems on the development set. Our model with no augmentation already matches the system of Choi et al. (2018) with augmentation, and incorporating ELMo gives further gains in both precision and recall. On top of this model, adding the distantly-annotated data lowers the performance; the loss function-based approach of Choi et al. (2018) does not sufficiently mitigate the noise in this data. However, denoising makes the distantly-annotated data useful, improving recall by a substantial margin, especially in the general class. A possible reason for this is that the relabeling function tends to add more general types given finer types. BERT performs similarly to ELMo with denoised distant data. As can be seen in the performance breakdown, BERT gains from improvements in recall in the fine class.
Table 2 shows the performance of all settings on the test set, with the same trend as on the development set. Our approach outperforms the concurrently-published Xiong et al. (2019); however, that work does not use ELMo. Their improved model could be used for both denoising and prediction in our setting, and we believe this would stack with our approach.

Table 1: Macro-averaged P/R/F1 on the dev set for the entity typing task of Choi et al. (2018) comparing various systems. ELMo gives a substantial improvement over baselines. Over an ELMo-equipped model, data augmentation using the method of Choi et al. (2018) gives no benefit. However, our denoising technique allows us to effectively incorporate distant data, matching the results of a BERT model on this task (Devlin et al., 2018).
Usage of Pretrained Representations: Our model with ELMo trained on denoised data matches the performance of the BERT model. We experimented with incorporating distant data (raw and denoised) in BERT, but the fragility of BERT made it hard to incorporate: training for longer generally caused performance to go down after a while, so the model cannot exploit large external data as effectively. Devlin et al. (2018) prescribe training with a small batch size and very specific step sizes, and we found the model very sensitive to these hyperparameters, with only 2e-05 giving strong results. The ELMo paradigm of incorporating these as features is much more flexible and modular in this setting. Finally, we note that our approach could use BERT for denoising as well, but this did not work better than our current approach. Adapting BERT to leverage distant data effectively is left for future work.
Comparing Denoising Models
We now explicitly compare our denoising approach to several baselines. For each denoising method, we create denoised EL, HEAD, and EL & HEAD datasets and investigate performance on these datasets. Any denoised dataset is combined with the 2K manually-annotated examples and used to train the final model.

Heuristic Baselines: These heuristics target the same factors as our filtering and relabeling functions in a non-learned way.

SYNONYMS AND HYPERNYMS: For each type observed in the distant data, we add its synonyms and hypernyms using WordNet (Miller, 1995). This is motivated by the data construction process in Choi et al. (2018).

Table 2: Macro-averaged P/R/F1 on the test set for the entity typing task of Choi et al. (2018). Our denoising approach gives substantial gains over naive augmentation and matches the performance of a BERT model.
COMMON TYPE PAIRS: We use type pair statistics in the manually labeled training data. For each base type that we observe in a distant example, we add any type which is seen more than 90% of the time the base type occurs. For instance, the type art is given at least 90% of the times the film type is present, so we automatically add art whenever film is observed.
OVERLAP: We train a model on the manually-labeled data only, then run it on the distantly-labeled data. If there is an intersection between the noisy types t and the predicted types t', we combine them and use the result as the expanded type set. Inspired by tri-training (Zhou and Li, 2005), this approach adds 'obvious' types but avoids doing so in cases where the model has likely made an error.
Results: Table 3 reports these comparisons.
OntoNotes Results
We compare our different augmentation schemes for deriving data for the OntoNotes standard as well. [...] of providing correct general types. In the EL setting, this yields 730K usable examples out of 1M (vs. 540K for no denoising), and in HEAD, 640K out of 1M (vs. 73K).
Analysis of Denoised Labels
To understand what our denoising approach does to the distant data, we analyze the behavior of our filtering and relabeling functions. Figure 4 shows examples of the original noisy labels and the denoised labels produced by the relabeling function. In example (a), taken from the EL data, the original labels, {location, city}, are correct, but human annotators might choose more types for the mention span, Minneapolis. The relabeling function retains the original types about the geography and adds ultra-fine types about administrative units such as {township, municipality}. In example (b), from the HEAD data, the original label, {dollar}, is not very expressive by itself, since it is the name of a currency. The relabeling function adds coarse types, {object, currency}, as well as specific types such as {medium of exchange, monetary unit}. In another EL example (c), the relabeling function tries to add coarse and fine types but struggles to assign multiple diverse ultra-fine types to the mention span Michelangelo, possibly because some of these types rarely co-occur (painter and poet).
Related Work
Past work on denoising data for entity typing has used multi-instance multi-label learning (Yaghoobzadeh and Schütze, 2015, 2017; Murty et al., 2018). One view of these approaches is that they can delete noisily-introduced labels, but they cannot add them or filter bad examples. Other work focuses on learning type embeddings (Yogatama et al., 2015; Ren et al., 2016a,b); our approach goes beyond this in treating the label set in a structured way. The label set of Choi et al. (2018) is distinct in not being explicitly hierarchical, making past hierarchical approaches difficult to apply.
Denoising techniques for distant supervision have been applied extensively to relation extraction. Here, multi-instance learning and probabilistic graphical modeling approaches have been used (Riedel et al., 2010; Hoffmann et al., 2011; Surdeanu et al., 2012; Takamatsu et al., 2012), as well as deep models (Lin et al., 2016; Feng et al., 2017; Luo et al., 2017; Lei et al., 2018; Han et al., 2018), though these often focus on incorporating signals from other sources as opposed to manually labeled data.
Conclusion
In this work, we investigated the problem of denoising distant data for entity typing tasks. We trained a filtering function that discards examples from the distantly labeled data that are wholly unusable and a relabeling function that repairs noisy labels for the retained examples. When distant data is processed with our best denoising model, our final trained model achieves state-of-the-art performance on an ultra-fine entity typing task.
Acknowledgments
This work was partially supported by NSF Grant IIS-1814522, NSF Grant SHF-1762299, a Bloomberg Data Science Grant, and an equipment grant from NVIDIA. The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources used to conduct this research. Results presented in this paper were obtained using the Chameleon testbed supported by the National Science Foundation. Thanks as well to the anonymous reviewers for their thoughtful comments, members of the UT TAUR lab and Pengxiang Cheng for helpful discussion, and Eunsol Choi for providing the full datasets and useful resources.
Figure 1: Examples selected from the Ultra-Fine Entity Typing dataset of Choi et al. (2018). (a) A manually-annotated example. (b) The head word heuristic functioning correctly but missing types in (a). (c) Entity linking providing the wrong types. (d) Entity linking providing correct but incomplete types.
Figure 2: Denoising models. The Filter model predicts whether the example should be kept at all; if it is kept, the Relabel model attempts to automatically expand the label set. Φ_m is a mention encoder, which can be a state-of-the-art entity typing model. Φ_t encodes noisy types from distant supervision.
Figure 3: Sentence and mention encoder used to predict types. We compute attention over LSTM encodings of the sentence and mention, as well as using character-level and head-word representations to capture additional mention properties. These combine to form an encoding which is used to predict types.
Table 3: Macro-averaged P/R/F1 on the dev set for the entity typing task of Choi et al. (2018) with various types of augmentation added. The customized loss from Choi et al. (2018) actually causes a decrease in performance from adding any of the datasets. Heuristics can improve incorporation of this data: a relabeling heuristic (Pair) helps on HEAD and a filtering heuristic (Overlap) is helpful in both settings. However, our trainable filtering and relabeling models outperform both of these techniques.
Table 4: Test results on OntoNotes. Denoising helps substantially even in this reduced setting. Using fewer distant examples, we nearly match the performance using the data from Choi et al. (2018) (see text).
Table 5: The average number of types added or deleted by the relabeling function per example. The right-most column shows the rate of examples discarded by the filtering function.

Table 5 reports the average numbers of types added/deleted by the relabeling function and the ratio of examples discarded by the filtering function. Overall, the relabeling function tends to add more and delete fewer types. The HEAD examples have more general types added than the EL examples, since the noisy HEAD labels are typically finer. Fine-grained types are added to both EL and HEAD examples less frequently. Ultra-fine types are frequently added to both datasets, with more added to EL; the noisy EL labels are mostly extracted from Wikipedia definitions. | 2019-03-31T22:56:46.645Z | 2019-05-01T00:00:00.000 | {
"year": 2019,
"sha1": "dc138300b87f5bfccec609644d5edc08c4d783e9",
"oa_license": null,
"oa_url": "https://arxiv.org/pdf/1905.01566",
"oa_status": "GREEN",
"pdf_src": "ArXiv",
"pdf_hash": "d463de58a2cb1f08be02fc9dc55b5649609e87c1",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
9886506 | pes2o/s2orc | v3-fos-license | Constraint-Based Categorial Grammar
We propose a generalization of Categorial Grammar in which lexical categories are defined by means of recursive constraints. In particular, the introduction of relational constraints allows one to capture the effects of (recursive) lexical rules in a computationally attractive manner. We illustrate the linguistic merits of the new approach by showing how it accounts for the syntax of Dutch cross-serial dependencies and the position and scope of adjuncts in such constructions. Delayed evaluation is used to process grammars containing recursive constraints.
Introduction
Combinations of Categorial Grammar (CG) and unification naturally lead to the introduction of polymorphic categories. Thus, Karttunen (1989) categorizes NP's as X/X, where X is a verbal category, Zeevat et al. (1987) assign the category X/(NP\X) to NP's, and Emms (1993) extends the Lambek-calculus with polymorphic categories to account for coordination, quantifier scope, and extraction.
The role of polymorphism has been restricted, however, by the fact that in previous work categories were defined as feature structures using the simple, non-recursive constraints familiar from feature description languages such as PATR. Relational constraints can be used to define a range of polymorphic categories that are beyond the expressive capabilities of previous approaches.
In particular, the introduction of relational constraints captures the effects of (recursive) lexical rules in a computationally attractive manner. The addition of such rules makes it feasible to consider truly 'lexicalist' grammars, in which a powerful lexical component is accompanied by a highly restricted syntactic component, consisting of application only.
2 Recursive Constraints

In CG, many grammatical concepts can only be defined recursively. Dowty (1982) defines grammatical functions such as subject and object as being the ultimate and penultimate 'argument-in' of a verbal category. Hoeksema (1984) defines verbs as exocentric categories reducible to s. Lexical rules frequently refer to such concepts. For instance, a categorial lexical rule of passive applies to verbs selecting an object and must remove the subject.
In standard unification-based formalisms, these concepts and the rules referring to such concepts cannot be expressed directly.

(1) lex(walks, X) :- iv(X).
    lex(kisses, X) :- tv(X).

Subject-verb agreement can be incorporated easily if one reduces agreement to a form of subcategorization.
If, however, one wishes to distinguish these two pieces of information (to avoid a proliferation of subcategorization types or for morphological reasons, for instance), it is not obvious how this could be done without recursive constraints. For intransitive verbs one needs the constraint that (arg agr) = Agr (where Agr is some agreement value), for transitive verbs that (val arg agr) = Agr, and for ditransitive verbs that (val val arg agr) = Agr. The generalization is captured using the recursive constraint sv_agreement (2). In (2) and below, we use definite clauses to define lexical entries and constraints. Note that lexical entries relate words to feature structures that are defined indirectly as a combination of simple constraints (evaluated by means of unification) and recursive constraints.

(2) lex(walks, X) :- iv(X), sv_agreement(sg3, X).

Relational constraints can also be used to capture the effect of lexical rules. In a lexicalist theory such as CG, in which syntactic rules are considered to be universally valid schemes of functor-argument combination, lexical rules are an essential tool for capturing language-specific generalizations. As Carpenter (1991) observes, some of the rules that have been proposed must be able to operate recursively. Predicative formation in English, for instance, uses a lexical rule turning a category reducible to VP into a category reducing to a VP-modifier (VP\VP). As a VP-modifier is reducible to VP, the rule can (and sometimes must) be applied recursively.
2.2 Adjuncts as arguments

Miller (1992) introduces adjectival modifiers by means of a lexical rule; since a noun can be modified by any number of adjectives, the rule must be optional as well as recursive. The advantage of using a lexical rule in this case is that it simplifies accounting for agreement between nouns and adjectives and that it enables an account of word order constraints between arguments and modifiers of a noun in terms of obliqueness.
The idea that modifiers are introduced by means of a lexical rule can be extended to verbs. That is, adjuncts could be introduced by means of a recursive rule that optionally adds these elements to verbal categories. Such a rule would be an alternative for the standard categorial analysis of adjuncts as (endocentric) functors. There is reason to consider this alternative.
In Dutch, for instance, the position of verb modifiers is not fixed. Adjuncts can in principle occur anywhere to the left of the verb. There are several ways to account for this fact. One can assign multiple categories to adjuncts or one can assign a polymorphic category X/X to adjuncts, with X restricted to 'verbal projections' (Bouma, 1988).
Alternatively, one can assume that adjuncts are not functors, but arguments of the verb. Since adjuncts are optional, can be iterated, and can occur in several positions, this implies that verbs must be polymorphic. The constraint add_adjuncts has this effect, as it optionally adds one or more adjuncts as arguments to the 'initial' category of a verb:

(4) lex(veroorzaken, X) :- add_adjuncts(X, NP\(NP\S)).
    lex(geven, X) :- add_adjuncts(X, NP\(NP\(NP\S))).

The derivation of (3a) is given below (where X => Y indicates that add_adjuncts(Y, X) is satisfied, and IV = NP\S).
add_adjuncts(S, S)
An interesting implication of this analysis is that in a categorial setting the notion 'head' can be equated with the notion 'main functor'. This has been proposed by Barry and Pickering (1990), but they are forced to assign a category containing Kleene-star operators to verbal elements. The semantic counterpart of such category-assignments is unclear. The present proposal is an alternative for such assignments which avoids introducing new categorial operators and which does not lead to semantic complications (the semantics of add_adjuncts is presented in section 3.3). Below we argue that this analysis also allows for a straightforward explanation of the distribution and scope of adjuncts in verb phrases headed by a verbal complex.
Cross-Serial Dependencies
In Dutch, verbs selecting an infinitival complement (e.g., modals and perception verbs) give rise to so-called cross-serial dependencies: the arguments of the verbs involved appear in the same order as the verbs in the 'verb cluster' (7). The property of forming cross-serial dependencies is a lexical property of the matrix verb. If this verb is a 'trigger' for cross-serial word order, this order is obligatory, whereas if it is not, the infinitival complement will follow the verb:

(8) a. *dat An wil Bea kussen.
    b. dat An zich voornam Bea te kussen.
       that An Refl. planned Bea to kiss
       'that An planned to kiss Bea'
    e. *dat An zich Bea voornam te kussen.
Generalized Division
Categorial accounts of cross-serial dependencies initially made use of a syntactic rule of composition (Steedman, 1985). Recognizing the lexical nature of the process, more recent proposals have used either a lexical rule of composition (Moortgat, 1988) or a lexical rule of 'division' (Hoeksema, 1991). Division is a rule which enables a functor to inherit the arguments of its argument. To generate cross-serial dependencies, a 'disharmonic' version of this rule is needed. Hoeksema proposes that verbs which trigger cross-serial word order are subject to generalized disharmonic division (9). In a framework using recursive constraints, generalized disharmonic division can be implemented as a recursive constraint connecting the initial category of such verbs with a derived category:

(11) lex(willen, X) :- cross_serial(X, (NP\S)/(NP\S)).
     lex(zien, X) :- cross_serial(X, (NP\(NP\S))/(NP\S)).
     lex(voornemen, (NPrefl\(NP\S))/(NP\S)).

Argument inheritance is used in HPSG to account for verb clustering in German (Hinrichs and Nakazawa, 1989); the HPSG analysis is essentially equivalent to Hoeksema's account.
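The rule schemas (9)-(10) themselves were lost in extraction; a plausible reconstruction, following the standard formulations of division in the categorial literature (not verbatim from this paper), is:

    (harmonic division)     X/Y => (X/Z)/(Y/Z)
    (disharmonic division)  X/Y => (Z\X)/(Z\Y)

Applied recursively, the disharmonic version lets a verb like willen, of initial category (NP\S)/(NP\S), inherit any number of arguments of its infinitival argument, which is what the cross_serial constraint generalizes.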
Only verbs that trigger the cross-serial order are subject to the division constraint. This accounts immediately for the fact that cross-serial orders do not arise with all verbs selecting infinitival complements.
3.2 Verb Clusters

The verb_cluster constraint ensures that cross-serial word order is obligatory for verbs subject to cross_serial. To rule out the ungrammatical (8a), for instance, we assume that Bea kussen is not a verb cluster. The verb kussen by itself, however, is unspecified for vc, and thus (7a) is not excluded.
We do not assume that cross-serial verbs take lexical arguments (as has sometimes been suggested), as that would rule out the possibility of complex constituents to the right of cross-serial verbs altogether. If one assumes that a possible bracketing of the verb cluster in (7b) is [wil [zien kussen]] (coordination and fronting data have been used as arguments that this is indeed the case), a cross-serial verb must be able to combine with non-lexical verb clusters. Furthermore, if a verb selects a particle, the particle can optionally be included in the verb cluster, and thus can appear either to the right or to the left of a governing cross-serial verb. For a verb cluster containing two cross-serial verbs, for instance, we have the following possibilities:

(13) a. dat An Bea heeft durven aan te spreken
        that An Bea has dared part. to speak
        'that An has dared to speak to Bea'
A final piece of evidence for the fact that cross-serial verbs may take complex phrases as arguments stems from the observation that certain adjectival and prepositional arguments can also appear as part of the verb cluster.

Cross-serial verbs select a +vc argument. Therefore, all phrases that are not verb clusters must be marked -vc. In general, in combining a (verbal) functor with its argument, it is the argument that determines whether the resulting phrase is -vc. For instance, NP-arguments always give rise to -vc phrases, whereas particles and verbal arguments do not give rise to -vc phrases. This suggests that NP's must be marked -vc, that particles and verbs can remain unspecified for this feature, and that in the syntactic rule for application the value of the feature vc must be reentrant between argument and resultant.
3.3 The distribution and scope of adjuncts

The analysis of cross-serial dependencies in terms of argument inheritance interacts with the analysis of adjuncts presented in section 2.2. If a matrix verb inherits the arguments of the verb it governs, it should be possible to find modifiers of the matrix verb between this verb and one of its inherited arguments. This prediction is borne out (15a). However, we also find structurally similar examples in which the adjunct modifies the governed verb (15b). Finally, there are examples that are ambiguous between a wide and narrow scope reading (15c). We take it that the latter case is actually what needs to be accounted for; i.e., examples such as (15a) and (15b) are cases in which there is a strong preference for a wide and narrow scope reading, respectively, but we will remain silent about the (semantic) factors determining such preferences. On the assumption that the lexical entries for lijken and ontwijken are as in (16), example (15c) has two possible derivations ((17) and (18)). Procedurally speaking, the rule that adds adjuncts can be applied either to the matrix verb (after division has taken place) or to the governed verb. In the latter case, the adjunct is 'inherited' by the matrix verb. Assuming that adjuncts take scope over the verbs introducing them, this accounts for the ambiguity observed above. The assumption that adjuncts scope over the verbs introducing them can be implemented as follows. We use a unification-based semantics in the spirit of Pereira and Shieber (1987). Furthermore, the semantics is head-driven, i.e., the semantics of a complex constituent is reentrant with the semantics of its head (i.e., the functor). The feature structure for a transitive verb including semantics takes two NP's of the generalized quantifier type ((e,t),t) as arguments and assigns wide scope to the subject. Each time an adjunct is added to the subcategorization frame of a verb, the semantics of the adjunct is 'applied' to the semantics as it has been built up so far (SV), and the result (SA) is passed on. The final step in the recursion unifies the semantics that is constructed in this way with the semantics of the 'output' category. As an adjunct A1 that appears to the left of an adjunct A2 in the string will be added to the subcategorization frame of the governing verb after A2 is added, this orders the (sentential) scope of adjuncts according to left-to-right word order. Furthermore, since the scope of adjuncts is now part of a verb's lexical semantics, any functor taking such a verb as argument (e.g., verbs selecting an infinitival complement) will have the semantics of these adjuncts in its scope.
Note that the alternative treatments of adjuncts mentioned in section 2.2 cannot account for the distribution or scope of adjuncts in cross-serial dependency constructions. Multiple (i.e. a finite number of) categorizations cannot account for all possible word orders, since division implies that a trigger for cross-serial word order may have any number of arguments, and thus, that the number of 'subcategorization frames' for such verbs is not fixed. The polymorphic solution (assigning adjuncts the category x/x) does account for word order, but cannot account for narrow scope readings, as the adjunct will always modify the whole verb cluster (i.e. the matrix verb) and cannot be made to modify an embedded verb only.
Processing
The introduction of recursive lexical rules has repercussions for processing as they lead to an infinite number of lexical categories for a given lexical item or, if one considers lexical rules as unary syntactic rules, to non-branching derivations of unbounded length. In both cases, a parser may not terminate. One of the main advantages of modeling lexical rules by means of constraints is that it suggests a solution for this problem. A control strategy which delays the evaluation of constraints until certain crucial bits of information are filled in avoids non-termination, and in practice leads to grammars in which all constraints are fully evaluated at the end of the parse process. Consider a grammar in which the only recursive constraint is add_adjuncts, as defined in section 2.2. The introduction of recursive constraints in itself does not solve the non-termination problem. If all solutions for add_adjuncts are simply enumerated during lexical look-up, an infinite number of categories for any given verb will result. During processing, however, it is not necessarily the case that we need to consider all solutions. Syntactic processing can lead to a (partial) instantiation of the arguments of a constraint. If the right pieces of information are instantiated, the constraint will only have a finite number of solutions.
Consider, for instance, a parse for the following string.
[example string and its derivation tree lost in extraction; only the categories NP\S and S survive]
Even if the category of the verb is left completely open initially, there is only one derivation for this string that reduces to S (remember that the syntax uses application only). This derivation provides the information that the variable Verb must be a transitive verb selecting one additional adjunct, and with this information it is easy to check whether the following constraint is satisfied:
add_adjuncts(NP\(ADJ\(NP\S)), NP\(NP\S)).
This suggests that recursive constraints should not be evaluated during lexical look-up, but that their evaluation should be delayed until the arguments are sufficiently instantiated.
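As an illustration of what this recursive constraint computes, here is a minimal sketch in Python (the tuple encoding of categories is our assumption, not the grammar's feature structures). Evaluating the check only once both categories are instantiated mirrors the delayed-evaluation regime discussed next:

ADJ = 'adj'

def add_adjuncts(x, y):
    """True iff category x equals y with zero or more ADJ arguments inserted."""
    if x == y == 's':                              # recursion ends at the atomic category s
        return True
    if isinstance(x, tuple):
        _, arg, result = x                         # x encodes arg \ result
        if arg == ADJ:                             # an adjunct added by the rule
            return add_adjuncts(result, y)
        if isinstance(y, tuple) and arg == y[1]:   # a genuine argument of y
            return add_adjuncts(result, y[2])
    return False

tv = ('\\', 'np', ('\\', 'np', 's'))                     # NP\(NP\S)
extended = ('\\', 'np', ('\\', ADJ, ('\\', 'np', 's')))  # NP\(ADJ\(NP\S))
print(add_adjuncts(extended, tv))                        # True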
To implement this delayed evaluation strategy, we used the block facility of SICStus Prolog. For each recursive constraint, a block declaration defines the conditions under which it may be evaluated. The definition of add_adjuncts (with semantics omitted for readability), for instance, now becomes:

(23) add_adjuncts([arg Arg]X, Y) :- add_adjuncts(X, Y, Arg).
We use add_adjuncts/2 to extract the information that determines when add_adjuncts/3 is to be evaluated. The block declaration states that add_adjuncts/3 may only be evaluated if the third argument (i.e. the argument of the 'output' category) is not a variable. During lexical look-up, this argument is uninstantiated, and thus no evaluation takes place. As soon as a verb combines with an argument, the argument category of the verb is instantiated and add_adjuncts/3 will be evaluated. Note, however, that calls to add_adjuncts/3 are recursive, and thus one evaluation step may lead to another call to add_adjuncts/3, which in its turn will be blocked until the argument has been instantiated sufficiently. Thus, the recursive constraint is evaluated incrementally, with each syntactic application step leading to a new evaluation step of the blocked constraint. The recursion will stop if an atomic category s is found.
Delayed evaluation leads to a processing model in which the evaluation of lexical constraints and the construction of derivational structure is completely intertwined.
Other strategies
The delayed evaluation techniques discussed above can be easily implemented in parsers which rely on backtracking for their search. For the grammars that we have worked with, a simple bottom-up (shift-reduce) parser combined with delayed evaluation guarantees termination of the parsing process.
To obtain an efficient parser, more complicated search strategies are required. However, chart-based search techniques are not easily generalized for grammars which make use of complex constraints. Even if the theoretical problems can be solved (Johnson, 1993; Dörre, 1993), severe practical problems might surface if the constraints are as complex as the ones proposed here.
As an alternative we have implemented chart-based parsers using the 'non-interleaved pruning' strategy (terminology from (Maxwell III and Kaplan, 1994)).
Using this strategy the parser first builds a parse-forest for a sentence on the basis of the context-free backbone of the grammar. In a second processing phase parses are recovered on the basis of the parse forest and the corresponding constraints are applied. This may be advantageous if the context-free backbone of the grammar is 'informative' enough to filter many unsuccessful partial derivations that the parser otherwise would have to check.
As a CUG grammar clearly does not contain such an informative context-free backbone, a further step is to use 'selective feature movement' (cf. again (Maxwell III and Kaplan, 1994)). In this approach the base grammar is compiled into an equivalent modified grammar in which certain constraints from the base grammar are converted into a more complex context-free backbone in the modified grammar.
Again, this technique does not easily give good results for grammars of the type described. It is not clear at all where we should begin extracting appropriate features for such a modified grammar, because most information passing is simply too 'indirect' to be easily compiled into a context-free backbone.
We achieved the best results by using a 'hand-fabricated' context-free grammar as the first phase of parsing. This context-free grammar builds a parse forest that is then used by the 'real' grammar to obtain appropriate representation(s) for the input sentence. This turned out to reduce parsing times considerably.
Clearly such a strategy raises questions on the relation between this context-free grammar and the CUG grammar. The context-free grammar is required to produce a superset of the derivations allowed by the CUG. Given the problems mentioned above it is difficult to show that this is indeed the case (if it were easy, then it probably would also be easy to obtain such a context-free grammar automatically).
The strategy can be described in somewhat more detail as follows. The context-free phase of processing builds a number of items defining the parse forest, in a format that can be used by the second processing phase. Such items are four-tuples (R, P0, P, D) where R is a rule name (consistent with the rule names from the CUG), P0 and P are string positions, and D describes the string positions associated with each daughter of the rule (indicating which part of the string is covered by that daughter).
Through a head-driven recursive descent the second processing phase recovers derivations on the basis of these items. Note that the delayed evaluation technique for complex constraints is essential here. Alternative solutions are obtained by backtracking. If the first phase has done a good job in pruning many failing search branches then this is not too expensive, and we do not have to worry about the interaction of caching and complex constraints.
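A hypothetical Python sketch of the item format and of the backtracking second phase (the names and the toy forest are invented for illustration; spans with no item are treated as lexical leaves):

from itertools import product
from typing import NamedTuple

class Item(NamedTuple):
    rule: str        # rule name, consistent with the CUG rule names
    p0: int          # left string position of the mother
    p: int           # right string position of the mother
    daughters: tuple # (start, end) span covered by each daughter

def derivations(items, span):
    """Recover all derivation trees covering `span` from the parse forest."""
    matched = [it for it in items if (it.p0, it.p) == span]
    if not matched:          # no item for this span: a lexical leaf
        yield span
        return
    for it in matched:
        for kids in product(*(list(derivations(items, d)) for d in it.daughters)):
            yield (it.rule, list(kids))

# toy forest for a three-word string
forest = [Item('appl', 0, 3, ((0, 1), (1, 3))),
          Item('appl', 1, 3, ((1, 2), (2, 3)))]
for tree in derivations(forest, (0, 3)):
    print(tree)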
Final Remarks
In sections 2 and 3 we have sketched an analysis of cross-serial dependency constructions and its interaction with the position and scope of adjuncts. The rules given there are actually part of a larger fragment that covers the syntax of Dutch verb clusters in more detail.
The fragment accounts for cross-serial dependencies and extraposition constructions (including cases of 'partial' extraposition), infinitivus pro participio, modal and participle inversion, the position of particles in verb clusters, clitic climbing, partial VP-topicalization, and verb second. In the larger fragment, additional recursive constraints are introduced, but the syntax is still restricted to application only.
The result of Carpenter (1991) emphasizes the importance of lexical rules. There is a tendency in both CG and HPSG to rely more and more on mechanisms (such as inheritance and lexical rules or recursive constraints) that operate in the lexicon. The unrestricted generative capacity of recursive lexical rules implies that the remaining role of syntax can be extremely simple. In the examples above we have stressed this by giving an account for the syntax of cross-serial dependencies (a construction that is, given some additional assumptions, not context-free) using application only. In general, such an approach seems promising, as it locates the sources of complexity for a given grammar in one place, namely the lexicon. | 1994-04-19T03:58:10.000Z | 1994-04-19T00:00:00.000 | {
"year": 1994,
"sha1": "4900dd87e73ca09cd5ef809df1366297024afeb7",
"oa_license": null,
"oa_url": "http://dl.acm.org/ft_gateway.cfm?id=981753&type=pdf",
"oa_status": "BRONZE",
"pdf_src": "ACL",
"pdf_hash": "2447164c6da9a3d96af69408f60a9f7f07203ce9",
"s2fieldsofstudy": [
"Linguistics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
234539477 | pes2o/s2orc | v3-fos-license | Investigation of the influence of low frequency electric fields on model water-in-oil emulsions
The article describes the mechanisms of coagulation and coalescence of emulsion droplets in electric fields, defines the frequency ranges at which coagulation and coalescence of emulsion droplets occur, and determines the establishment times of these processes. The experiments revealed a tendency for the intensity of the studied processes to increase with increasing field strength and salinity of the aqueous phase.
Introduction
One of the main tasks of oil production is the destruction of water-in-oil emulsions. Water-in-oil emulsions are essentially a heterogeneous system consisting of micron-sized drops of water dispersed in oil. Each drop is surrounded by high-molecular-weight oil components, which prevents the coalescence of water droplets. The emulsion destruction mechanism is based on two processes: coagulation and coalescence of droplets. Stimulation of these processes is possible using low-frequency electric fields. The choice of the frequency of the electric field depends on the physicochemical properties of the emulsion. In the low-frequency range, from 1 to 100 kHz, the formation of chains, elongated mainly along the direction of the electric field lines, is observed [1][2][3]. Adamiak [4] numerically investigated the deformation of two perfectly conducting, uniformly sized drops in a uniform electric field. The attraction of individual water droplets in an electric field and the formation of coagulation chains are mainly due to the action of electrophoretic forces on the droplets [5], determined by the formula.
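The formula itself did not survive extraction. The standard time-averaged dielectrophoretic force on a small spherical drop, consistent with the symbols defined in the next sentence, would read as follows; this reconstruction is an assumption, not necessarily the authors' exact expression:

F = 2πε1R³ · ((ε2 − ε1)/(ε2 + 2ε1)) · ∇E0²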
where ε1 is the dielectric constant of the medium, ε2 is the dielectric constant of the drop, R is the radius of the drop, and E0 is the strength of the external electric field. The approach of water droplets under the action of electrophoretic forces continues until the energy barrier of the repulsive forces is overcome, after which coalescence of the droplets occurs. The energy barrier of the repulsive forces depends on the strength properties of the droplet shell. Consequently, the patterns of coagulation and coalescence depend on the physicochemical properties of the dispersed phase and the dispersion medium of the emulsion, as well as on the frequency and field strength.
J. D. McLean et al. [6] studied the effect of a low-frequency electric field on multiple and individual drops of model emulsions of various concentrations and with different salinities of the aqueous phase, which affected the dielectric constant of the drop. To produce a model emulsion, asphaltenes (from 1 to 8% of the emulsion volume) are dissolved in toluene for two hours, heptane is added to the resulting solution in equal proportion to the toluene and mixed for 2 hours in a mixer at a rotation speed of 500 rpm, and water with different NaCl content (0 mg/l, 10 mg/l, or 20 mg/l) is added to the resulting solution in a volume of 20% of the emulsion volume and mixed for 20 minutes in a mixer at a rotation speed of 3000 rpm.
Material and methods
To visualize the behavior of emulsion droplets in electric fields, a laboratory bench was developed. Figure 1 shows a schematic diagram of the laboratory setup. The electric field is set by an AG 1021 generator (T&C Power Conversion) with a frequency range from 0.1 to 15 MHz and a variable power of up to 300 W. The electromagnetic field is supplied from the generator via an RG6 radio-frequency cable to two parallel copper conductors. To monitor the applied voltage, an oscilloscope is connected to the line.
Results
The research results showed that in emulsions with distilled water (NaCl 0 mg/L), coagulation chains are not observed in an electric field. For emulsion samples with salt water (sample No. 1: NaCl 10 mg/L; sample No. 2: NaCl 20 mg/L), droplet aggregates form as chains extended mainly along the direction of the electric field lines. Figure 2 presents frames before and after electrical exposure. The exposure parameters (field frequency, field strength, exposure time) were selected individually for each emulsion sample. The graphs show that the intensity of chain formation increases for the sample with higher water salinity. As the water concentration, and therefore the number of drops, increases up to 30%, the chain-formation time decreases; above 30%, a slight increase in time is observed. With increasing field strength, the time difference between the studied samples decreases. Next, the mechanism of coagulation and coalescence of individual drops was investigated. A dilute emulsion is fed into the cell and two drops of different diameters are fixed in the center between two electrodes, at a distance of 10 to 30 microns from each other. Then a voltage with a frequency of 100 kHz and an amplitude in the range from 50 to 120 V, starting from the minimum amplitude, is supplied to the electrodes. The amplitude values at which attraction and coalescence of the droplets occur are recorded. Figure 5 presents fragments from the video. The exposure time was 1 s. The radius of the big drop is 57 μm and the radius of the small drop is 17 μm.
Conclusions
In the absence of drops in the center of the cell between the two electrodes, the electric field is uniform, i.e., the field strength is constant. The appearance of a droplet violates the uniformity of the field around the droplet, and a gradient of electric field strength arises at a certain distance. Drops of smaller radius located in this region of inhomogeneity are attracted to the droplet. Three stages are observed in these processes. When the electric field reaches 112 kV/m, the drop with the smaller radius begins to move toward the large drop. When the electric field reaches 117 kV/m, the system of two drops begins to line up along the field, but the drops do not coalesce. Upon reaching an electric field strength of 120 kV/m, coalescence of the droplets occurs.
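As a back-of-the-envelope check (not the authors' computation; the permittivities and the center-to-center separation are assumed), the induced dipole moments and the point-dipole attraction between the two drops of Figure 5 at the 112 kV/m threshold can be estimated as follows:

import math

eps0 = 8.854e-12        # vacuum permittivity, F/m
eps1 = 2.2              # relative permittivity of the oil (assumed)
eps2 = 80.0             # relative permittivity of the saline drop (assumed)
K = (eps2 - eps1) / (eps2 + 2 * eps1)   # Clausius-Mossotti factor

E0 = 112e3              # field strength at which motion starts, V/m
R1, R2 = 57e-6, 17e-6   # drop radii reported above, m
d = 2 * R1              # assumed center-to-center separation, m

p1 = 4 * math.pi * eps0 * eps1 * R1**3 * K * E0  # induced dipole moments
p2 = 4 * math.pi * eps0 * eps1 * R2**3 * K * E0
F = 6 * p1 * p2 / (4 * math.pi * eps0 * eps1 * d**4)  # aligned point dipoles
print(f"attractive force ~ {F:.1e} N")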
Thus, the mechanisms of coagulation and coalescence of emulsion droplets in electric fields are revealed. The frequency ranges at which coagulation and coalescence of emulsion droplets occur, and the establishment times of these processes, are determined. A tendency for the intensity of the studied processes to increase with increasing field strength and salinity of the aqueous phase has been found. The critical field strengths for the coagulation and coalescence processes of the studied emulsions are established. The results will be used in mathematical modeling of the coagulation and coalescence of water-in-oil emulsion systems. | 2020-12-24T09:12:40.878Z | 2020-12-01T00:00:00.000 | {
"year": 2020,
"sha1": "c8f5bc29adcb4c4864574cd86e1c3179c0d9663b",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1675/1/012104",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "6d9637939f32c94bc0e4dc58dfec92b1fe972d57",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
230782525 | pes2o/s2orc | v3-fos-license | Impact of the weekend effect on outcome after microsurgical clipping of ruptured intracranial aneurysms
Background The “weekend effect” describes the assumption that weekend and/or on-call duty admission of emergency patients is associated with increased morbidity and mortality rates. For aneurysmal subarachnoid hemorrhage, we investigated, whether presentation out of regular working hours and microsurgical clipping at nighttime correlates with worse patient outcome. Methods This is a retrospective review of consecutive patients that underwent microsurgical clipping of an acutely ruptured aneurysm at our institution between 2010 and 2019. Patients admitted during (1) regular working hours (Monday–Friday, 08:00–17:59) and (2) on-call duty and microsurgical clipping performed during (a) daytime (Monday–Sunday, 08:00–17:59) and (b) nighttime were compared regarding the following outcome parameters: operation time, treatment-related complications, vasospasm, functional outcome, and angiographic results. Results Among 157 enrolled patients, 104 patients (66.2%) were admitted during on-call duty and 48 operations (30.6%) were performed at nighttime. Admission out of regular hours did not affect cerebral infarction (p = 0.545), mortality (p = 0.343), functional outcome (p = 0.178), and aneurysm occlusion (p = 0.689). Microsurgical clipping at nighttime carried higher odds of unfavorable outcome at discharge (OR: 2.3, 95%CI: 1.0–5.1, p = 0.039); however, there were no significant differences regarding the remaining outcome parameters. After multivariable adjustment, clipping at nighttime did not remain as independent prognosticator of short-term outcome (OR: 2.1, 95%CI: 0.7–6.2, p = 0.169). Conclusions Admission out of regular working hours and clipping at nighttime were not independently associated with poor outcome. The adherence to standardized treatment protocols might mitigate the “weekend effect.”
Introduction
Aneurysmal subarachnoid hemorrhage (aSAH) is a severe neurological condition caused by spontaneous rupture of intracranial aneurysms. In order to reduce the risk of aneurysm rebleeding and further potential brain damage, timely aneurysm occlusion is advocated [17].
Microsurgical clipping represents a well-established, safe and effective technique for aneurysm occlusion, in particular for complex aneurysms, which are challenging to treat by endovascular means [11][12][13]. When patients are admitted during the weekend or during nighttime, the question arises whether microsurgical clipping should be performed during on-call duty or postponed to the next working day. Previous studies suggested that emergency hospital admissions and/or procedures performed at night and on weekends may be associated with increased morbidity and mortality when compared to admissions during the routine daytime shift [3]. This phenomenon was denoted the "weekend effect" [20]. Although there is some controversy on this subject, this effect was reported for several neurological diseases, such as ischemic stroke [18], intracranial hemorrhage [9], and subarachnoid hemorrhage [27]. The cause of this effect has not yet been determined with absolute certainty. Possible reasons are, for example, a reduction of both the medical staff and the availability of resources, and organizational factors outside routine working times [20]. Moreover, due to the human circadian rhythm, cognitive performance varies throughout the day and usually reaches its lowest point at night, possibly yielding reduced overall quality of patient management [29]. For aSAH patients, the association between timing of microsurgical clipping, surgical performance and clinical outcome has not yet been analyzed systematically. The objective of this study was to evaluate whether admission of patients with aSAH out of regular working hours and/or microsurgical clipping performed at night is associated with a worse patient outcome. For this purpose, the following outcome parameters were defined: operation time, cerebral infarction, in-hospital mortality, functional outcome and angiographic results.
Methods
Consecutive aSAH patients who underwent microsurgical clipping of the index aneurysm at the University Hospital of Cologne between January 2010 and December 2019 were retrospectively reviewed. The catchment area comprises around 700,000 inhabitants. Aneurysms are treated by three to four vascular neurosurgeons per year, who are on call alternately. At the authors' institution, around 140 aneurysms (70 ruptured, 70 unruptured) are treated per year, 40 of them by clipping and 100 by endovascular means. There are no hybrid vascular neurosurgeons. Endovascular aneurysm treatment is performed by interventional neuroradiologists. The study was approved by the local ethics committee (Registration ID: 13-104) and was conducted in accordance with the Declaration of Helsinki.
Procedural details
Upon radiological proof of aSAH, computed tomography angiography (CTA) and/or digital subtraction angiography (DSA) was performed in order to determine aneurysm location, size, morphology and vascular anatomy. The institutional treatment protocol consists of treating ruptured intracranial aneurysms as soon as possible after diagnosis, both for patients with intracranial hemorrhage and without. Ultimately, the operation starts at the discretion of the vascular neurosurgeon on duty, which depends on several factors such as case volume, clinical patient condition and time of the night. Microsurgical clipping was performed using an OPMI® PENTERO® 800 operation microscope with integrated FLOW 800 module (Carl Zeiss AG, Oberkochen, Germany). Aneurysm occlusion and parent artery patency were intraoperatively evaluated by the use of micro-Doppler ultrasound and/or indocyanine green (ICG) videoangiography (VAG) with additional FLOW 800 analysis of cerebral perfusion [15,16]. After surgery, the patients were monitored in an intensive care unit until at least 14 days after the initial bleeding. Within 24 h after surgery, the patients received a cranial computed tomography scan to exclude rebleeding and treatment-related cerebral infarction. Transcranial Doppler ultrasound was performed daily, considering a mean cerebral blood flow velocity ≥ 120 cm/s and/or an increase by ≥ 50 cm/s within 24 h as indicative of cerebral vasospasm. In this case, the patients underwent a cranial CT scan with angiography and perfusion for radiological proof of vasospasm.
Data collection
The following parameters were collected retrospectively from the medical charts: patient age, gender, day and time of admission, World Federation of Neurosurgical Societies (WFNS) grading scale score, Fisher score, and neurological status at discharge and at 6-month follow-up. Operation records were reviewed to retrieve the following procedural specifics: day and time of surgery, admission-to-surgery time (time interval between admission and start of surgery), operation time (interval between skin incision and suture), use of micro-Doppler and/or ICG-VAG, temporary clipping, and intraoperative rupture. Preoperative CTA and DSA scans were reviewed to determine the aneurysm location, size (i.e. largest aneurysm diameter), neck width, morphology (regular/irregular), calcification of the aneurysm wall, partial thrombosis and vessels arising from the aneurysm sac. Following the criteria proposed by Andaluz et al., and in consideration of the exclusion criteria, location at the posterior circulation, a neck width > 6 mm, lobulated morphology, calcification of the aneurysm wall, intrasaccular thrombosis, and vessels arising from the aneurysm sac were defined as complex aneurysms [1,14]. Treatment-related infarction was defined as any new ischemic lesion on postoperative CT within 48 h after surgery that could be clearly related temporally and spatially to the clipping procedure and the parent artery of the treated aneurysm. The CT and magnetic resonance imaging (MRI) scans were thoroughly reviewed to evaluate whether vasospasm was present and might have caused the cerebral ischemia. To evaluate functional outcome, the modified Rankin scale (mRS) score was determined at discharge and at 6-month follow-up. An mRS score ≤ 2 was defined as a favorable outcome and an mRS score of 3-6 as an unfavorable outcome. An mRS of 6 indicates death. Selected patients received angiographic control of aneurysm occlusion, either at the end of their hospital stay or at follow-up visits. The Raymond-Roy occlusion classification (RROC) was applied to assess aneurysm occlusion: (1) complete occlusion, (2) neck remnant, and (3) aneurysm remnant.
Statistical analysis
Baseline patient and aneurysm characteristics were analyzed using descriptive statistics. To compare categorical variables, the chi-square and Fisher's exact tests were used, as appropriate. Continuous variables were presented as means ± standard deviation and tested for normality using the Shapiro-Wilk test. Groups were compared using the two-sided unpaired Student's t test (for normally distributed data) and the Mann-Whitney U test (for non-normally distributed data). Factors predictive of unfavorable functional outcome in the univariate analysis (p < 0.1) were entered into a binary logistic stepwise regression model to identify factors independently associated with clinical outcome. All calculations were performed using SPSS software (version 25, IBM SPSS Statistics for Windows, Armonk, NY, USA). A p value < 0.05 was considered statistically significant.
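A minimal, purely illustrative sketch of such a binary logistic regression in Python on synthetic data (the variable names mirror the study; the data and coefficients are invented and carry no clinical meaning):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 157
X = np.column_stack([
    rng.normal(55, 13, n),      # age, years
    rng.integers(0, 2, n),      # WFNS 4-5 (1 = yes)
    rng.integers(0, 2, n),      # Fisher 4 (1 = yes)
    rng.integers(0, 2, n),      # clipping at nighttime (1 = yes)
])
logit_p = -2 + 0.03 * (X[:, 0] - 55) + 1.2 * X[:, 1] + 0.8 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(float)  # unfavorable outcome

fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print(np.exp(fit.params[1:]))        # adjusted odds ratios
print(np.exp(fit.conf_int()[1:]))    # 95% confidence intervals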
Patient and aneurysm characteristics
A total of 157 patients met the inclusion criteria and were enrolled. The distribution of patient admission and start of surgery in dependence of the time of the day is shown in Fig. 1. The mean patient age was 55.4 ± 13.4 years and 113 patients were female (72.0%). Fifty-five patients (35.0%) presented with WFNS grade 4 or 5 and 90 had a Fisher 4 hemorrhage (57.3%). Among 157 ruptured aneurysms, 45 (28.7%) were located at the anterior cerebral artery, 75 (47.8%) at the middle cerebral artery, 31 (19.7%) at the internal carotid artery and 6 (3.8%) at the posterior circulation. The mean aneurysm size was 7.4 ± 3.4 mm and the mean neck width was 3.8 ± 1.7 mm. Irregular shape was seen in 127 aneurysms (80.9%). Seventeen aneurysms (11.1%) had a neck width > 6 mm, 55 (35.0%) were lobulated, 19 (12.1%) had calcifications of the aneurysm wall, 2 (1.3%) were partially thrombosed, and 9 (5.7%) had vessels arising from the aneurysm sac. Eighty-five aneurysms (54.1%) were defined as complex aneurysms. Microsurgical clipping was performed by 7 vascular neurosurgeons during the study period. Table 1 shows how many aneurysms were clipped during standard working hours, on-call duty, and nighttime by each individual neurosurgeon. In three patients, coiling of the ruptured aneurysm was attempted but failed, and the aneurysms were subsequently treated by microsurgical clipping. The mean admission-to-surgery time was 9.2 ± 6.8 h and the mean operation time was 270 ± 80 min. Micro-Doppler ultrasound was used in 91.7%, ICG-VAG in 85.3% and temporary parent artery clipping was performed in 40.7%. Intraoperative rupture occurred in 26.1% and treatment-related cerebral infarction in 21.7%. Ninety patients developed vasospasm (57.3%) and the overall cerebral infarction rate was 45.2%. The in-hospital mortality rate was 13.4%, while 32.4% achieved a favorable outcome at discharge and 49.0% at follow-up. Among 89 patients with available angiographic follow-up, 73.0% had complete occlusion, 20.3% had neck remnants, and 6.7% had aneurysm remnants.
Admission during standard working hours vs. on-call duty
Stratifying for admission day and time, 53 patients (33.8%) were admitted during standard working hours and 104 (66.2%) during on-call duty. Thereof, 74 patients (47.1%) were admitted during nighttime. Baseline patient and aneurysm characteristics were comparable between both groups, as detailed in Table 2. For patients admitted during standard working hours, the mean admission-to-surgery time (8.4 ± 7.4 h vs. 9.7 ± 6.4 h, p = 0.289) and the mean operation time (282 ± 83 min vs. 264 ± 78 min, p = 0.173) were comparable to patients admitted during on-call duty. There were no significant differences regarding procedural specifics and procedural complications, as specified in Table 3. During the hospital stay, a similar proportion of patients in both groups developed vasospasm (p = 0.833) and cerebral infarction (p = 0.491). Favorable outcome tended to be achieved more often by patients who were admitted during on-call duty (37.5% vs. 22.6%, p = 0.060), while in-hospital mortality rates were comparable between the groups (17.0% vs. 11.5%, p = 0.343). The difference in functional outcome was mitigated at 6-month follow-up (52.9% vs. 41.5%, p = 0.178). Complete occlusion rates at angiographic follow-up were similar in both groups (75.9% vs. 71.7%, p = 0.689). Patient outcome is described in detail in Table 4.
Operation start daytime vs. nighttime

Next, the patient cohort was stratified based on the time of surgery. Microsurgical clipping was performed during daytime in 109 patients (69.4%) and during nighttime in 48 (30.6%). Among 83 patients admitted before 18:00, 26 (31.3%) were operated on at night. Of 74 patients admitted after 18:00, 22 (29.7%) underwent microsurgical clipping the same night. In the other patients, surgery was delayed to the next day. There were no significant differences regarding baseline patient and aneurysm characteristics between both groups, as shown in Table 2. For surgeries performed during nighttime, the mean admission-to-surgery time (5.0 ± 3.9 h) was significantly shorter than that for surgeries during daytime (11.1 ± 6.9 h, p < 0.001), while the mean operation time (p = 0.337) showed no significant difference (Table 2). During nighttime, ICG-VAG was used significantly less frequently (75.0%) than during daytime (89.9%, p = 0.015), while there was no significant difference in micro-Doppler usage (p = 0.203). Also, the frequency of temporary parent artery clipping (p = 0.966), intraoperative rupture (p = 0.545) and treatment-related cerebral infarction (p = 0.835) were comparable between both groups (Table 3). Vasospasm (p = 0.218) and overall cerebral infarction (p = 0.653) occurred in a similar proportion of patients in both groups (Table 3). Infarction rates were slightly higher in patients treated without ICG-VAG (33.3%) compared to those treated with ICG-VAG
Risk factors for unfavorable outcome at discharge
As surgery during nighttime was associated with a worse functional outcome at discharge, the impact of baseline patient characteristics, aneurysm characteristics and procedural specifics on functional outcome was further evaluated. In the univariate analysis, besides surgery during nighttime (p = 0.039), higher patient age (p < 0.001), WFNS 4+5 (p < 0.001), Fisher 4 (p < 0.001), longer admission-to-surgery time (p = 0.001), and overall cerebral infarction (p < 0.001) were predictive of unfavorable outcome. In the multivariate analysis, patient age (odds ratio
Discussion
In the current study, on-call duty admission was not associated with increased morbidity or mortality. However, microsurgical clipping at nighttime carried higher odds of unfavorable outcome at discharge, representing a potential "weekend effect." However, cerebral infarction rates were independent of the time of treatment and the difference in patient outcome was mitigated at mid-term follow-up. Furthermore, nighttime surgery was not independently associated with patient outcome after multivariable adjustment. To the best of our knowledge, this is the first clinical study that investigates the impact of a potential "weekend effect" on aSAH patients undergoing microsurgical clipping.
The weekend effect was first described in the benchmark study by Bell et al. in 2001 [3]. The authors performed a population-based study and revealed a higher mortality rate for 23 of 100 investigated nontraumatic causes of death among patients admitted on weekends [3]. Since then, the weekend effect has been evaluated for various traumatic and non-traumatic diseases [5,6,21,23,26,34,36]. Recently, Pauls et al. performed a meta-analysis of 97 studies with various types of emergency admissions, confirming increased odds of mortality (OR: 1.19, 95%CI: 1.14-1.23) for weekend versus weekday admission [32]. Several potential reasons for the weekend effect were discussed: During weekends, there is generally a shortage of both physicians and nurses, which is potentially associated with an increased individual workload. In this context, there is a decreased availability and performance of hospital services, such as imaging and interventional procedures [3]. Moreover, some authors proposed that physicians on duty might be less experienced with the management of a specific emergency case leading to suboptimal patient care [3,32]. For patients with myocardial infarction, Kostis et al. demonstrated that patients admitted during on-call duty were less likely to receive invasive procedures in a timely manner [23].
Regarding aSAH, a potential weekend effect is possible given that the management of patients with an acute SAH require meticulous care, highly trained human resources, considerable technological resources and timely invasive procedures. In the available literature, however, there is conflicting evidence regarding this matter. In a population based study, Johnson et al. reported higher odds for mortality among SAH patients presenting at the weekend (OR: 1.07, 95%CI: 1.02-1.12) [22]. Likewise, Mikhail et al. observed higher mortality rates in SAH patients with a poor neurological grade (OR 6.59, 95% CI 1.62-26.88) [27]. In contrast, Pandey et al. and Crowley et al. found no association between weekend admission and mortality in population-based studies [8,31].
The authors' institution follows a "coil-first" policy; hence, microsurgical clipping is proportionally more often performed in patients with space-occupying intracranial hemorrhage and brain edema. In the multivariate analysis, a poor WFNS grade and a high Fisher grade were independently associated with unfavorable functional outcome. A good functional outcome at 6-month follow-up was achieved by 49%. For comparison, in the international subarachnoid aneurysm trial (ISAT), a good neurological outcome was reported for 69% in the clipping cohort [28]. However, the ISAT contained a preselected subset of patients with predominantly low-grade SAH. Interestingly, in the present study, on-call duty admission of aSAH patients tended to be associated with a better functional outcome at discharge; however, in-hospital mortality rates were similar and the difference in outcome was mitigated at mid-term follow-up. After adjustment for confounding variables, on-call duty admission was not independently associated with morbidity. Moreover, there was no significant delay of the start of surgery during on-call duty. The following considerations contradict a potential weekend effect in aSAH patients: Owing to a significant risk of aneurysm rebleeding and associated morbidity with each additional day of treatment delay, microsurgical clipping was performed within 24 h after admission both on weekdays and on weekends, which represents a key concept in SAH management [7]. Moreover, SAH patients are generally treated at specialized neurovascular units with standardized protocols. Upon admission, the patients are seen according to the same protocol regardless of the day and time of the week at which they present, which includes immediate admission to a neurointensive care unit and evaluation by an interdisciplinary team of neurosurgeons, neurointerventionalists, and ICU physicians. For these reasons, the care of patients with SAH has become standardized and the management is familiar to the health care staff. The idea that standardized treatment protocols can mitigate the weekend effect has been demonstrated for ischemic stroke patients: McKinney et al. observed a weekend effect on ischemic stroke among non-stroke centers; however, comprehensive stroke centers that follow standardized treatment protocols were not affected [25]. These findings support the concept that standard operating procedures improve patient outcome, in particular at weekends, when potentially less experienced health care workers are on duty. We further hypothesized that clipping during nighttime would be associated with a worse surgical performance, which could result in a poor outcome. Due to the human circadian rhythm, cognitive performance varies throughout the day and usually reaches its lowest point at night, possibly yielding reduced overall quality of patient management [29]. Moreover, technical and personnel resources are particularly restricted at night. Concerning neurosurgical procedures, Desai et al. reported increased morbidity and mortality rates among pediatric neurosurgical emergencies admitted during out-of-office hours [10]. Hirose et al. found that in-hospital mortality of emergency trauma patients was significantly higher during nighttime; however, there was no difference regarding weekdays and weekends [19]. In contrast, Rumalla et al. found no association between weekend admission and mortality among traumatic subdural hematoma patients [35].
Regarding aneurysm clipping, treatment-associated morbidity is mainly related to cerebral infarction, which can occur in 0.9% to 45.3% of cases after clipping [4,24,38,40]. The reasons for cerebral infarction include lengthy temporary clipping of the parent artery, occlusion of perforating arteries by improper clip placement and excessive brain retraction [2,30,33,37]. In our series, treatment-related infarction occurred in 21.7% and was comparable between the daytime and the nighttime group. In this context, there were no differences regarding temporary parent artery occlusion and vasospasm. Also, operation time was comparable between both groups, which might be considered an argument against surgeon fatigue at nighttime. These considerations are further supported by studies which demonstrated that sleep deprivation has no impact on surgeon performance [39,41]. However, ICG-VAG was less often used at nighttime, which may be explained by personal preferences and limited resources (e.g., inexperienced staff). Infarction rates were slightly higher in patients treated without ICG-VAG; however, this difference was not statistically significant.
Although treatment-related complications were similar between daytime and nighttime surgery, patients treated at nighttime had a higher rate of unfavorable outcome at discharge. However, this difference did not remain significant after multivariable adjustment. In conclusion, we could not demonstrate a statistically significant effect of on-call duty admission and microsurgical clipping at nighttime on in-hospital mortality and mid-term functional outcome. Although we observed a worse short-term functional outcome after nighttime surgery, there were no differences in complications, vasospasm and infarction rates, making a considerable weekend effect appear unlikely. Although we could not prove a "weekend effect" for the analyzed subset of aneurysms, there might be a "weekend effect" in other institutions that follow a different treatment regimen or in specific subsets of patients with diverging baseline characteristics. Further studies will be required to draw a definite conclusion on this subject.
Limitations
The study is limited by its retrospective, single-center design. The number of included patients was only moderate, and it is possible that some results would achieve statistical significance with greater statistical power. We report the outcome of patients from our neurovascular center, who underwent a standardized management protocol. Therefore, the results may not apply to other institutions with different treatment regimens. Moreover, we did not analyze aneurysm re-rupture, which needs to be considered when deciding to delay aneurysm embolization.
Conclusions
In this study, clipping during nighttime was associated with worse patient outcome at discharge. However, this effect did not remain significant after multivariable adjustment. Moreover, in-hospital mortality rates and procedural complications were not different between daytime and nighttime. In our neurovascular center, admission outside of regular working hours did not affect patient outcome, which is most probably due to the following of standardized treatment protocols.
Funding Open Access funding enabled and organized by Projekt DEAL.
Compliance with ethical standards
Conflict of interest The authors declare that they have no conflict of interest.
Ethical approval All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. For this type of study, formal consent is not required.
Informed consent Informed consent was obtained from all individual participants included in the study.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 2021-01-06T15:16:33.034Z | 2021-01-05T00:00:00.000 | {
"year": 2021,
"sha1": "f2899a7bbc449a13da6c6fa8375fe88cc77e5aa4",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00701-020-04689-9.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "f2899a7bbc449a13da6c6fa8375fe88cc77e5aa4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
53505012 | pes2o/s2orc | v3-fos-license | A mathematical study to control Guinea worm disease: a case study on Chad
ABSTRACT Global eradication of Guinea worm disease (GWD) is in the final stage, but a mysterious epidemic of the parasite in the dog population makes the elimination programme challenging. There is neither a vaccine nor an effective treatment against the disease, and therefore intervention strategies rely on current epidemiological understanding to control the spread of the disease. A novel mathematical model can predict future outbreaks and quantify the dissemination rates of control interventions. Given the lack of such models, a realistic mathematical model of GWD dynamics with a human population, dog population, copepod population and the worm larvae is proposed and analyzed. Considering case data from Chad, we calibrate the model and perform global sensitivity analysis of the basic reproduction number with respect to the control parameters and copepod consumption rates. Furthermore, we investigate the impact of three control interventions: awareness of humans, isolation of infected dogs and copepod clearance from contaminated water sources. We also address the impact of combined interventions, which leads to the conclusion that the combination of isolating infected dogs and treating the contaminated ponds is a plausible way to eliminate the burden of GWD from Chad.
Introduction
Guinea worm disease (GWD), also known as Dracunculiasis, is one of humanity's ancient scourges [24]. Individuals are infected by drinking water contaminated with copepods, which act as an intermediate host and the carrier of nematode larvae [19]. These nematodes affect the subcutaneous tissue as the adult female migrates through the human body, generally taking residence in the foot. If left untreated, the nematode will eject larvae when exposed to fresh water, which the host will do to alleviate the burning and itching caused by the worm; the lesion may also acquire a secondary infection if improperly cared for [16]. The pain from GWD can be debilitating, which is of great concern as outbreaks tend to occur at times of agricultural importance [18]. Since neither a vaccine nor an effective treatment is available for GWD, control strategies focus on the provision of clean water, isolation of infected dogs and changing people's behaviour. The infection is transmitted to copepods through ingestion of free-living worm larvae, and thus the cycle continues (see Figure 1).
As the Guinea Worm Eradication Programme progresses toward its ultimate goal of global eradication, ongoing efforts now focus on the remaining endemic countries: Chad, Ethiopia, Mali and South Sudan [14,17]. In Chad, a provisional total of 14 GWD cases was reported from January to October 2017, and 16 cases in the previous year [14]. In addition, GWD infections in dogs have been observed even more frequently in the same geographic area in Chad where most of the human cases have occurred, which counters the historical record, in which infections in dogs were rarely reported even when human infections were very common [5,23,31].
Mathematical modelling has the potential to analyse the mechanisms of transmission and the complexity of the epidemiological characteristics of an infectious disease, and to indicate new approaches to prevent and control future epidemics. Unfortunately, there are only a limited number of mathematical models for GWD dynamics [2,20,26]. Smith et al. [26] developed a mathematical model of GWD to evaluate the effectiveness of chlorination. They found that despite the theoretical potential of chlorination to achieve complete eradication of the disease, education is far more effective.
Although the aforementioned studies have produced useful insights on the transmission and control of GWD, none of them incorporated dog infection along with the copepod population explicitly. A goal of this study is to design and analyse a population-level model for GWD dynamics that incorporates the human population, dog population, copepods and the worm larvae. Another goal of this paper is to study the impacts of various control interventions, namely awareness campaigns, isolation of infected dogs and killing copepods in the affected areas. The rest of the paper is organized as follows: in the next section, the model is formulated. The dynamics of the disease-free system is studied in Section 3. In Section 4, the dynamical behaviour of the system without the dog population and control interventions is studied. The full model is analysed in Section 5. We calibrate the model in Section 6. To observe the effects of some important model parameters on the basic reproduction number, we perform global sensitivity analysis in Section 7. In Section 8, the effects of control interventions on the infected human populations are investigated. Finally, the paper ends with a concluding section.
Formulation of the model
Here, we propose a mathematical model for GWD that incorporates the human population, dog population, copepods and Guinea worm larvae. We divided the total human population into susceptible, exposed and infected classes. It is assumed that the susceptible human population, S_h(t), acquires the disease through contaminated drinking water, i.e., consumption of copepods that are infected with Guinea worm larvae. In addition, we assume no direct transmission of the disease between humans and dogs. The dynamics of humans and dogs are considered to be of SEIS type. Further, we have taken into account the effect of awareness in the population and hence divided the exposed population into two groups: one unaware, E_h^1, and one aware, E_h^2. Accordingly, the infected class of humans is of two kinds: unaware infectives, I_h^1, and aware infectives, I_h^2. Individuals in the I_h^1 class release the free larvae of Guinea worm into fresh water; the aware infected people do not do so. Infected individuals become susceptible again after the worm leaves the body. Thus, the total human population is N_h(t) = S_h(t) + E_h^1(t) + E_h^2(t) + I_h^1(t) + I_h^2(t). Next, we divided the dog population into three groups: susceptible, S_d(t), latently infected, E_d(t), and infected dogs, I_d(t). We assume that dogs acquire the infection indirectly through the consumption of fish or frogs contaminated with copepods that are infected with Guinea worm larvae. The infected dogs release Guinea worm larvae into fresh water. Therefore, the total dog population is N_d(t) = S_d(t) + E_d(t) + I_d(t). Furthermore, we assume that the copepod population is divided into susceptible copepods, S_c(t), and infected copepods, I_c(t). Due to the short life span of copepods, we assume that once infected, they remain infected for the rest of their lives. Let L(t) denote the concentration of Guinea worm larvae in the environment. The compartmental flow diagram is depicted in Figure 2. In view of the above considerations, the dynamics of GWD is governed by the following system of differential equations: All model parameters are assumed to be positive. The biological meanings of the parameters involved in system (1) are given in Table 1.
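System (1) itself did not survive text extraction. Purely to illustrate the compartmental structure just described, the following Python sketch integrates a simplified variant with a single exposed and infected class per host; every functional form and rate constant here is an assumption, not the authors' system (1):

import numpy as np
from scipy.integrate import odeint

def gwd(y, t, p):
    Sh, Eh, Ih, Sd, Ed, Id, Sc, Ic, L = y
    # humans: infection via infected copepods, recovery back to susceptible
    dSh = p['pi_h'] - p['b_h']*Sh*Ic - p['mu_h']*Sh + p['g_h']*Ih
    dEh = p['b_h']*Sh*Ic - (p['s_h'] + p['mu_h'])*Eh
    dIh = p['s_h']*Eh - (p['g_h'] + p['mu_h'])*Ih
    # dogs: same SEIS structure
    dSd = p['pi_d'] - p['b_d']*Sd*Ic - p['mu_d']*Sd + p['g_d']*Id
    dEd = p['b_d']*Sd*Ic - (p['s_d'] + p['mu_d'])*Ed
    dId = p['s_d']*Ed - (p['g_d'] + p['mu_d'])*Id
    # copepods: logistic growth, infection by free larvae, no recovery
    Nc = Sc + Ic
    dSc = p['r']*Sc*(1 - Nc/p['Kc']) - p['a']*Sc*L - p['mu_c']*Sc
    dIc = p['a']*Sc*L - p['mu_c']*Ic
    # free larvae shed by infected humans and dogs
    dL = p['l_h']*Ih + p['l_d']*Id - p['mu_L']*L
    return [dSh, dEh, dIh, dSd, dEd, dId, dSc, dIc, dL]

p = dict(pi_h=10, mu_h=0.002, b_h=1e-6, s_h=0.1, g_h=0.05,
         pi_d=5, mu_d=0.007, b_d=1e-6, s_d=0.1, g_d=0.05,
         r=0.5, Kc=1e5, a=1e-6, mu_c=0.05, l_h=50, l_d=50, mu_L=0.2)
y0 = [5000, 0, 10, 700, 0, 5, 9e4, 100, 0]
t = np.linspace(0, 365, 1000)
sol = odeint(gwd, y0, t, args=(p,))   # columns follow the order of y above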
The disease-free model
In the absence of GWD, the related control parameters, i.e., ρ, p and c, are assumed to be zero, and therefore system (1) reduces to the following subsystem:
Equilibrium analysis
When modelling infectious diseases, the most important issue that arises is whether the disease spread could attain an endemic level or could be eradicated. To have a better understanding of the dynamics of the disease, equilibrium and stability analysis is performed. System (2) exhibits two non-negative equilibria: E_0(π_h/μ_h, π_d/μ_d, 0) and E_1(π_h/μ_h, π_d/μ_d, S_c1). In equilibrium E_1, the value of S_c1 is given by The equilibrium E_0 is always feasible, and the equilibrium E_1 is feasible provided the following condition is satisfied:
Stability analysis
In this section, we perform the local stability analysis of the equilibria of system (2). This can be investigated by determining the sign of the eigenvalues of the Jacobian matrix evaluated at each equilibrium [25]. The Jacobian of system (2) is given by (4). Eigenvalues of the Jacobian matrix (4) evaluated at E_0 are real and given by Under the above considerations, we establish a local stability result for the equilibrium point E_0 of system (2) in terms of the intrinsic growth rate of copepods, r. From the above theorem, it is clear that if we consider r as a bifurcation parameter, then at r = r_c there is an exchange of stability between the equilibria E_0 and E_1. This is a clear indication of the presence of a transcritical bifurcation at r = r_c. The next step is to investigate the nature of the bifurcation involving E_0 at r = r_c. Observe that the eigenvalues of the matrix are given by Since η_3 = 0 is a simple zero eigenvalue and the other eigenvalues are real and negative, at r = r_c the equilibrium E_0 is non-hyperbolic and assumption (A1) of Theorem 4.1 in Castillo-Chavez and Song [8] is verified. Now, denote by w = (w_1, w_2, w_3)^T a right eigenvector associated with the zero eigenvalue η_3 = 0. To determine the components of w, we solve the following system of equations: to obtain w_1 = w_2 = 0 and w_3 = 1. Furthermore, the components of the left eigenvector are determined by solving the following system of equations: to obtain v = (0, 0, 1), so that w·v = 1. Now the coefficients a and b defined in Theorem 4.1 of [8] may be explicitly computed. Taking into account system (2), it follows that The previous considerations allow us to state the following theorem.

Theorem 3.2: Consider model (2) and let a and b be as given by (7), where a < 0 and b > 0. The local dynamics of system (2) around the equilibrium E_0 can be stated as follows: when r < r_c with r ≈ r_c, E_0 is locally asymptotically stable, and there exists a negative unstable equilibrium E_1; when r > r_c with r ≈ r_c, E_0 is unstable, and there exists a positive locally asymptotically stable equilibrium E_1.

Proof: It follows from [8], Theorem 4.1, pp. 373, and Remark 1, pp. 375.

Corollary 3.3: Consider model (2) and let a and b be as given by (7), where a < 0 and b > 0. At r = r_c, system (2) undergoes a supercritical (or forward) bifurcation.

Proof: It is a straightforward application of Theorem 3.2.
To show the occurrence of the transcritical bifurcation between equilibria E_0 and E_1, we use the following set of hypothetical parameter values: π_h = 23,399, μ_h = 0.0017, π_d = 417, μ_d = 0.0069, K_c = 100,000, φ_c = 200,000, β_h = 0.5, β_d = 0.5, and solve system (2) using the solver ODE15s in MATLAB 2012. The existence of a transcritical bifurcation between equilibria E_0 and E_1 is shown in Figure 3. From the figure, it can be seen that there is a threshold value of r below which the equilibrium E_0 is stable and the equilibrium E_1 is not feasible, and above which the equilibrium E_0 is unstable and the equilibrium E_1 is stable.
Evaluating the matrix J at the equilibrium E_1, two eigenvalues are clearly negative, and the third one is negative under the condition for the feasibility of the equilibrium E_1. Thus, the local stability behaviour of the equilibrium E_1 of model (2) can be stated in the following theorem.
Theorem 3.4: The equilibrium E_1, if feasible, is locally asymptotically stable.
Dynamical properties of model (1) in the absence of dog population and control interventions
In the absence of dog populations and control interventions, system (1) reduces to the following subsystem:
Positivity and boundedness of solutions
Basic reproduction number and stability of disease-free equilibria
The dynamics of the disease-free model (8) is characterized by the threshold parameter R_0s, which we refer to here as the 'basic reproduction number, the expected number of secondary cases produced in a completely susceptible population by a typical infective individual' for system (8) [9,29]. The basic reproduction number, R_0s, can be computed by seeking conditions under which a non-trivial equilibrium exists, or conditions for the existence of a transcritical bifurcation. It can also be computed using the next-generation operator approach, in which case the reproduction number R_0s is the spectral radius of the next-generation operator FV^{-1}, where F is the matrix of new infection terms and V is the matrix of transition terms. For system (8), the matrices F and V are given by Thus, for system (8), the basic reproduction number is given by Regarding the local stability of the disease-free equilibria E_0 and E_1 of system (8), we have the following theorem.

Theorem 4.2: For system (8), the disease-free equilibrium E_1 is locally asymptotically stable if R_0s < 1 and unstable if R_0s > 1.
Proof: Let M denote the Jacobian of system (8). Among the eigenvalues of M evaluated at the equilibrium E_0, five are clearly negative and one will be positive (or negative) if the equilibrium E_1 is feasible (not feasible). Therefore, the equilibrium E_0 is related to the equilibrium E_1 via a transcritical bifurcation. The matrix M evaluated at the equilibrium E_1 gives two negative eigenvalues, -μ_h and -r S_c1/K_c, and the remaining four eigenvalues are given by where J_ij are the values of M_ij evaluated at the equilibrium E_1. The corresponding characteristic equation is given by (11), where Thus, if R_0s^3 > 1, the equilibrium E_1 is unstable. Equation (11) can be written as Since equation (12) has four negative roots, namely J_22, J_33, J_55 and J_66, we have
Now, the above expression is positive in view of the signs of the roots of equation (12). Hence, if R_0s^3 < 1, the equilibrium E_1 is locally asymptotically stable.
Existence of endemic equilibrium
From the second equilibrium equation of (8), we obtain equation (13). Using equation (13) in the third equilibrium equation of (8) gives equation (14), and using equation (14) in the first equilibrium equation of (8) gives a corresponding expression. Adding the first three equilibrium equations of (8), then adding the fourth and fifth equilibrium equations of (8), and using the value N_h = π_h/μ_h in equation (17) and simplifying the terms, we obtain a quadratic equation (18) in N_c. Clearly, equation (18) has either two or no positive roots; let the two positive roots be N_c = f_i(I_c), i = 1, 2. Using equation (14) and the value N_c = f_i(I_c) in the last equilibrium equation of (8) gives equation (20). Now, using equation (20), we note the following properties of equation (21): for I_c = 0, equation (21) has two roots. The above facts guarantee the existence of a positive solution of equation (21) if R_0s > 1.
Remark 4.1:
In this section, we have studied the model without dog population and control interventions. As the dynamics of the human and dog populations are the same, it is worth noting that the model without the human population and control interventions has the same dynamical properties as model (8). Therefore, we omit the analysis of the model without human population and control interventions.
Positivity and boundedness of solutions
Lemma 5.1: The region of attraction for all solutions of system (1) initiating in the positive orthant is given by the following set [12,15].

Proof: System (1) can be rewritten in the compact form indicated above. The vector D = [π_h, 0, 0, 0, 0, π_d, 0, 0, 0, 0, 0]^T is positive. Note that C(X) has all off-diagonal entries non-negative, i.e., C(X) is a Metzler matrix for all X ∈ R^11_+; since D ≥ 0, system (1) is positively invariant in R^11_+ [1]. Therefore, any trajectory of system (1) starting from an initial state in R^11_+ remains trapped forever in R^11_+. Adding the first five equations of system (1), we obtain a bound on the total human population. By adding the equations for the S_d, E_d and I_d compartments of system (1), we obtain the analogous bound for dogs, and by similar arguments the same holds at any later time. By adding the equations for the S_c and I_c compartments of system (1), we obtain a bound on the copepod population. From the last equation of system (1), assume that Z_4 = max{(1/μ_L)(λ_1 π_h/μ_h + λ_2 π_d/μ_d), L(0)}; then 0 ≤ L ≤ Z_4. Therefore, all mathematically and biologically feasible solutions of system (1) enter this region, i.e., the region is attracting. Hence, it is now sufficient to study the dynamical properties of model (1) in this region.
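The Metzler property invoked in the proof is easy to verify numerically. Below is a minimal sketch with an illustrative matrix, not the actual C(X) of system (1):

```python
import numpy as np

def is_metzler(C):
    """Check that all off-diagonal entries of C are non-negative."""
    off_diag = C - np.diag(np.diag(C))
    return bool(np.all(off_diag >= 0))

# Illustrative 3x3 matrix (assumed values): negative diagonal is allowed,
# only the off-diagonal signs matter for the Metzler property.
C = np.array([[-0.3, 0.1, 0.0],
              [0.2, -0.5, 0.4],
              [0.0, 0.1, -0.2]])
print(is_metzler(C))  # True
```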
Basic reproduction number and stability of disease-free equilibria
We apply the next-generation operator method [29] to determine R_0 from system (1). The matrices F (of new infection terms) and V (of transition terms) are written down accordingly. Following [29], R_0 = ρ(FV^{-1}), where ρ is the spectral radius of the next-generation matrix FV^{-1}. Thus, from model (1), we obtain the corresponding expression for R_0. Following [29], the local stability of the disease-free equilibrium E1 of system (1) is given by the following theorem.

Theorem 5.2: For system (1), the disease-free equilibrium E1 is locally asymptotically stable if R_0 < 1 and unstable if R_0 > 1.
Thus, if the disease starts with a small influx of infected individuals (humans, dogs, copepods or Guinea worm larvae), then it will eventually die out of the population if R_0 < 1.
Existence of endemic equilibrium
From the fourth and fifth equilibrium equations of system (1), we obtain the values of E_h^1 and E_h^2 as in (25). Adding the second and third equilibrium equations of system (1) and using equation (25) gives (26); using equation (26) in the first equilibrium equation of system (1) gives (27). Using equations (25) and (27) in the second and third equilibrium equations of system (1), we obtain the values of I_h^1 and I_h^2 as in (28). From the seventh equilibrium equation of system (1) we obtain (29), and using equation (29) in the eighth equilibrium equation gives (30). From the sixth equilibrium equation of system (1) we obtain (31). Adding the ninth and tenth equilibrium equations of system (1), we obtain the quadratic (32) in N_c. Clearly, equation (32) has either two or no positive roots; let the two positive roots be N_c = f_i(I_c), i = 1, 2. Using equations (28) and (30), and the value N_c = f_i(I_c), in the last equilibrium equation of (1) gives (33). Now, from the tenth equilibrium equation of system (1), we have G(I_c) = 0, and we note the following properties of equation (34): putting y = N_c/I_c in equation (32) gives a quadratic in y. Suppose I_c* is the largest value of I_c for which equation (32) has a solution, with corresponding value N*. The above facts ensure that there exists at least one positive root of equation (34) if R_0 > 1.
Calibration
Monthly GWD case data for dog populations were considered for the period 2012 to 2016 [6]. Our study focuses on the GWD outbreak from January 2012 to December 2016, a period in which disease prevalence decreased in humans but increased in the dog population. We calibrate the copepod consumption rates by humans, β_h, and dogs, β_d, to match the GWD cases in Chad. We fit model (1) without control interventions to equilibrium to yield the human GWD cases and the infected dog cases in the year 2012. The equilibrium solutions are fitted to the GWD data using the built-in simplex algorithm (MATLAB R2017a) to minimize the sum of squares of the differences between the simulated indicators and the data. The minimization is conducted with 100 different starting points in parameter space, chosen using Latin Hypercube Sampling, to ensure consistency and uniqueness of the parameter estimates. The estimated parameters are given in Table 1. Further, to match the infected dog cases over the period 2012-2016, we estimated the dog recruitment rate, π_d, and the copepod consumption rate by dogs, β_d, using the nonlinear least squares fitting routine lsqnonlin in the Optimization Toolbox (MATLAB R2017a). The fitting is displayed in Figure 5. The initial conditions are chosen as the equilibrium solutions.
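The multi-start least-squares procedure can be sketched as follows in Python. The model function, parameter bounds and data below are placeholders standing in for the infected-dog output of system (1) and the Chad case data; only the fitting workflow (Latin Hypercube starting points feeding a bounded nonlinear least-squares solver) is illustrated.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import qmc

observed = np.array([5.0, 9.0, 13.0, 20.0, 30.0])   # illustrative yearly cases

def model_cases(theta):
    # Stand-in for the simulated infected-dog cases; theta = [pi_d, beta_d].
    pi_d, beta_d = theta
    t = np.arange(len(observed))
    return pi_d * np.exp(beta_d * t)

def residuals(theta):
    return model_cases(theta) - observed

lo, hi = np.array([0.1, 0.0]), np.array([10.0, 1.0])   # assumed bounds
starts = qmc.scale(qmc.LatinHypercube(d=2, seed=0).random(100), lo, hi)

best = min((least_squares(residuals, x0, bounds=(lo, hi)) for x0 in starts),
           key=lambda res: res.cost)
print("estimated (pi_d, beta_d):", best.x)
```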
Sensitivity analysis
In comparison with simply varying the parameters and observing the model outcome, the techniques of sensitivity analysis are mathematically more sophisticated. In the present case, we use a global sensitivity analysis technique following Marino et al. [21]. To see the effect of the control-related parameters ρ, c and p, and of the copepod consumption rates by humans (β_h) and dogs (β_d), we compute partial rank correlation coefficients (PRCCs) between these parameters and the basic reproduction number (R_0). The rest of the parameter values are the same as in Table 1. Nonlinear and monotone relationships were observed between the parameters under consideration and the response R_0. We draw 1000 samples from the biologically feasible regions of the parameters of interest using Latin Hypercube Sampling (LHS). The bar diagram of the PRCC values is depicted in Figure 6.
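For reference, a PRCC can be computed by rank-transforming the LHS samples and the output, removing the (rank-)linear effect of all other parameters by regression, and correlating the residuals. The sketch below implements exactly this; the sample matrix and the output function are toy assumptions, not the paper's parameter ranges or R_0 formula.

```python
import numpy as np
from scipy.stats import rankdata, pearsonr

def prcc(X, y):
    """Partial rank correlation of each column of X with output y."""
    Xr = np.column_stack([rankdata(col) for col in X.T])
    yr = rankdata(y)
    n, k = Xr.shape
    out = np.empty(k)
    for j in range(k):
        others = np.column_stack([np.ones(n), np.delete(Xr, j, axis=1)])
        # Residuals after regressing out the other parameters' ranks
        rx = Xr[:, j] - others @ np.linalg.lstsq(others, Xr[:, j], rcond=None)[0]
        ry = yr - others @ np.linalg.lstsq(others, yr, rcond=None)[0]
        out[j] = pearsonr(rx, ry)[0]
    return out

rng = np.random.default_rng(0)
X = rng.uniform(size=(1000, 3))           # columns stand in for beta_d, c, p
y = X[:, 0] / (1.0 + X[:, 1] + X[:, 2])   # toy monotone surrogate for R0
print(prcc(X, y))                          # expected signs: +, -, -
```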
The PRCC values of these parameters suggest that the copepod consumption rate by dogs (β_d) has the largest positive correlation with R_0, while the isolation rate of infected dogs (c) has the largest negative correlation with R_0. The clearance rate of copepods (p) also has a significant negative correlation with R_0. It is well known that a low value of R_0 increases the likelihood of eradicating GWD. These parameters are unlikely to change favourably on their own, so any external measure that reduces β_d and increases c and p should be considered in order to eliminate GWD from the community.
Furthermore, to investigate the effect of the most sensitive parameters on R_0, we draw the contour plot of R_0 with respect to the two controllable parameters β_d and c for model (1) (Figure 7). The contour plot shows that the epidemic potential of GWD can be brought below unity through interventions and aggressive efforts.
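A contour of this kind is straightforward to reproduce. In the sketch below, R_0 is a made-up surrogate expression that is increasing in β_d and decreasing in c, standing in for the paper's actual formula; the unity contour marks the epidemic threshold.

```python
import numpy as np
import matplotlib.pyplot as plt

beta_d = np.linspace(0.01, 1.0, 200)
c = np.linspace(0.01, 1.0, 200)
B, C = np.meshgrid(beta_d, c)
R0 = 2.0 * B / (0.5 + C)   # assumed illustrative expression, not model (1)

cs = plt.contour(B, C, R0, levels=[0.5, 1.0, 1.5, 2.0])
plt.clabel(cs, inline=True)
plt.xlabel("copepod consumption rate by dogs, beta_d")
plt.ylabel("isolation rate of infected dogs, c")
plt.title("Contours of a surrogate R0")
plt.show()
```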
Impact of control interventions
In this section, we discuss the reduction in infected humans achieved by varying the control parameters: the percentage of aware humans (ρ), the isolation rate of infected dogs (c) and the copepod removal rate (p).
(A) Increasing the percentage of aware humans: Health education relevant to GWD is given to ensure that most of the individuals in the endemic region adopt behavioural practices that can prevent and interrupt the transmission cycle. Practices include voluntary reporting of GWD cases and knowledge of the reward scheme, prevention of patients from entering drinking-water bodies, regular use of drinking water from improved water sources and, in the absence of such sources, filtering water before drinking [4]. Although filtration appears to be easy and effective, challenges remain in individual and household compliance with straining all unsafe drinking water before consumption and, more importantly, in the agricultural fields or when travelling. In regions below the poverty line, face-to-face communication appears to have been the most significant strategy for disseminating messages [27]. Behavioural changes have to be brought about in the community to achieve the required impact, which remains a challenge [4]. By increasing the percentage of aware humans, we mean that most individuals obtain information about the disease, and hence they will avoid contaminated water resources, and infected individuals will not eject worm larvae into any water source. It is noted that even if the number of aware individuals is very high, say ρ = 0.9, no noticeable change in the number of humans infected with GWD is observed (Figure 8(c)). One possible reason is that, no matter how large the aware fraction becomes, there is still a fraction (1 − ρ) contributing to disease transmission. On the other hand, endemicity will persist in the human population, the reservoir population and the vector population, because this strategy does not affect the dog and copepod populations.
(B) Increasing the isolation rate of infected dogs: Eberhard et al. [11] pointed out that infection in dogs is serving as one of the major driving force sustaining transmission in Chad, that an aberrant life cycle involving a paratenic host common to people and dogs is occurring, and that the cases in humans are sporadic and incidental. Recently, Molyneux and Sankara [22] suggested that Ministries of Health with the support of WHO and the Carter Center, must focus on interrupting transmission by the vigorous pursuit of copepod control, the containment of human cases and dog infections and through the application of what we know works. Our results suggest that dog management, i.e., isolation of the infected dogs from the society, leads to a big reduction in infected human cases (Figure 8(e,f)). Therefore, this intervention is very important to achieve the complete eradication of GWD from Chad. Despite the high efficacy of case reduction, this control is very complicated to apply [11].
(C) Controlling the copepods: This intervention consists of killing the copepods by applying a chemical called temephos [11]. When applied to unsafe drinking-water sources on a monthly basis, temephos is effective in killing the copepods. By applying this control, we can effectively reduce the contact rate of the Guinea worm with humans as well as dogs. Numerically, we checked the effect of copepod control at various levels (Figure 8(g-i)). One can easily see that this intervention can effectively reduce the number of infected humans. Note that intervention (B) appears to be slightly better than this control.
(D) Comparison of cases in 2017 after applying the control interventions: A provisional total of 26 cases of GWD was reported in 2017, among which 12 apparent cases were detected in Ethiopia and 14 cases in Chad [14]. To compare with the cases in 2017, we computed the number of cases in 2017 under the control interventions (see Figure 9). Owing to the application of various control interventions in Chad, it is observed that model (1) can reflect the 2017 GWD cases well for certain values of the control parameters.
Further, we evaluated the effects of all controls in Figure 10(a-c). We plotted the number of new GWD cases while varying the control parameters ρ, c and p. Looking at the figures, it is evident that copepod control is a highly effective way to reduce human infections. The percentage reductions in infected humans due to the application of the individual control strategies are given in Table 2. From the table, it is reinforced that the isolation rate of infected dogs is the most effective in terms of GWD case reduction.
(E) Combination of control interventions: We investigate the effects of the combination strategies (A,B), (B,C) and (A,C) by simulating the number of new GWD cases in 2018 (see Figure 10(d-f)). The contour lines represent the infected human populations. It can be inferred from the contour plots that the combination strategy (B,C) is the most effective in comparison with the others.
Discussion
In this paper, we proposed and analysed a mathematical model for GWD in order to give some insights into the eradication process of GWD in Chad. To estimate the important parameters of the model (1), we fit the system to equilibrium and found the consumption rates of copepods by humans, β h , and dogs, β d . Further, to match the infected dog cases in 2012-2016, we estimated the recruitment rate of dogs, π d , and the copepods consumption rate by dogs, β d . To observe the effect of the control parameters and copepods consumption rates by humans and dogs on the basic reproduction number, we performed global sensitivity analysis which gives us a clear idea of the important control parameters. The PRCCs of R 0 to these parameters show that the most critical parameter impacting R 0 is the copepods consumption rate by dogs, β d . In addition, the isolation rate of infected dogs, c, and the clearance rate of copepods, p, have significant impacts on R 0 and consequently on the control of GWD. We conclude that external measures decreasing R 0 should be encouraged.
Numerically we investigated effects of the three control parameters, ρ, c and p, on the infected human cases. It is shown that the cases in 2017 can be predicted by the model for some values of the control parameters. We found that isolation of infected dogs is the most effective as compared to other controls, Figure 8. The isolation of infected dogs requires a huge effort and sometimes it is hard to contain all of the infected dogs. Keeping this in mind, the clearance of copepods is a much simpler strategy to apply whose effect is very close to the effect of isolating dogs (see, Table 2). In addition, we studied the combined effects of the control strategies by predicting the number of infected human cases in 2018. The first three panels of Figure 10 show the similar kind of results. Figure 10 gives a clearer picture that the combination of isolating infected dogs and clearing the copepods leads to the reduction of the largest number of cases. Implies that this combination strategy will take huge effort as there may be difficulties in identifying which sources of surface water are potentially contaminated and need to be treated. In summary, to achieve the permanent eradication of worldwide GWD cases, the infections in dog population must be reduced. The affected countries should be effectively financed to isolate infected dogs and treat the contaminated ponds with temephos. We recommend that the health-care agencies must focus on containing the infected dogs as well as treating ponds. | 2018-11-01T18:46:31.932Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "fad6acbea1dbf013a8a5014417f222547347f81d",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/17513758.2018.1529829?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "4177b0a19722feec880e7d7fc60417ac5e0ea187",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
201135049 | pes2o/s2orc | v3-fos-license | Active House and user-friendly visualization of sensors’ monitored data: VELUXlab, a real cognitive and smart NZEB prototype
European standards have already set Nearly Zero Energy Buildings (NZEB) as the current and mandatory goal for the construction market. Thus, several design strategies have been developed in order to define the best practices towards NZEB targets: high performance of construction components, integrated with high energy-efficiency solutions for building systems. The Active House Vision and evaluative approach to buildings summarizes the current protocols, accounting for the three principles of Comfort, Energy and Environment to parametrically design and assess buildings until their "as-built" status. However, at that point of the design process, this evaluation relies mainly on design simulations, which do not properly consider the occupants' component, resulting in a gap between forecast and real performances. Since predictive models of users-building interactions are underway, the paper focuses on the building operation stage of existing and validated NZEBuildings, addressing the performance gap as related to the final users' mismanagement of the building system (envelope and installations). Referring to cognitive buildings as sensor-equipped and smart Active Houses, the method proposes a user-friendly visualization of (big) real data as a possible solution for final-user training and awareness. This approach has been applied to the case study of VELUXlab, a real building prototype of Politecnico di Milano, already validated as the first Italian NZEB inside a university campus and as an "as-built" Active House. The outcomes of the paper enhance the potential of current knowledge and design practice to achieve a sustainable and healthier built environment, looking at the future but working today.
Introduction
The most recent European Directive [1] has once again confirmed the warning scenario for our building sector in terms of its environmental impact, with 40% of energy consumption and 36% of GHG emissions related to the European construction market [2] [3]. These numbers have established the need for the Nearly Zero Energy Building (NZEB) definition [4], and its prescription as a mandatory building requirement by 2020, representing, therefore, a common and shared strategy for our built environment towards the UN's Sustainable Development Goals.
However, the NZEB concept [2] generally frames holistic strategies and best practices, relying, during the design phase and at the "as-built" status, on static and dynamic simulations to verify the related building performances. This process reflects mainly the interaction between the outdoor/indoor climate conditions and the construction components, overlooking an important factor for the global building behavior: the final user. Retrieved data have shown that the highest levels of energy consumption are registered during the operational phase [5] [6] [7], when the "occupants-building interaction" plays a fundamental role, showing a gap between the forecast and measured performances of the building global system. This gap can be attributed to different factors: within the simulation models of the design phase, a mis-consideration of (i) the occupancy profile or (ii) the real users' behavioral drivers that could describe the realistic users-building-environment interaction [8]; or (iii) a mismanagement of the building during its operation. Since predictive models of users-building interactions are already underway¹ [8], this paper focuses on the building operation stage of existing and validated NZEBuildings, addressing the performance gap as related to the final users' mismanagement of the building system (envelope and installations). As in the automotive sector, in construction too the real user, different from the virtual/forecast one, needs a guide towards efficient use of the real product in order to guarantee the predicted and assured performances.
Moreover, for NZEBs, which are defined by high performance of construction components integrated with high energy-efficiency solutions for building installations, the performance gap could represent an even bigger issue, precisely because of the high levels that their definition requires [8].
The proposed building design and management method aims at a possible solution: at first, the application of a user-centered design strategy and practice that has already led to NZEB examples, the Active House (AH)² approach [9]; then, the user-friendly visualization of data retrieved and mined live from the monitoring survey of a real NZEB's operational performances. The proposal is applied to a real case study, already validated as an NZEB and "as-built" Active House: VELUXlab, a cognitive smart building prototype of Politecnico di Milano. The application to a real building prototype guarantees both testing the effectiveness of the proposal and further adopting a reverse-engineering design strategy as a practical way to better qualify/describe the environment-building-user relation.
Methodology
In construction practice, even the most advanced practice oriented towards the NZEB definition, the performance gap is one of the real issues to face. This paper aims to propose a final-user training approach, through a user-friendly and smart visualization of data, in order to close the part of the gap that derives from occupants' mismanagement of building components and systems.
Since the focus is the operation phase of the building process, the method had to refer to real constructions, here considered as sensor-equipped, smart and cognitive buildings, able to collect quantitative data about their behavior and interaction with the occupants, and even to actively respond to them. This mechanism is already possible when considering Active Houses, since these demo-buildings are already designed with a user orientation; moreover, the AH evaluation approach already defines a tool, the AH Radar, that gives a visual interpretation of (simulated) data, according to the three principles of Comfort, Energy and Environment [15].
Within the Active House evaluation pattern, therefore, this study focuses on specific physical quantities that are used to define the building behavior in simulated scenarios and that can provide real-time information about internal comfort when related to monitored environments. Thus, inside these smart and cognitive buildings, the installed devices measure:
• CO2 concentration [ppm];
• Air temperature [°C]: indoor air temperature³;
• Relative humidity [%].
The data collected from a wireless network of sensors are stored in a central server, mined, and visualized in three different ways of dialogue with the final user: (i) a live AH Radar plot (figure 1), computed in real time according to the AH Specification [12]; (ii) a live web-dashboard (figure 2); and (iii) a live app-dashboard (figure 3), for a dynamic and real-time interaction.
¹ The referenced paper [8] describes the impact of the occupants' component through a set of different simulation scenarios, taking into account several combinations of users' action drivers; the simulated environment is exactly VELUXlab, the same case study of this work, and the two papers are presented consecutively.
² The Active House [10] Vision represents a well-established example of an NZEB-driven design guideline [11] [12] that summarizes the most recent and widespread protocols for building evaluation [15]. It is based on three principles, Comfort, Energy and Environment, which gather quantitative parameters able to describe the building behavior.
³ The authors would like to underline that this work considers the air temperature instead of the operative temperature (much more appropriate when discussing users' comfort) because directly measured data were not available through the installed sensors; however, for the scope of the paper, evaluating the air temperature values actually retrieved by the installed sensors has been sufficient.
Figure 1. An example of the Active House Radar [12] visualization, as derived from monitoring data (in light green); in the background, the "as-built" calculation plot (in light red), obtained within a simulated environment. The radar grid reflects the AH evaluation method [10] [11] [12]. This dynamic method can be applied not only to the operational phase but also to the design phase of the building process, thereby giving information about the building for its entire life cycle. The dynamic radar, indeed, is a suitable way to visualize the different design options that are considered from the first concept to the "as-built" definition (figure 4), allowing the design team to evaluate them from a holistic perspective, through the principles and criteria of the AH Vision. Finally, this approach brings up the case study of VELUXlab, a real smart building prototype of Politecnico di Milano, already validated as the first Italian NZEB inside a university campus [13] and an "as-built" Active House [14].
The case study: VELUXlab, the first Italian NZEB and Active House of a University Campus
VELUXlab (table 1 and figure 5) is the first Italian NZEB located inside a university campus: a retrofitted demo-house realized by VELUX and conceived as an Active House already in 2012 [13]. In 2017, it was also labeled as the first Italian Active House certified "as-built" [9] [14]. As it is, one of the most important features of the building is the possibility to remotely control the opening/closing of skylights and of the shading system (solar blinds and roller shutters), both via schedules and moment by moment, according to users' needs for visual comfort, indoor thermal environment and fresh air circulation. Whenever the envelope needs to have an adiabatic behavior, the installations integrate the global response, through the HVAC systems (activated according to monitored CO2 concentration) and the heating/cooling system, which is turned off/on according to predetermined setback points and set points [8]. For this experiment, the lab has been equipped (as illustrated in figure 6) with three sensor devices, the AmbiNodes by Leapcraft, interconnected via a Wi-Fi network and able to measure and record all the data necessary to understand the impact of building management on indoor climate and comfort perception (as previously listed).
Figure 6. The localization of the Leapcraft sensors in the eastern wing of the lab, as three of the many devices operating inside VELUXlab [16]. The latter are used to attest the reliability of the AmbiNodes devices during this experiment (credit: Politecnico di Milano).
The location of the sensors across the room traces the different areas where tenants could most affect the building behavior, in terms of Comfort parameters (CO2 concentration, daylight, and air temperature regulation by window openings):
• L1 is located in the southern space, over the meeting desk under three openable skylights;
• L2 is over the workstations, in the middle of the space, where people mainly stay all day long, working at a computer station;
• L3 takes a higher place, above the second meeting desk and workstations.
According to this configuration, VELUXlab is defined as a truly smart and cognitive building, able to read the occupants' presence, to interact actively with them, through the derivation and re-elaboration of measured data, and to adapt itself (the activation of envelope and system components) to users' modifications of the indoor climate.
Results and discussions
The so-defined monitoring campaign started in December 2018 and it is still ongoing.
The visual outputs of the monitored scenario allow information to be obtained from the data about: • the live situation and performances, through the live dashboards on web/app devices, alerting the final user whenever any single parameter goes out of the defined range for indoor comfort quality (figure 3); • the historical trends, whose data are collected in a cloud database and can be mined by the user (figure 7), the building manager and/or the designer, to understand the impact of the human factor on building behavior; • the AH holistic evaluation of the building, also assessing the impact of the occupants' actions⁴ on building components on all the parameters of the AH Radar (figure 8).
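As a sketch of the out-of-range alerting just described, the snippet below checks a sensor reading against comfort ranges. The specific thresholds (e.g. CO2 below 1000 ppm) and the function name are assumptions for illustration, not the ranges actually configured in the VELUXlab dashboards.

```python
# Assumed comfort ranges per monitored quantity: (low, high).
COMFORT_RANGES = {
    "co2_ppm": (400.0, 1000.0),
    "air_temperature_c": (20.0, 26.0),
    "relative_humidity_pct": (30.0, 60.0),
}

def check_reading(sensor_id: str, reading: dict) -> list[str]:
    """Return an alert message for every parameter outside its comfort range."""
    alerts = []
    for key, (low, high) in COMFORT_RANGES.items():
        value = reading.get(key)
        if value is not None and not (low <= value <= high):
            alerts.append(f"{sensor_id}: {key} = {value} outside [{low}, {high}]")
    return alerts

# Example: a reading from the L2 node above the workstations
print(check_reading("L2", {"co2_ppm": 1250.0,
                           "air_temperature_c": 23.5,
                           "relative_humidity_pct": 41.0}))
```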
At first, the selected graphs on figure 7 show that, after a settlement period for all the sensors, the recorded data are almost cyclic, mirroring the cyclic use of the building during its operation.
Moreover, considering the high and attested soundproofing performance of the building envelope [13], the Average Noise graph can act as a coding of the users' presence inside the room, where lower values stand for an empty indoor space. The same inflections are then easily visible on the upper graphs, the Average Temperature and Average Light data plots. In particular, on the first graph of figure 7, the temperature peaks show the free contribution to heat gains when people are using the lab spaces. The latter graph, instead, shows the set schedule for the automatic operation of the shading system (opening at 9 a.m. and closing at 8 p.m. every day, except at the weekend, when the lab is closed), which has a direct impact on indoor environmental conditions. In the end, the comparison between different stages of the Active House Radar plot (freezing the dynamic live graph at three different moments, as in figure 8) brings up further interesting considerations, focusing especially on the variations of the Thermal Environment (TE) and Indoor Air Quality (IAQ) parameters: • in Dec 2018, the higher TE level reveals the adiabatic strategy of the building, which relies mainly on the performance of the construction components during the heating season; at this time, the users' interaction with the building is scheduled, and imposed, to be minimal (no window operation, and the shading system always open for solar gains); • in March 2019, the outdoor climate conditions require users to interact with the building, since the outdoor temperature values allow direct thermal exchange between indoor and outdoor via natural ventilation through openable windows and skylights. By controlling the opening/closing of the windows and the related shading systems, the occupants start to take control of the building's functioning, according to the Comfort drivers of a good thermal environment and fresh air. In this case, a larger (than expected from simulations) consumption of energy for system operation guarantees the high levels of TE and IAQ, reflecting the users' low readiness to interact with the building in the most proper and energy-efficient way, and causing the energy-efficiency performance gap; • in April 2019, the last AH Radar shows instead that the users are more confident with smarter operation, balancing energy consumption against the indoor comfort conditions, as reported in the operation schedules of the window remote control. Here, they are indeed able to fully exploit the high performance of the NZEB, when considering the Primary Energy Demand⁵.
Conclusions and further developments
This analysis is a preliminary step in the investigation of the interaction between users and building systems, especially focusing on the operation of real NZEB prototypes. However, a few considerations have to be made, such as the need for an efficient data-mining process as a basic step for managing this volume of data, and the importance of protecting the final users' privacy while collecting data about their attitudes and behavior.
In the end, beyond the current purposes of the research, this application has also given the opportunity to easily monitor the building performances, checking the reliability of other, older sensors that were installed during the construction phase on the envelope and HVAC system. Indeed, the newer devices have revealed that the CO2 concentration was always at levels much higher than those registered by the sensors controlling the system operation, causing a worse indoor air quality condition and a misuse of the HVAC system itself. | 2019-08-22T20:24:44.913Z | 2019-07-30T00:00:00.000 | {
"year": 2019,
"sha1": "a5ea8691cb0fb4c811f1abef6566a1f6910524b7",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/296/1/012042",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "132d8514cadeccb572dcd0467bdc6729498c23a1",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
} |
267978928 | pes2o/s2orc | v3-fos-license | ANALYSIS OF STUDENTS ERROR IN WORKING ON STORY PROBLEMS BASED ON NEWMAN'S CRITERIA IN CLASS XI MIA 3 MAN 3
The background of this research is that many students are still insufficiently interested and motivated in learning mathematics, so that many students make mistakes in solving mathematical word problems. The purpose of this research is to identify the types of mistakes made by students in working on word problems based on Newman's error criteria and to determine the causes of students' mistakes in doing word problems. The research method used is the descriptive method with a qualitative approach. The sample of this study was 24 students of class XI MIA 3. The research instruments were tests and interviews. The test results were analyzed based on Newman's criteria, after which students were interviewed based on the types of mistakes they made. The results of this study indicate that the types of errors made by students based on Newman's criteria consist of 5 errors, namely Reading errors, Comprehension errors, Transformation errors, Process Skill errors and Endcoding errors. The biggest mistake was made at the stage of writing the final answer, namely 46.67%, while the smallest error was made at the stage of understanding the questions, namely 5.33%.
INTRODUCTION
Mathematics is a field of study that plays an important role in education. This can be seen from the study of mathematics from elementary school through middle school and even to college. Besides that, it is emphasized in Permendiknas No. 22 of 2006 that mathematics subjects need to be given to all students, starting from elementary school, to equip students with the ability to think logically, analytically, systematically, critically and creatively, as well as with the ability to cooperate. According to Suherman in (Komariyah, et al, 2018), "Mathematics taught in primary and secondary education is school mathematics".
Based on Permendikbud Number 22 of 2016, the objectives of learning mathematics are: (a) understanding mathematical concepts, describing the interrelationships between mathematical concepts, and applying concepts or algorithms efficiently, flexibly, accurately and precisely in solving problems; (b) reasoning about the patterns and properties of mathematics, developing or manipulating mathematics in constructing arguments, formulating evidence, or describing mathematical arguments and statements; (c) solving mathematical problems, which includes the ability to understand problems, construct mathematical solution models, complete mathematical models, and provide appropriate solutions; and (d) communicating arguments or ideas with diagrams, tables, symbols, or other media in order to clarify the problem or situation. The National Council of Teachers of Mathematics (NCTM) formulates the objectives of learning mathematics in schools as follows: (1) mathematical communication; (2) mathematical reasoning; (3) problem solving; (4) mathematical connections; and (5) mathematical representation. It can be concluded that learning mathematics aims to hone students' mathematical abilities in solving problems related to mathematics by thinking critically and logically and being able to apply them in everyday life.
The achievement of learning goals in mathematics can be seen from the learning outcomes of students. To achieve good learning outcomes, students are required to be able to solve questions correctly. But in reality, students still find it difficult to solve math problems, resulting in students making mistakes in solving them. The mistakes that students usually make in solving math problems concern understanding of concepts, which affects arithmetic operations and thus has an impact on the results of the mathematical problem solving done by students.
In the learning process, students are often faced with problems in order to test the abilities of each student. One form of problem that is often given is the word problem. According to (Hartini, 2008), word problems are a form of question that presents problems of everyday life in the form of narratives or stories. In fact, students think that word problems are difficult, and mistakes often occur in solving them. According to Wahyuni in (Marlina, 2013), the low ability of students in working on word problems can be seen from the many mistakes students make when working on them. According to Lusiana in (Suciati & Wahyuni, 2018), the mistakes made by students can result in a decrease in students' scores in mathematics. In order to achieve optimal learning outcomes, teachers need to know the mistakes students often make in working on math word problems. According to (Lestari et al, 2018), an analysis of the mistakes made by students is needed to solve problems and can help students in solving math word problems. Mathematics is a very important subject to study because mathematics is inseparable from everyday life.
One way to analyze the mistakes made by students in solving math word problems is by using analysis based on Newman's criteria. Newman's criteria suggest five stages that can help analyze errors made while solving math word problems, namely: reading errors, comprehension errors, transformation errors, process skill errors, and final answer writing errors (encoding errors).
Based on the results of observations made from September 27 to October 8, 2021 at MAN 3 Padang City, it can be seen that students are not very active in the process of learning mathematics. Students are busy with their own activities that are not related to mathematics, and they only listen to, pay attention to, and take notes on the material presented by the teacher. This can be seen from one of the students' daily test answers in Figure 1.
Figure 1. Example of a Student's Daily Test Answer
In Figure 1, it can be seen that students have not been able to solve a problem correctly. Based on the students' answers above, students appear unable to produce good and correct answers when solving word problems.
Based on the results of observations on June 29, 2022 at MAN 3 Padang, there are still many students who are less interested and motivated in learning mathematics, and this causes student achievement to be less than optimal. This is evident from the results of the daily tests of students, which are still under the minimum completeness criteria (KKM) set by the school, namely 80.00. Of classes X MIA 1-4, class X MIA 3 had the most scores below the KKM. With the same number of students in each class, namely 30 people, only 12 students of X MIA 3 scored above the KKM. This proves that students still make many mistakes in solving word problems. From these follow-up observations, the students had moved up to class XI MIA, and class XI MIA 3 was then selected as the research subject by the researchers.
Based on the results of interviews with the compulsory mathematics teacher of class XI MIA 3 MAN 3 Padang, information was obtained that some students make many mistakes when working on word problems: students experience a lot of confusion in understanding the questions and connecting them with the material they have learned, and many also answer carelessly because they do not understand the meaning of the question. This results in students making many mistakes when working on word problems. Based on this information, it is necessary to analyze student errors to find out, identify, and describe clearly what mistakes students make. By identifying the mistakes made by students, the writer can find out the types of errors, such as mistakes in writing answers. To identify the types of student errors in working on the questions, the author uses the Newman criteria as a guide, to make it easier to identify the types of errors made at each step of the students' answers. If mistakes in solving problems are left unaddressed, the learning objectives cannot be achieved. Newman's criteria are one of the guidelines for analyzing the types of errors made by students in working on math problems.
Based on this description, the writer feels the need to identify student errors in working on word problems, which the author will examine in a study entitled: "Analysis of Student Errors in Working on Story Problems Based on Newman's Criteria in Class XI MIA 3 MAN 3 Padang"
RESEARCH METHODS
The type of research used in this study is descriptive research with a quantitative approach, which aims to reveal a phenomenon as it is. This research was carried out in the odd semester of the 2022/2023 school year at MAN 3 Padang. This study aims to identify the types of mistakes made by students in working on word problems based on Newman's error criteria and to find out the causes of students' mistakes in working on word problems. The subjects of this study were students of class XI MIA 3. The research instruments were tests and interviews.
In this study, the data analyzed were the students' work on the test instrument in the form of word problems. The analysis of the test answers was carried out to find out where students made mistakes in working on the word problems. In analyzing errors based on Newman's criteria, the indicators presented in the following table are needed:
RESEARCH RESULTS
The research was conducted on class XI MIA 3 MAN 3 Padang students in the even semester of the 2022/2023 school year, with a total of 23 students. The questions given at the time of the test aim to find out the types of mistakes made by students in solving the questions. The test consisted of 2 questions and was carried out over 45 minutes, attended by 23 students.
Errors in the results of students' test answers on the derivative function material are grouped into 5 categories based on Newman's error indicators:
DISCUSSION
The research was carried out on 25 students of class XI MIA 3 MAN 3 Padang by providing 2 essay questions on the derivative function material, which the students completed and whose answers were analyzed based on Newman's error indicators.
Student error on question number 1
Based on the error analysis that had been carried out, 3 students with different types and locations of errors, namely students ADS, FZ and AK, were taken for interviews. The following are the results of the analysis of student errors in working on the test questions. Based on the results of the interviews conducted, information was obtained that students experienced (1) Transformation errors, namely students were not careful in writing formulas, and (2) Endcoding errors, namely students made mistakes in doing calculations and so were wrong in drawing conclusions, because they did not notice the error.
Figure 5. Student AK's answer sheet
Figure 5 shows that student AK made several mistakes. First, a Comprehension error, namely a mistake in determining what was asked in the question, where the student wrote the question incorrectly in its mathematical form. Next, the student incorrectly determined the formula and was wrong in determining the steps for completion. Finally, the student was wrong in operating the calculations, making mistakes in the multiplication operations, and did not draw conclusions.
Based on the results of the interviews conducted, information was obtained that students experienced (1) Comprehension errors, namely students were not careful in determining what was asked in the questions, (2) Transformation errors, because students did not pay attention to the results of the answers, and (3) Process Skill errors, namely students were wrong in the calculation operations.
Based on the results of the analysis of the data obtained, it can be concluded that, for question number 1, student ADS made (Comprehension errors), (Transformation errors) and (Endcoding errors); student FZ made (Transformation errors) and (Endcoding errors); and student AK made (Comprehension errors), (Transformation errors) and (Process Skill errors).
Student error in question number 2
Based on the error analysis that had been carried out, 3 students with different types and locations of errors, namely students MAF, NSA and VP, were taken for interviews. The following is an analysis of student errors in working on the test questions.
Figure 6. MAF student answer sheet
Figure 6 shows that student MAF made mistakes, namely Reading errors, where the student misread important words or information in the question, and Endcoding errors, mistakes in writing the final answer, where a step error affected the final result of the answer itself and no conclusion was drawn.
Based on the results of the interviews conducted, information was obtained that students experienced (1) Reading errors, namely students misread important words or information in the questions, and (2) Endcoding errors, namely students did not know whether the calculations and final results they had produced were correct or not.
Figure 7 shows that student NSA made several mistakes, namely a Transformation error, where NSA was wrong in determining the formula used and did not write down the complete formula, and an Endcoding error, being wrong in the multiplication in the final result of the answer and not drawing conclusions.
Based on the results of the interviews conducted, information was obtained that students experienced (1) Transformation errors, namely students were not careful in writing formulas, and (2) Endcoding errors, namely students were wrong in multiplication and were not used to drawing conclusions. The interviews also revealed that students experienced (1) Process Skill errors and (2) Endcoding errors, namely students did not know whether the calculations and final results they had produced were right or wrong.
Based on the results of the analysis of the data obtained, it can be concluded that, for question number 2, student MAF made (Reading errors) and (Endcoding errors); student NSA made (Transformation errors) and (Endcoding errors); and student VP made (Process Skill errors) and (Endcoding errors).
From the data analysis of the 2 questions with the students described above, each error type and the factors that cause students to make mistakes can be described as follows.
1. Errors in reading the questions (Reading error). At this stage, reading errors occur when students misread terms, symbols, words or important information in the problem. This is because students are not careful in reading the questions. This is in accordance with research conducted by (Rahmawati & Permata, 2018), which revealed that in reading errors students still experience errors in interpreting sentences correctly and errors in reading symbols and important information in questions. This is also in line with (Daswarman, 2020), which stated that reading errors are due to students not paying close attention to the questions.
2. Errors in understanding the problem (Comprehension error)
Errors in understanding occur because students do not understand the information and cannot determine what is known and what is asked in the problem. This is in line with research (Rahmawati & Zhanty, 2019), which states that student errors occur because the process of interpreting the given information into mathematical expressions is not quite right. This is also in line with research conducted by (Darmawan et al, 2018), which revealed that errors occur because students cannot state what is known and what is asked in the questions.
3. Errors in the transformation process (Transformation error)
Errors in the transformation process occur because students incorrectly determine the formula used and incorrectly determine the steps for completion. This is in accordance with (Magfirah et al, 2019), where transformation errors occur because students cannot determine the appropriate formula. This is also in line with (Dinnullah et al, 2019), which stated that students are wrong and unable to determine the right steps in solving the questions given.
4. Errors in process skills (Process Skill error)
Errors in process skills occur because students are wrong in operating calculations. This is in line with (Sumadiasa, 2014), which states that student carelessness causes errors in arithmetic operations. This is also in accordance with (Haryati et al, 2016), which states that process skill errors are errors in performing calculations, such as errors in multiplication or addition and errors in performing algebraic operations.
5. Errors in writing the final answer (Endcoding error)
This error occurs because the student is wrong or imprecise in writing the final answer. This is in line with (Sudiono, 2017), who stated that students make mistakes in the final answer if they are able to carry out the solution correctly but do not write the conclusion of the final answer or do not conclude the final answer appropriately. Furthermore, (Santoso et al, 2017) particularly regretted mistakes in writing the final answer, because the students had succeeded in reaching the completion or data-processing stage but failed to write the final solution.
CONCLUSION
Based on the results of data analysis and discussion, it can be concluded that, in solving math problems on the derivative function material, the errors based on Newman's error analysis consist of 5 types, namely errors in reading the questions (Reading error), errors in understanding the questions (Comprehension error), errors in the transformation process (Transformation error), errors in process skills (Process Skill error) and errors in writing the final answer (Endcoding error). The biggest mistake was made at the stage of writing the final answer, namely 46.67%, while the smallest error was made at the stage of understanding the questions, namely 5.33%.
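As a quick arithmetic check, the headline percentages can be reproduced from per-type tallies. In the sketch below, the counts are back-calculated placeholders chosen so that they match the reported figures under an assumed denominator of 75 responses; they are not the study's raw data.

```python
# Assumed denominator: 75 responses in total (placeholder assumption).
N_RESPONSES = 75
counts = {
    "Reading error (RE)":        13,
    "Comprehension error (CE)":   4,
    "Transformation error (TE)": 27,
    "Process Skill error (PE)":  13,
    "Endcoding error (EE)":      35,
}
for error_type, n in counts.items():
    print(f"{error_type:30s} {100.0 * n / N_RESPONSES:6.2f}%")
# Prints 17.33, 5.33, 36.00, 17.33 and 46.67 percent, matching the text.
```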
Based on the results of the interviews conducted, information was obtained that students experienced errors based on Newman's criteria, as described below.
1. Reading errors: students are wrong and careless in reading important words or information in the problem.
2. Comprehension errors: students are careless, forgetful and confused in writing down and determining what is known and asked in the questions because they are in a hurry.
3. Transformation errors: students do not pay attention to the formula that has been written; they are careless, forgetful, doubtful and confused in using the right formula; in addition, they forget, are confused about, or do not know the steps for solving the problem.
4. Process Skill errors: students have not mastered addition and multiplication operations, and are careless and in a hurry in carrying out the calculation process.
5. Endcoding errors: these result from earlier errors in the calculation process and from students rushing when writing the final answer.
Figure 3. ADS student answer sheet. Figure 4. FZ student answer sheet.
Figure 3 shows that student ADS made several mistakes, namely a Comprehension error, where the student misunderstood the important information given in the question, and a Transformation error, where the student was wrong in determining the formula and in the solving steps. Based on the results of the interviews conducted, information was obtained that these students experienced (1) Comprehension errors, namely confusion in writing what was asked in the questions, (2) Transformation errors, namely writing the wrong formula, (3) Process Skill errors, and (4) Endcoding errors, namely not knowing whether the calculations and final results were right or wrong.
Figure 7. NSA student answer sheet
Table 4. Percentage of Error Types on the Final Test Questions (columns: Error Type, Question 1, Question 2, Percentage; the row labels begin with Reading errors (RE)).
Based on Table 4, the results of calculating the percentage of errors show that errors in reading the questions (Reading error) reached 17.33%, errors in understanding the questions (Comprehension error) 5.33%, errors in the transformation process (Transformation error) 36%, errors in process skills (Process Skill error) 17.33%, and errors in writing the final answer (Endcoding error) 46.67%. | 2024-02-27T17:57:55.313Z | 2024-01-03T00:00:00.000 | {
"year": 2024,
"sha1": "344fba1286635be8586f9b5ba9bb91ccc4d9d4a8",
"oa_license": "CCBYSA",
"oa_url": "https://jurnal.ar-raniry.ac.id/index.php/alkhawarizmi/article/download/19744/8999",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "001813ce57ebbe13c23bbd414269b75729de274e",
"s2fieldsofstudy": [
"Mathematics",
"Education"
],
"extfieldsofstudy": []
} |
241294144 | pes2o/s2orc | v3-fos-license | Traffic Pattern Analysis from Object Oriented Perspective
Knowledge representation is a widely known and applied method for scrutinizing huge databases in order to derive new information. Data extraction techniques are used in almost every field to retrieve hidden information in order to reduce real-world complexity. In this paper, the issues involved in traffic consequences are analyzed. The semantic net representation and inferences are used to highlight the density associated with each object. The characterized knowledge is used to derive the finest assertions using AND-OR graphs. The assertions improve the intelligibility of predictions of traffic occurrence across different age groups of people. The knowledge depiction techniques present the problem from an object-oriented viewpoint, which outlines the state and behavior of a thing directly related to the stated goal. Objectives are broken into sub-goals based on the functionality allied with the object. The mandatory hypothesis needs to be framed and verified once the individual task is applied. In this paper, a case study of traffic patterns is gauged together with the hazards associated with it.
Congestion occurs at both small and large scales of road usage. Mitigation measures should first involve the identification of congestion [10]. Inter-vehicle communication was introduced in order to reduce this condition; it is a concept for assessing the arrival time at the terminal through a bottleneck condition. A traffic simulator like "NETSTREAM" is used for this purpose [11].
Time distribution can diminish the catastrophic condition of gridlock [12]. The bottleneck condition can also be controlled by certain personal factors that each individual must take care of; this is possible through personal, well-defined flextime [13]. Due to the growing economy, most people own vehicles, and this is a major cause of critical gridlock conditions. Meanwhile, vehicle sharing can reduce these gridlock-related problems [14]. Congestion also depends mainly on land: where the land area is vast, traffic is low compared with a narrow land space. Real-time mapping of traffic monitored in Beijing revealed this dependence on land area [15] [16] [17].
Existing Road Designs:
Different structures already exist to demonstrate roadway patterns according to the demands of the respective cities. The most popular road designs are adopted mainly in areas where vehicle density is high. The most common merits and demerits are listed below to assess the effectiveness of each road structure.
Merits of Existing Road Design:
Rectangular blocks can be divided into smaller rectangular wedges for further construction. The design is highly suitable for city roads and is easy to maintain and extend. It decreases the level of congestion at the prime blockage location, and traffic on one side does not affect the alternate side. Severe crashes are essentially eliminated because vehicles move in the forward direction.
De-Merits of Existing Road Design:
The design is not very convenient because of the intersections between vehicles. Intersections can be especially challenging for aged persons who are driving. Traffic lighting must be adequate to control driving speed and time management. These drawbacks should be addressed when designing the road layout.
Traffic toys:
Each object type (vehicle) has its own behavior and state. The behavior of an object centers on driving, that is, the movement of persons or things from one place to another. The state of an object is defined as start, drive, wait, and stop. Which state a vehicle is in depends on the current human requirement. The behavior of an object is used to categorize vehicle types as restricted and non-restricted. Non-restricted vehicles such as 2-wheelers, 3-wheelers, and 4-wheelers are privileged to use the road facility under the traffic regulations.
Restricted vehicles such as 6-wheelers, 12-wheelers, and heavy-load vehicles are constrained in their use of the public facility in order to avoid disruption to the public. While driving, the frame of mind of a person (Osp) can change dynamically. The public may exhibit two extreme expressions, as listed below; a minimal object sketch follows the list.
a) Highly spirited
b) Low spirited
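To make the object-oriented view concrete, the following minimal Python sketch (ours, not from the paper; the class and attribute names are illustrative assumptions) models a vehicle object with the state and behavior described above, including the restricted/non-restricted categorization by wheel count:

```python
from enum import Enum

class State(Enum):
    START = "start"
    DRIVE = "drive"
    WAIT = "wait"
    STOP = "stop"

class Vehicle:
    """A traffic object with a state (start/drive/wait/stop) and behavior."""

    def __init__(self, wheels: int):
        self.wheels = wheels
        self.state = State.STOP

    @property
    def restricted(self) -> bool:
        # 2-, 3- and 4-wheelers are non-restricted; 6-, 12-wheelers and
        # heavier vehicles are restricted on public roads.
        return self.wheels > 4

    def drive(self):
        # Behavior: movement from one place to another.
        self.state = State.DRIVE

car, truck = Vehicle(4), Vehicle(12)
car.drive()
print(car.state, car.restricted)      # State.DRIVE False
print(truck.state, truck.restricted)  # State.STOP True
```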
Knowledge Representation using Semantic Net:
A semantic network is a knowledge-base structure that represents semantic relations between the nodes of the network. Each node is characterized by its properties and is connected to other nodes by the relationships that exist between them. A semantic net is categorized as directed or undirected according to how it epitomizes the semantic relations among the interconnected nodes. The first step in framing a semantic net is to decide the problem-solving direction, a crucial choice based on the following: a) the nature of the problem (P), and b) the properties of the rule set (PRs). The traffic condition is then stated as follows: the instance of occurrence (Ti) of the traffic condition depends on the mobility of the vehicles (V1 to Vn) in a specific period of time t.
Ti = D1 + D2 + ... + Dn, where the traffic occurrence happens only if the sum of the densities reaches the maximum capacity and an overflow condition arises.
Growth in all sectors raises the demand to use one's own vehicle. Saturation occurs when the traffic volume or modal split generates a demand for space greater than the available street capacity; this point is commonly termed "SATURATION". External factors that push people toward a sophisticated life lead every individual to want to own a vehicle. This individual behavior results in an increasing number of vehicles, and as all types of vehicle objects increase, the density rises and overflows the maximum capacity of the specific road path.
c) Inference: The links established between the nodes (Ni) represent the relationships (Ri) that exist between them. Hidden inferences are extracted through intersection search or inheritance techniques.
d) Semantic Net Partition
The constructed semantic net is decomposed into various regions called spaces. Each space can consist of groups of nodes and arcs, and the result is denoted a partitioned network.
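As an illustration of this idea (our sketch; the node names and the adjacency-list representation are assumptions, not taken from the paper), a semantic net can be held as a directed graph whose edges carry relation labels, with spaces realized as subsets of nodes:

```python
# Directed, relation-labelled semantic net as an adjacency list.
semantic_net = {
    "Vehicle": [("uses", "Road"), ("has_state", "Drive")],
    "Road":    [("has_property", "Capacity")],
    "Traffic": [("caused_by", "Vehicle"), ("exceeds", "Capacity")],
}

# Partition the net into spaces (groups of nodes and arcs).
spaces = {
    "objects":    {"Vehicle", "Road"},
    "conditions": {"Traffic", "Capacity", "Drive"},
}

def relations_from(node):
    """Inheritance-style lookup: follow the outgoing labelled arcs of a node."""
    return semantic_net.get(node, [])

for rel, target in relations_from("Traffic"):
    print(f"Traffic --{rel}--> {target}")
```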
Knowledge Representation using AND-OR Graph
Basically, real-world problems are defined with one or more major goals. Owing to their complexity, the goals are decomposed into sub-goals with specific objectives. The problems are analyzed in terms of modular parts, named objects, which explain the detailed behavior and functionality associated with them. Traffic has a high density of vehicles.
a) Initial Fact:
General statement using backward mapping.
Assertions:
Knowledge of traffic rules (Tr) should be passed from the present generation to future generations by all possible means. Since vintage times, the primitive structure of a vehicle has not changed: although the designs and other features have changed, bikes are still driven on two wheels and cars on four.
Fig.: Increase in vehicle rate
There is a huge increase in traffic due to the drastic increase in the number of vehicles. It was from those early, golden days that one of the most important elements of daily life, "TRAFFIC RULES", was created.
Inter Vehicular Communication:
This inter-vehicular communication (IVC) system is used to assist drivers on the road in order to provide a faster and safer ride. These systems are bound to GPS and sensors. Time, accuracy, and security are the main requirements of IVC. The vehicular collision warning communication (VCWC) protocol is a way to caution vehicles when an anomalous situation arises so that they can stop before congestion forms. Vehicles operate in two states: i) Active
ii) Passive
The passive state is the normal condition, in which the normal status of the vehicles can be traced.
The active state occurs only when a problematic situation arises; the vehicle then starts to send emergency warning signals (EWS). There are three types of EWS, categorized based on their priorities.
i) EWS from the vehicle (V) that is at stake, which always emits emergency warning signals because it is given the highest priority.
ii) Forwarded EWS, which gives a beacon signal to the approaching vehicles.
iii) Normal EWS, which gives out normal signals to the approaching vehicles.
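The priority scheme can be made concrete with a short Python sketch (ours; the message fields and priority numbers are illustrative assumptions, since the paper does not specify the VCWC packet format):

```python
import heapq

# Lower number = higher priority, per the three EWS categories above.
PRIORITY = {"at_stake": 0, "forwarded": 1, "normal": 2}

def send(queue, kind, vehicle_id):
    """Queue an emergency warning signal (EWS) for broadcast."""
    heapq.heappush(queue, (PRIORITY[kind], vehicle_id, kind))

queue = []
send(queue, "normal", "V3")
send(queue, "at_stake", "V1")   # the vehicle at stake pre-empts the others
send(queue, "forwarded", "V2")

while queue:
    prio, vid, kind = heapq.heappop(queue)
    print(f"broadcast {kind} EWS from {vid} (priority {prio})")
```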
Fig.: Without EWS
The above picture depicts the braking time interval taken by the approaching cars.
Fig.: With EWS
The above picture depicts the braking time interval taken by the approaching cars with EWS.
II. CONCLUSION
Evidence representation through knowledge extraction plays a dynamic role in describing practicable solutions to the traffic-pattern problems that exist in crowded cities. This study focused on problems that necessitate intelligent decision making and on the processing tasks involved in managing the objects that cause traffic. Among the combined analyses of the various objects, the literature review highlights the most promising solutions for the respective objects. The knowledge representation techniques are used to view the problem from an object-oriented perspective, which outlines the state and behavior of an individual object directly related to the stated goal. Fact examinations were carried out to guarantee the quality of the facts under review. The universal observation declarations are represented as facts arising from the application domain. The planning processes were discussed together with their prompting factors.
Nirmal L. completed his schooling at Marudham Higher Secondary School, completed advanced programming in the C language at Bharathiyar University with 89%, and completed Tritiya Sopan in the Bharat Scouts and Guides. He attended a national-level workshop on virtual-reality development using the Unity-3D game engine and C#, and presented a paper ("A perlustration on usage of soft enunterate technique in crop annex") at a national conference on intelligent learning and computing. | 2020-02-20T09:02:18.780Z | 2020-01-30T00:00:00.000 | {
"year": 2020,
"sha1": "eb4c82e2541447bd7af2749f4ffd1e76f2033826",
"oa_license": null,
"oa_url": "https://doi.org/10.35940/ijrte.c6508.018520",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "b8ae2beaae75fdcc6b50023dc48e3493a9a63118",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
251683986 | pes2o/s2orc | v3-fos-license | Neuromuscular monitoring during general anaesthesia: a review of current national and international guidelines
Background The extent to which neuromuscular monitoring is included in professional anaesthesia society guidelines is unclear. Our aim was to comprehensively review neuromuscular monitoring guidelines published by these societies. Methods National societies were identified using the World Federation of Societies of Anaesthesiologists' member list and further manual searches were undertaken to identify multinational societies and specialist medical colleges. A web search and secondary literature search were conducted to locate guidelines for monitoring during anaesthesia. The income of each nation or group of nations was determined using the World Bank classification. Results Forty guidelines were identified. Of 38 nations or classifiable groups of nations, 25 (66%) were high-income nations and 13 (34%) were middle-income nations. Neuromuscular monitoring was mentioned in 36 (90%) of the 40 guidelines. Availability of neuromuscular monitoring was mentioned in 17 (47%) guidelines (mandated in eight [47%] and recommended in nine [53%]). Use of neuromuscular monitoring was mentioned in 26 (72%) guidelines (mandated in three [12%] and recommended in 23 [88%]). Quantitative neuromuscular monitoring was specified in nine (25%) of the guidelines, with the type of monitoring unspecified in the remaining 27 (75%) of the 36 guidelines. Quantitative monitoring was only mandated in one guideline, and this was only when monitoring equipment was available. Conclusions We identified a gap in the availability of professional anaesthesia society neuromuscular monitoring guidelines, particularly in middle- and low-income nations. Recommendations about availability, use and type of monitoring varied among guidelines. An effort to improve the availability and consistency of guidelines is required.
Despite increasing availability of shorter acting neuromuscular blocking drugs, effective reversal agents, and neuromuscular monitoring, residual neuromuscular block is common 1 and may lead to complications, poor patient experience, and increased cost. 2 Evidence supports the effectiveness of quantitative neuromuscular monitoring 3 and experts consistently recommend quantitative neuromuscular monitoring for all patients receiving neuromuscular blocking drugs. 4-6 Despite these recommendations, neuromuscular monitoring is inconsistently applied, with surveys from the USA, 7 Europe, 7,8 China, 9 Australia, and New Zealand 10 revealing inconsistent availability and use of monitors in practice.
Guidelines published by national and multinational professional societies are powerful tools for promoting safe practice. 6 They provide standards against which individuals and health services are assessed and are critical to raising standards in middle- and low-income nations. For example, the World Federation of Societies of Anaesthesiologists (WFSA) published a monitoring guideline, 11 which was endorsed by all its member societies and which underpins its Lifebox campaign to bring pulse oximetry to all patients having anaesthesia. 12 National guidelines for neuromuscular monitoring could result in similar improvements in patient safety.
A recent narrative review sampled published monitoring guidelines and highlighted the inadequacies and inconsistencies of their recommendations. 13 However, a comprehensive analysis of global monitoring guidelines is not available. Our aim therefore was to identify all monitoring guidelines published by national and multinational professional anaesthesia societies and to assess them for their requirements for availability, use, or both of neuromuscular monitoring and the type of monitoring recommended.
Definitions
We used the definitions offered by an expert panel in a recent guideline. 6 Subjective (or qualitative) neuromuscular monitoring involves observing clinical signs such as the 5-s head lift or handgrip strength or stimulating a peripheral nerve and observing the muscular response by visual or tactile means. Objective (or quantitative) neuromuscular monitoring involves stimulating a peripheral nerve and measuring the response using acceleromyography or another technology. 6 Subjective (qualitative) neuromuscular monitoring cannot reliably assess train-of-four ratios >0.4, so objective (quantitative) neuromuscular monitoring is highly recommended. 14
Eligibility criteria
We defined a guideline as any document or webpage titled as a guideline, manual, policy, recommendation, requirement, standard, or statement. Guidelines for monitoring during anaesthesia in general, or neuromuscular block alone, were included. Only guidelines intended for medically trained anaesthesia providers were included. Guidelines intended for other medical practitioners (e.g. emergency medicine physicians) and non-medically trained anaesthesia providers (e.g. nurse anaesthetists) were not included. Professional anaesthesia societies were defined as associations, boards, colleges, faculties, federations, and societies established to train, represent, or both, anaesthesiologists. National and multinational professional societies were included. There were no language exclusions.
Data collection
The WFSA list of member societies was used as a starting point for this study. 15 We also searched for multinational professional societies and specialist medical colleges. Medline and EMBASE were also systematically searched using relevant keywords for national or multinational monitoring guidelines. A manual search of each society's website was conducted. Online translation tools (e.g. DeepL Translator [www.deepl.com] and DocTranslator [www.onlinedoctranslator]) were used to navigate non-English websites and translate non-English documents.
For each society, the details of any published monitoring guideline were collected.

Results

We identified national societies, multinational societies (including the WFSA itself), and nine specialist medical colleges (Supplementary Table S1). Data from 150 societies were analysed (Fig 1). Three member societies were excluded as their websites were only accessible by members. Forty professional societies (26%) published monitoring guidelines. The professional societies were in Africa (n=2), Asia (n=9), Europe (n=18), global (n=1), Middle East (n=2), North America (n=2), Oceania (n=1), and South America (n=5). Thirty-three (82.5%) were WFSA member societies, two (5%) were groups of member societies, four (10%) were specialist medical colleges, and one (2.5%) was a collaboration between a member society and a specialist medical college. Of 38 nations or assessable groups of nations with World Bank classifications, 66% were high-income nations and 34% were middle-income nations.
Neuromuscular monitoring was mentioned in 36 (90%) of the 40 guidelines. Availability of neuromuscular monitoring was mentioned in 17 (47%) guidelines (mandated in eight [47%] and recommended in nine [53%]). Use of neuromuscular monitoring was mentioned in 26 (72%) guidelines (mandated in three [12%] and recommended in 23 [88%]). Seven (19%) guidelines mentioned both availability and use, 10 (28%) mentioned availability only, and 19 (53%) mentioned use only. Quantitative neuromuscular monitoring was specified in nine (25%) of the guidelines and unspecified in the remaining 27 (75%) guidelines. Use of quantitative monitoring was mandated in only one guideline, and this was only for situations where the necessary equipment was available. Universal availability was not mandated.
Discussion
We identified a gap in the availability of professional anaesthesia society neuromuscular monitoring guidelines, in high-, middle-, and low-income nations. Recommendations about availability, use, and type of monitoring varied widely, with only three guidelines mandating use and only one mandating quantitative neuromuscular monitoring. An effort to improve the availability and consistency of guidelines is required.
The gap in the availability of neuromuscular monitoring guidelines between professional societies in higher- and lower-income nations is not unexpected and could be attributed to the financial and human resource costs of developing and publishing guidelines, a recognition that local hospitals may be unable to provide and maintain the necessary equipment, or both. 17,18 To overcome this gap, the World Health Organisation (WHO) and WFSA published standards for safe anaesthesia, including recommendations for neuromuscular monitoring. 11 It was not clear from our review how many professional societies without their own guidelines have adopted the WHO/WFSA guideline, but this would be an excellent temporary or permanent solution. Overcoming the lack of suitable equipment is a greater challenge. The WFSA has successfully implemented pulse oximetry in middle- and lower-income nations through its Lifebox campaign. 12 A similar campaign focusing on neuromuscular monitoring is possible, as the equipment is relatively inexpensive and simple to use, and is likely to prevent costly postoperative complications associated with residual neuromuscular block. 18 Recommendations about availability, use, and type of monitoring varied widely between guidelines. No guideline was completely aligned with expert opinion that quantitative neuromuscular monitoring should be used in all patients receiving neuromuscular blocking drugs. 3,6,17 The reluctance of clinicians to embrace universal quantitative neuromuscular monitoring is well known and may be related to workload and erroneous perceptions of unreliability and no benefit. 19 However, the reasons for lack of alignment by professional societies are unclear, as there is ample evidence that quantitative neuromuscular monitoring is more effective than qualitative monitoring in preventing residual neuromuscular block. 3,17 The strength of our work is that it was a comprehensive survey of monitoring guidelines of professional anaesthesia societies, using a pre-planned evaluation of recommendations about availability and use of neuromuscular monitoring. We conducted a systematic web-based search for guidelines of societies affiliated with the WFSA, multinational societies, and specialist medical colleges, but we may have missed guidelines of other relevant organisations or guidelines that were not posted on the internet. We may also have missed those societies that promoted use of the WHO/WFSA guideline to their members. We were also unable to access society websites that were exclusive to their members. Finally, the online translation services we used may have provided imperfect translations.
In conclusion, we identified a gap in the availability of professional society neuromuscular monitoring guidelines, particularly in middle- and low-income nations. Recommendations about availability, use, and type of monitoring varied widely among guidelines. An effort to improve the availability and consistency of guidelines is required. | 2022-08-20T15:04:36.151Z | 2022-08-18T00:00:00.000 | {
"year": 2022,
"sha1": "595e2bcab7a8ac1a74a582ce5edc14cb0e785c08",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "PubMedCentral",
"pdf_hash": "1fdb19f338b588de5fb036d113f9ced3c22ffa45",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
265040134 | pes2o/s2orc | v3-fos-license | Automatic quality assessment for 2D fetal sonographic standard plane based on multitask learning
Abstract The quality control of fetal sonographic (FS) images is essential for correct biometric measurements and fetal anomaly diagnosis. However, quality control must be performed by professional sonographers and is often labor-intensive. To solve this problem, we propose an automatic image quality assessment scheme based on multitask learning to assist in FS image quality control. An essential criterion for FS image quality control is that all the essential anatomical structures in the section should appear full and remarkable with a clear boundary. Therefore, our scheme aims to identify those essential anatomical structures to judge whether an FS image is a standard image, which is achieved by 3 convolutional neural networks. The Feature Extraction Network extracts deep-level features of FS images. Based on the extracted features, the Class Prediction Network determines whether a structure meets the standard and the Region Proposal Network identifies its position. The scheme has been applied to 3 types of fetal sections: the head, abdominal, and heart sections. The experimental results show that our method can make a quality assessment of an FS image in less than a second. Our method also achieves competitive performance in both segmentation and diagnosis compared with state-of-the-art methods.
Background
Fetal sonographic (FS) examinations are widely applied in clinical settings due to their noninvasive nature, reduced cost, and real-time acquisition. [1] FS examinations consist of first-, second-, and third-trimester examinations and limited examinations, [2] which cover a range of critical inspections such as evaluation of a suspected ectopic pregnancy [3,4] and confirmation of the presence of an intrauterine pregnancy. [5][6][7] The screening and evaluation of fetal anatomy are critical during the second- and third-trimester examinations. The screening is usually assessed by ultrasound after approximately 18 weeks' gestational (menstrual) age. According to a survey, [8] neonatal mortality in the United States in 2016 was 5.9 deaths per 1000 live births, and birth defects were the leading cause of infant deaths, accounting for 20% of all infant deaths. Besides, congenital disabilities occur in 1 in every 33 babies (about 3% of all babies) born in the United States each year. In this case, the screening and evaluation of fetal anomalies provide crucial information to families prior to the anticipated birth of their child on diagnosis, underlying etiology, and potential treatment options, which can greatly improve the survival rate of the fetus. However, the physiological evaluation of fetal anomalies requires well-trained and experienced sonographers to obtain standard planes. Although a detailed quality-control guideline was developed for the evaluation of standard planes, [8] the accuracy of the measurements is highly dependent on the operator's training, skill, and experience. According to a study, [8] intraobserver and interobserver variability exist in routine practice, and inconsistent image quality can lead to variances in specific anatomic structures captured by different operators. Furthermore, in areas where medical conditions are lagging, there is a lack of well-trained doctors, which makes FS examinations impossible to perform. To this end, automatic approaches for FS image quality assessment are needed to ensure that the image is captured as required by guidelines and to provide accurate and reproducible fetal biometric measurements. [9]
To obtain standard planes and assess the quality of FS images, all the essential anatomical structures in the imaging should appear full and remarkable with clear boundaries. [2] Each medical section has different essential structures. In our research, we consider 3 medical sections: the heart section, the head section, and the abdominal section. The essential structures corresponding to these sections are given in Table 1. The list of essential anatomical structures used to evaluate the image quality is defined by the guideline [2] and further refined by 2 senior radiologists with more than 10 years of experience in FS examination at the West China Second Hospital Sichuan University, Chengdu, China. A comparison of standard and nonstandard planes is illustrated in Figure 1.
There are various challenges in the automatic quality control of FS images. As illustrated in Figure 2, the main challenges can be divided into 3 types: first, the image usually suffers from noise and shadowing effects; second, similar anatomical structures can be confused due to the low resolution of the images; and third, the fetal position during scanning is unstable, which causes rotation of some anatomical structures. The first type of challenge can only be solved by using more advanced scanning machines, but the remaining 2 challenges can be tackled by a more scientific approach. Specifically, the purpose of our research can be summarized as follows: (1) propose an automatic FS image quality control framework for the segmentation and classification of the 2-dimensional fetal heart standard plane that is highly robust against the interference of image rotation and similar structures, with a segmentation speed fast enough to fully meet clinical requirements; (2) further improve the accuracy of detection and classification compared with state-of-the-art methods by using many recent advanced object detection technologies; and (3) generalize the framework so that it can be well applied to other standard planes.
Related work
In recent years, deep learning techniques have been widely applied in many medical imaging fields due to their stability and efficiency, such as anatomical object detection and segmentation [10][11][12] and brain abnormality segmentation. [13,14] Accordingly, many intelligent automatic diagnostic techniques have been developed. The authors of [15] proposed a novel framework to diagnose fetal anomalies using ultrasound images. The framework adopts a U-Net architecture with the Hough transformation to segment the abdominal region, and a multistage convolutional neural network (CNN) is then designed to extract the hidden features of FS images. The experiments show it outperforms other CNN-based approaches. [15] Lin et al [16] proposed a multitask CNN framework to address the problem of standard plane detection and quality assessment of fetal head ultrasound images. Under this framework, they introduced prior clinical and statistical knowledge to further reduce the false detection rate. The detection speed of this method is quite fast, and the result achieves promising performance compared with state-of-the-art methods. [16] Xu et al [17] proposed an integrated learning framework based on deep learning to simultaneously perform view diagnosis and landmark detection of the structures in fetal abdominal ultrasound images. The automatic framework achieved a diagnosis accuracy better than that of clinical experts, and it also reduced landmark-based measurement errors. [17] Wu et al proposed a computerized FS image quality assessment scheme to assist quality control in the clinical obstetric examination of the fetal abdominal region. This method utilizes local phase features along with the original fetal abdominal ultrasound images as input to the neural network. The proposed scheme achieved competitive performance in both view diagnosis and region localization. [18] Chang et al [19] proposed an automatic mid-sagittal plane (MSP) assessment method for categorizing 3D fetal ultrasound images. This scheme also analyzes the relationships between the resulting MSP assessments and several factors, including image quality and fetal conditions. It achieves a high correct rate in MSP detection. Kumar and Sriram proposed an automatic method for fetal abdomen scan-plane identification based on 3 critical anatomical landmarks: the spine, stomach, and vein. In their approach, a Biometry Suitability Index (BSI) is proposed to judge whether the scan plane can be used for biometry based on the detected anatomical landmarks. The results of the proposed method over video sequences were closely similar to clinical experts' assessments of scan-plane quality for biometry. [20] Baumgartner et al [21] proposed a novel framework based on convolutional neural networks to automatically detect 13 standard fetal views in freehand 2D ultrasound data and provide localization of the anatomical structures through a bounding box. A notable innovation is that the network learns to localize the target anatomy using weak supervision based on image-level labels only. [21] Namburete et al [22] proposed a multitask, fully convolutional neural network framework to address the problem of 3D fetal brain localization, alignment to a referential coordinate system, and structural segmentation. This method optimizes the network by learning features shared within the input data belonging to the correlated tasks, and it achieves a high brain overlap rate and low eye localization error.
[22] However, there are no existing automatic quality control methods for fetal heart planes, and the detection accuracy of existing methods on other planes is relatively low due to outdated neural network designs. Therefore, it is desirable to propose a more efficient framework that can not only provide an accurate clinical assessment of the fetal heart plane but also increase segmentation accuracy in other planes.
Methods
The framework of our method is illustrated in Figure 3. First, the original image is smoothed by a Gaussian filter and input to the feature extraction network (FEN). Second, the FEN extracts deep-level features of the image with a convolutional neural network and feeds them to the Region Proposal Network (RPN) and the Class Prediction Network (CPN), respectively. The CPN then judges whether the organs meet the standard and predicts their class, while the RPN locates the positions of the essential organs with the help of a feature pyramid network. Lastly, the two networks combine their information and output the final result. In this section, we briefly introduce the network structure and then elaborate on the feature extraction, region of interest (ROI) localization, and organ diagnosis in detail. Our study was approved by the Ethics Committee of the West China Second Hospital Sichuan University.
Feature extraction network
In the feature extraction network, we make many improvements compared with traditional CNN-based approaches: a convolutional neural network is used as the thematic framework, and many state-of-the-art deep learning techniques, such as the relation module and the spatial pyramid pooling (SPP) layer, are integrated into the framework to further increase feature extraction efficiency. The CNN has unique advantages in speech recognition and image processing with its special structure of local weight sharing, which can greatly reduce the number of parameters and improve recognition accuracy. [23][24][25] A CNN typically consists of pairs of convolutional and average pooling layers and fully connected (FC) layers. In a convolutional layer, several output feature maps are obtained by convolution between the input layer and a kernel. Specifically, suppose f_m^n denotes the m-th output feature map in layer n, f_k^(n-1) denotes the k-th feature map in layer n-1, and W_m^n denotes the kernel generating that feature map; then f_m^n = relu(Σ_k f_k^(n-1) * W_m^n + b^n), where b^n is the bias term in the n-th layer and relu denotes the rectified linear unit, defined as relu(x) = max(x, 0). It is also worth mentioning that we use global average pooling (GAP) instead of local pooling for the pooling layers. The aim is to use GAP to replace the FC layer, which regularizes the structure of the entire network to prevent overfitting. [26] The settings of the convolutional layers are shown in Table 2.
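A minimal PyTorch sketch of this kind of backbone (ours, for illustration; it follows the Table 2 settings of 3x3 kernels with stride 2 and channel depths 128-2048, but the paper's exact FEN architecture is not fully specified):

```python
import torch
import torch.nn as nn

class FEN(nn.Module):
    """Feature extractor: five 3x3/stride-2 conv stages (cf. Table 2) + GAP."""
    def __init__(self, in_ch=1, depths=(128, 256, 512, 1024, 2048)):
        super().__init__()
        layers, prev = [], in_ch
        for d in depths:
            layers += [nn.Conv2d(prev, d, kernel_size=3, stride=2, padding=1),
                       nn.ReLU(inplace=True)]
            prev = d
        self.body = nn.Sequential(*layers)
        self.gap = nn.AdaptiveAvgPool2d(1)  # global average pooling replaces FC

    def forward(self, x):
        feat = self.body(x)                  # deep feature maps for RPN/CPN
        return feat, self.gap(feat).flatten(1)

x = torch.randn(1, 1, 256, 256)              # a smoothed single-channel FS image
feat, vec = FEN()(x)
print(feat.shape, vec.shape)                 # [1, 2048, 8, 8] and [1, 2048]
```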
To fully utilize the relevant features between objects and further improve segmentation accuracy, we introduce the relation module presented by Hu et al. [27] Specifically, the geometry weight between objects m and n is first defined as w_G^mn = max{0, W_G · e_G(f_G^m, f_G^n)}, where f_G^m and f_G^n are geometric features and e_G is a dimension-lifting transformation using concatenation. After that, the appearance weight is defined as w_A^mn = dot(W_K · f_A^m, W_Q · f_A^n) / sqrt(d_k), where W_K and W_Q are the pixel weights from the previous network. Then the relation weight indicating the impact from other objects is computed as w^mn = w_G^mn · exp(w_A^mn) / Σ_k [w_G^kn · exp(w_A^kn)]. Lastly, the relation feature of the whole object set with respect to the n-th object is defined as f_R(n) = Σ_m w^mn · (W_V · f_A^m). This module achieves great performance in instance recognition and duplicate removal, which increases the segmentation accuracy significantly.
Figure 3. The framework of our method. We train the network end-to-end to ensure the best performance. The framework contains 3 sections: the Feature Extraction Network (FEN), the Region Proposal Network (RPN), and the Class Prediction Network (CPN). The FEN helps to extract the deep-level features of the image with the help of the relation module and the Spatial Pyramid Pooling (SPP) layer, and its output is the input to the RPN and CPN. The RPN locates the positions of essential structures based on the anchors generated by the Feature Pyramid Network (FPN), and the CPN judges and classifies the structures. The final output is a quality assessment of each essential structure and its location.
The SPP layer we use here denotes the SPP layer presented by He et al. [28] Specifically, the response map is divided into 1 × 1 (pyramid base), 2 × 2 (lower middle of the pyramid), 4 × 4 (higher middle of the pyramid), and 16 × 16 (pyramid top) submaps, and max pooling is performed on each separately. A problem with a traditional CNN for feature extraction is the strict limit on the size of the input image: the FC layer must complete the final classification and regression tasks, and since the number of neurons in the FC layer is fixed, the input image to the network must also have a fixed size. Generally, there are 2 ways of fixing the input image size, cropping and warping, but these operations either cause the cropped area not to cover the entire target or introduce image distortion; thus, applying SPP is necessary. The SPP network also contributes to multi-size feature extraction and is highly tolerant of target deformation.
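The following PyTorch sketch (ours; the pyramid levels follow the text above, although He et al.'s original SPP used different level sizes) shows how such a pooling pyramid yields a fixed-length vector from variable-sized feature maps:

```python
import torch
import torch.nn as nn

def spp(feat, levels=(1, 2, 4, 16)):
    """Spatial pyramid pooling: max-pool feat to each grid size, then concat."""
    n, c = feat.shape[:2]
    pooled = [nn.functional.adaptive_max_pool2d(feat, g).view(n, c * g * g)
              for g in levels]
    return torch.cat(pooled, dim=1)  # fixed length regardless of input size

for h in (32, 48):                   # variable input sizes
    out = spp(torch.randn(1, 256, h, h))
    print(out.shape)                 # always [1, 256 * (1 + 4 + 16 + 256)]
```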
The design of the Bottle Net borrows the idea of residual networks. [29] A common problem with deep networks is that vanishing or exploding gradients tend to occur as depth increases. The main reason for this phenomenon is the overfitting problem caused by the loss of information: since each convolutional or pooling layer downsamples the image, a lossy compression effect is produced. As the network goes deeper, a strange phenomenon appears in which clearly different categories of images produce similar stimulation of the network, and this reduced gap makes the final classification effect less than ideal. To let our network extract deeper features more efficiently, we add a residual structure to our model; the basic implementation is given in Figure 2. By feeding the output of earlier layers directly into the input of a later layer, the original data and the subsequently downsampled data are used together as the input of the later layer, which introduces a richer dimension. In this way, the network can learn more features of the image.
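A minimal residual block in PyTorch, as an illustration of the skip-connection idea described above (ours; the channel sizes are arbitrary and the paper's exact Bottle Net configuration is not given):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """y = relu(F(x) + x): the input is added back to the transformed output."""
    def __init__(self, ch):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return torch.relu(self.f(x) + x)  # skip connection preserves information

x = torch.randn(1, 64, 32, 32)
print(ResidualBlock(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```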
ROI localization with RPN
The RPN is designed to localize the ROIs that enclose the essential organs given in Table 1. To achieve this goal, we first use a feature pyramid network (FPN) [30] to generate candidate anchors instead of the traditional RPN used in Faster R-CNN. [23] The FPN connects, from top to bottom, the low-resolution, high-semantic features with the high-resolution, low-semantic features, so that features at all scales carry rich semantic information. The settings of the FPN are shown in Table 3.
In the training process, we define the intersection over union (IoU) metric to evaluate the goodness of ROI localization: IoU = area(A ∩ B) / area(A ∪ B), where A is a computerized ROI and B is a manually labeled ROI (ground truth). During training, we set samples with IoU higher than 0.5 as positive samples and those with IoU lower than 0.5 as negative samples.
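For illustration, a plain Python sketch of IoU for axis-aligned boxes in (x1, y1, x2, y2) form (our helper; the paper does not specify its box representation):

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

pred, gt = (10, 10, 50, 50), (20, 20, 60, 60)
print(iou(pred, gt))   # 0.3913...; below 0.5, so this would be a negative sample
```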
Judging and predicting class with CPN
For the different sections, we use the CPN to classify the essential organs. For the thalamus section, the cavum septi pellucidi and thalamus are classified. For the abdominal section, the stomach bubble, spine, and umbilical vein are classified. For the heart section, the left ventricle, left atrium, right ventricle, and right atrium are classified. To improve classification accuracy, we choose the focal loss [31] as the loss function. During training, the internal parameters of the neural network are adjusted by minimizing the loss function over all training samples. The focal loss enables highly accurate dense object detection in the presence of a vast number of background examples, which suits our model. The loss function is defined as FL(p_t) = -(1 - p_t)^γ · log(p_t), where γ is the focusing parameter and γ ≥ 0. Here p_t = p if y = 1, and p_t = 1 - p otherwise, where y represents the true label of a sample and p represents the probability that the neural network predicts for this class.
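A PyTorch sketch of this binary focal loss (ours; γ = 2 is the value suggested in the focal-loss paper, not necessarily the one used here):

```python
import torch

def focal_loss(p, y, gamma=2.0):
    """FL(p_t) = -(1 - p_t)^gamma * log(p_t) for binary labels y in {0, 1}."""
    p_t = torch.where(y == 1, p, 1 - p)          # p if y = 1, else 1 - p
    return (-(1 - p_t) ** gamma * torch.log(p_t.clamp(min=1e-7))).mean()

p = torch.tensor([0.9, 0.2, 0.7])   # predicted probabilities
y = torch.tensor([1, 0, 1])         # ground-truth labels
print(focal_loss(p, y))             # easy examples are down-weighted
```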
Results
In this section, we start with a brief explanation of how the datasets for training and testing were obtained and prepared. A systematic evaluation scheme is then proposed to test the efficacy of our method in FS examinations. The evaluation is carried out in 4 parts. First, we investigate the performance of ROI localization, using mean average precision (mAP) and box plots to evaluate it. Second, we quantitatively analyze the classification performance with common indicators: accuracy (ACC), specificity (Spec), sensitivity (Sen), precision (Pre), F1-score (F1), and area under the receiver operating characteristic curve (AUC). Third, we demonstrate the accuracy of our scheme compared with experienced sonographers. Fourth, we perform running time and sensitivity analyses of our method.

Table 2. The settings of the convolutional layers.
Layer | Kernel size | Channel depth | Stride
C1 | 3 | 128 | 2
C2 | 3 | 256 | 2
C3 | 3 | 512 | 2
C4 | 3 | 1024 | 2
C5 | 3 | 2048 | 2

Table 3. The settings of the FPN.
Data preparation
All the FS images used for training and testing our model were acquired at the West China Second Hospital Sichuan University from April 2018 to January 2019. The FS images were recorded with a conventional hand-held 2-D FS probe on pregnant women in the supine position, following the standard obstetric examination procedure. The fetal gestational ages of all subjects ranged from 20 to 34 weeks. All FS images were acquired with GE Voluson E8 and Philips EPIQ 7 scanners. In total, 1325 FS images of the head section, 1321 of the abdominal section, and 1455 of the heart section were involved in training and testing our model. The training, validation, and test sets of each section were divided in a ratio of 3:1:1. The ROI labeling of essential structures in each section was performed by 2 senior radiologists with more than 10 years of experience in FS examination, who marked the smallest circumscribed rectangle of each positive sample. The negative ROI samples were randomly collected from the background of the images.
Evaluation metrics
To test the performance of ROI localization, we first define the IoU between prediction and ground truth, IoU = area(A ∩ B) / area(A ∪ B), where A is the computerized ROI and B is the ground-truth (manually labeled) ROI, and use box plots to evaluate ROI localization intuitively. Second, we use average precision (AP) to quantitatively evaluate the segmentation results for each essential anatomical structure and mAP to illustrate the overall quality of ROI localization.
To test the classification performance, we use several popular evaluation metrics. Suppose TP represents the number of true positives of a certain class, FP the number of false positives, FN the number of false negatives, and TN the number of true negatives; then ACC = (TP + TN) / (TP + TN + FP + FN), Spec = TN / (TN + FP), Sen = TP / (TP + FN), Pre = TP / (TP + FP), and F1 = 2 · Pre · Sen / (Pre + Sen). The AUC is defined as the area under the receiver operating characteristic (ROC) curve, which is equivalent to the probability that a randomly chosen positive example is ranked higher than a randomly chosen negative example. [32] The confusion matrix is also a common indicator for visualizing diagnostic performance in supervised machine learning; we use it to illustrate the performance of our method on each anatomical structure. [33] To show the effectiveness of the advanced techniques we add to the framework, 2 ablated structures are also tested, where NRM means the removal of the relation module and NSPP means the removal of the SPP layer in the feature extraction network. Comparing the differences in classification and segmentation results makes their impact on overall network performance clear.
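These definitions translate directly into code; a small Python sketch (ours) computes the metrics from confusion-matrix counts:

```python
def metrics(tp, fp, fn, tn):
    """Classification metrics from confusion-matrix counts."""
    acc  = (tp + tn) / (tp + tn + fp + fn)
    spec = tn / (tn + fp)
    sen  = tp / (tp + fn)          # also known as recall
    pre  = tp / (tp + fp)
    f1   = 2 * pre * sen / (pre + sen)
    return dict(ACC=acc, Spec=spec, Sen=sen, Pre=pre, F1=f1)

print(metrics(tp=90, fp=5, fn=10, tn=95))
```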
To analyze the time complexity of our method, we count floating point operations (FLOPs), [34] a common way of describing the complexity of a CNN. Specifically, in a convolutional layer, FLOPs = 2 · C_in · K_h · K_w · C_out · M_h · M_w, where C_in is the input channel size, C_out the output channel size, K_h × K_w the spatial kernel size (so each kernel has C_in · K_h · K_w weights), and M_h × M_w the output feature map size (C_out · M_h · M_w output elements). In a fully connected layer with I inputs and O outputs, FLOPs = 2 · I · O. In a global average pooling layer, FLOPs = C · I_h · I_w, where C is the number of channels, I_h denotes the input feature map height, and I_w denotes the input feature map width.
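A small calculator following these formulas (ours; it assumes the convention of 2 FLOPs per multiply-accumulate, which the excerpt does not state explicitly):

```python
def conv_flops(c_in, k_h, k_w, c_out, m_h, m_w):
    # one multiply + one add per kernel weight, per output position
    return 2 * c_in * k_h * k_w * c_out * m_h * m_w

def fc_flops(n_in, n_out):
    return 2 * n_in * n_out

def gap_flops(channels, i_h, i_w):
    # roughly one addition per input element (the division is negligible)
    return channels * i_h * i_w

# First conv stage of Table 2 on a 256x256 single-channel input (stride 2):
print(conv_flops(1, 3, 3, 128, 128, 128))  # 37,748,736 FLOPs
```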
Results of ROI localization
To demonstrate the efficacy of our method in localizing the essential anatomical structures in FS images, we carry out the experimental evaluation in 2 parts. First, we use box plots to evaluate ROI localization intuitively. Second, we use AP and mAP to quantify the quality of ROI localization.
For the head standard plane, a state-of-the-art quality assessment method has already been proposed [16] (denoted Lin), so we compare its results with our method. Also, to show the effectiveness of the advanced object segmentation techniques we add to the network, our method is compared with other popular object detection frameworks, including SSD, [35] YOLO, [36,37] and Faster R-CNN. [23] The effectiveness of the relation module we add to the network is also tested, with NRM denoting the framework without the relation module.
As shown in Figure 4, our method achieves a high IoU in all 3 sections. Specifically, for the head section, the median IoU over all anatomical structures is above 0.955; for the heart and abdominal sections, the medians are above 0.945 and 0.938, respectively. The minimum IoU for all 3 sections is above 0.93. As a comparison, the state-of-the-art framework for the quality assessment of fetal abdominal images proposed by Wu et al [18] achieved a median of below 0.9. This proves the effectiveness of our method in localizing the essential anatomical structures. As shown in Table 4, our method has the highest mAP compared with the method proposed by Lin et al and other popular object detection frameworks. We also improve the segmentation accuracy significantly for TV and CSP and overcome the limitation of Lin's method, because our method detects flat and smaller anatomical structures more precisely. It is worth mentioning that after adding the relation module to our network, the segmentation accuracy improves significantly for all anatomical structures, which proves the effectiveness of this module. As shown in Table 5, since this is our first attempt to evaluate image quality in the heart section, we compare our method only with state-of-the-art object segmentation frameworks; our approach has the highest average precision for all anatomical structures. Also, as shown in Table 6, we achieve quite promising segmentation accuracy. This demonstrates that our framework is general and can be well applied to the quality assessment of other standard planes.
Results of diagnosis accuracy
To illustrate the performance of our model in classifying the essential anatomical structures, we first use the area under the ROC curve and the confusion matrix to characterize the classifier visually; then we measure it quantitatively with several authoritative indicators: ACC, Spec, Sen, Pre, and F1. Also, to show the effectiveness of our proposed network in diagnosis, we compare our method with other popular classification networks, including AlexNet, [38] VGG16, VGG19, [39] and ResNet50. [40] A comparison with Lin's method is also carried out. As shown in Figure 5, the classifier achieves quite promising performance in all 3 sections, with the true positive rate reaching 100% while the false positive rate is less than 10%. The AUC reaches 0.96, 0.95, and 0.98 for the head, abdominal, and heart sections, respectively.
From Figure 6, it is clear that our method achieves quite superior performance for every anatomical structure of the different sections, with the true positive rate reaching nearly 100%. From Table 7, we can observe that the classification results of our method are superior to other state-of-the-art methods. Specifically, we achieve the best results with a precision of 94.63%, a specificity of 96.39%, and an AUC of 98.26%, which are better than Lin's method. The relatively inferior results in sensitivity, accuracy, and F1-score could be further improved by adding prior clinical knowledge to our framework. [16] Tables 8 and 9 show the classification results in the abdominal and heart sections; our method achieves quite promising results in most indicators compared with existing methods. This demonstrates the effectiveness of our proposed method in classifying the anatomical structures of all sections.
Running time analysis and sensitivity analysis
We test the running time of detecting a single FS image for different single-task and multitask networks on a workstation equipped with a 3.60 GHz Intel Xeon E5-1620 CPU and a GP106-100 GPU. The results are given in Table 10. Detecting a single frame costs only 0.871 s, which is fast enough to meet clinical needs. Also, although our network has more parameters and FLOPs than Faster R-CNN + VGG16, there is not much difference in segmentation time; this is because our network shares many low-level features, which allows more efficient segmentation using only a few additional parameters.
In a CNN model, we usually need to try a series of parameters to obtain the best performance. Many parameters can affect the results of a CNN model, such as the learning rate, number of epochs, and regularization loss, but the learning rate and weight decay generally have the greatest impact. Therefore, we vary the learning rate and weight decay to illustrate the sensitivity of our method. As shown in Table 11, altering the learning rate changes the mAP by no more than 1.7%, and changing the weight decay changes the mAP by no more than 2.2%, demonstrating the robustness of our method. Figures 7-9 depict the comparison of our results with the images manually labeled by experts in the heart, abdominal, and head sections, respectively. Our method displays the classification and segmentation results simultaneously to assist sonographers' observation. More comparisons between our results and the ground truth are given in Figure 10. It can be seen that our method is perfectly aligned with professional sonographers.
Discussion
In this paper, an autonomous image quality assessment approach for FS images was investigated. The experimental results show that our proposed scheme achieves highly precise ROI localization and a considerable degree of accuracy for all the essential anatomical structures in the 3 standard planes. The conformance test shows that our results are highly consistent with those of professional sonographers, and the running time tests show that the per-frame segmentation speed is much higher than that of sonographers, which means this scheme can effectively replace the work of sonographers. In our proposed network, to further improve segmentation and classification accuracy, we also modify recently published advanced object segmentation technologies and adapt them to our model. The experiments show these modules are highly useful, and the overall performance is better than state-of-the-art methods such as the FS image assessment framework proposed by Lin et al. [16] After the feature extraction network, we also split the network into the Region Proposal Network and the Class Prediction Network. Accordingly, the features in the segmentation network avoid interfering with the features in the classification network, so the segmentation accuracy is further increased. The segmentation speed is also significantly improved, as classification and localization are performed simultaneously.

Table 5. Comparisons of detection results between our method and other methods in the heart section.
Although our method achieves quite promising results, there are still some limitations. First, for the training sets, we regard the FS images manually labeled by 2 professional sonographers as the ground truth, but manual labeling carries some accidental deviation even though both have more than 10 years of experience. In future studies, we will invite more professional clinical experts to label the FS images and collect more representative datasets. Second, some segmentation and classification errors remain in our results. This is because our evaluation criteria are rigorous, and the misdetection of a single anatomical structure can lead to a negative score for the image. Third, all the FS images were collected from GE Voluson E8 and Philips EPIQ 7 scanners; however, different types of ultrasonic instruments produce different ultrasound images, which may prevent our method from being applied well to FS images produced by other machines. Our proposed method further boosts the accuracy of the assessment of the two-dimensional FS standard plane. Although 3-dimensional and 4-dimensional ultrasound testing have become popular recently, they are mainly utilized to meet the desire of pregnant women and their families to view baby pictures rather than serving a diagnostic purpose; 2-dimensional ultrasound images are still the most authoritative basis for judging fetal development. [2] As illustrated before, there are still many challenges for the automatic assessment of 2D ultrasound images, such as shadowing effects, similar anatomical structures, and different fetal positions. To overcome these challenges and further improve the accuracy and robustness of segmentation and classification, it may be useful to add prior clinical knowledge [16] and more advanced attention modules to the network. In the future, we will also investigate automatic selection technology for finding the standard scanning plane, which will find a standard plane containing all the essential anatomical structures without sonographers' intervention.

Table 6. Comparisons of detection results between our method and other methods in the abdominal section.

Table 9. Comparisons of classification results between our method and other methods in the heart section. ACC = accuracy, AUC = area under the receiver operating characteristic curve, F1 = F1 score, Prec = precision, Sen = sensitivity, Spec = specificity.
Table 10. The detection speed and parameters of different single-task and multitask methods.
Figure 7. Demonstration that our results match the ground-truth annotations in the heart section. Figure 8. The same demonstration for the abdominal section. Figure 9. The same demonstration for the head section. In each figure, the classification results in the left white box are the ground truth labeled by professional radiologists, and those in the right white box are the detection results of our method; "1" means the anatomical structure meets the quality requirement, and "0" means it does not. | 2021-01-29T14:08:21.715Z | 2021-01-29T00:00:00.000 | {
"year": 2021,
"sha1": "64ba024f84b5c29be55f87522b7694ca5779465c",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1097/md.0000000000024427",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "64ba024f84b5c29be55f87522b7694ca5779465c",
"s2fieldsofstudy": [
"Computer Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247176879 | pes2o/s2orc | v3-fos-license | Management of Esophageal Cancer-Associated Respiratory–Digestive Tract Fistulas
Simple Summary: As rare but life-threatening complications, respiratory–digestive tract fistulas (RDF) have a major impact on esophageal cancer patients, and interdisciplinary treatment concepts are still evolving. This retrospective study assesses general strategies for RDF, especially in terms of technical and anatomical approaches. In 51 RDF patients, we show that bilateral fistula repair and combined surgical and non-surgical intervention correlated significantly with good short- and long-term outcomes. Abstract: Respiratory–digestive tract fistulas are fatal complications that occur during esophageal cancer treatment. Interdisciplinary treatment strategies are still evolving, especially regarding anatomical treatment stratification; thus, this study aims to evaluate general therapeutic strategies for this rare condition. Medical records were reviewed for esophageal cancer-associated respiratory–digestive tract fistula patients treated between January 2008 and September 2021. Fistulas were classified as surgery-associated or tumor-associated. Treatment strategies, clinical success, and survival were analyzed. A total of 51 patients were identified: 28 had tumor-associated fistulas and 23 had surgery-associated fistulas. Risk factors for fistula development such as radiation (OR = 0.290, p = 0.64) or stent implantation (OR = 1.917, p = 0.84) did not correlate with lack of symptom control in RDF patients. In contrast, advanced lymph node metastasis, another risk factor, was associated with persistent symptoms after treatment (OR = 0.611, p = 0.01). Clinical success correlated significantly with bilateral fistula repair in surgery-associated fistulas (p = 0.01), while tumor-associated fistulas benefited the most from non-surgical (p = 0.04) or combined surgical and non-surgical intervention (p = 0.04) and from bilateral fistula repair (p = 0.02) in terms of overall survival. The therapeutic strategy should aim for bilateral fistula closure. A multidisciplinary, stepwise approach offers the best chance of restoration or symptom control with optimized overall survival in selected patients.
A wide range of therapeutic approaches has been described for RDF, from fibrin glue application through stenting to esophageal diversion. However, general treatment strategies have not been described so far. In particular, little is known about anatomical treatment stratification, that is, whether only one or both fistula orifices should be sealed. Therefore, this study is the first to evaluate the general multidisciplinary treatment principles of respiratory–digestive tract fistulas in esophageal cancer patients.
Materials and Methods
All patients treated for esophageal cancer-associated respiratory-digestive fistula at the University Medical Center Hamburg-Eppendorf between January 2008 and September 2021 were identified from surgical, endoscopic, radiological, and hospital databases. Patients with benign acquired and congenital fistulas were excluded. The medical records at our institution were reviewed entirely for each included patient.
According to local law (§12 HmbKHG, Hamburg city law), no informed patient consent or statement by the federal ethics committee is required, since the study is non-interventional and retrospective.
Studied variables included gender, age, date of birth, date of death, and date of last follow-up; coexisting medical conditions according to the Charlson Comorbidity Index (CCI) calculated at fistula diagnosis; tumor stage according to the American Joint Committee on Cancer/UICC 6th-8th edition, depending on the year of diagnosis; tumor localization; history of definitive chemoradiation or multimodal therapy; and laboratory results.
Fistulas were subdivided into T-RDF and S-RDF. T-RDF also included patients who developed fistulas after esophagectomy due to local recurrence without evidence of leakage or conduit necrosis. Tracheobronchial localization according to Wang et al. [21] and fistula diameter were recorded. Patients' pretherapeutic condition and therapeutic approaches, along with the time intervals between diagnostic and therapeutic steps, were analyzed. The curative status of the underlying disease was defined as a locally resectable tumor and the absence of distant metastases. Non-surgical techniques were defined as endoscopic or bronchoscopic interventions. Furthermore, the anatomical treatment approach was evaluated and classified into treatments sealing only the gastrointestinal orifice, treatments sealing only the respiratory orifice, bilateral approaches, and conservative treatment with best supportive care. The treatment strategy in cases of local recurrence included chemoradiation, surgical reintervention, and best supportive care, depending on previous therapy, patient condition, and morbidity caused by the RDF.
Outcomes were analyzed as 'restoration or symptom control', 'no symptom control', and '30-d mortality' after treatment. Symptom control was defined as the capability of oral food intake and the absence of respiratory infection after the resumption of an oral diet. For the subgroup analysis of T-RDF, only symptom control was evaluated, since the underlying disease of advanced EC limited survival. The therapeutic outcome was investigated by the occurrence and severity of complications according to the Clavien-Dindo classification and by 30-day mortality. Survival was calculated from the date of fistula diagnosis. Anastomotic leakage was diagnosed endoscopically.
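For readers who want to reproduce this kind of classification in an analysis script, the sketch below encodes the study variables defined above as a small data model. It is a minimal illustration in Python: all class and field names are ours, not the authors' database schema.

```python
from dataclasses import dataclass
from enum import Enum

class FistulaType(Enum):
    T_RDF = "tumor-associated"
    S_RDF = "surgery-associated"

class Technique(Enum):
    NON_SURGICAL = "endoscopic/bronchoscopic"
    SURGICAL = "surgical"
    COMBINED = "combined surgical and non-surgical"
    CONSERVATIVE = "best supportive care"

class AnatomicalApproach(Enum):
    GI_ORIFICE_ONLY = "gastrointestinal orifice sealed"
    RESPIRATORY_ORIFICE_ONLY = "respiratory orifice sealed"
    BILATERAL = "both orifices sealed"
    NONE = "conservative"

class Outcome(Enum):
    SYMPTOM_CONTROL = "restoration or symptom control"
    NO_SYMPTOM_CONTROL = "no symptom control"
    DEATH_30D = "30-d mortality"

@dataclass
class RdfCase:
    fistula_type: FistulaType
    technique: Technique
    approach: AnatomicalApproach
    outcome: Outcome
    survival_days: float  # measured from the date of fistula diagnosis
```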
Data management and statistical analysis were performed using IBM SPSS Statistics for Macintosh, Version 25.0 (Armonk, NY, USA: IBM Corp.). Descriptive statistics are reported as absolute numbers and percentages or as mean and standard deviation (SD), as indicated. For univariate analyses, ANOVA was applied for comparisons of three groups. For two groups, the Student's t-test was applied for parametric continuous variables and the Mann-Whitney U test for non-parametric continuous variables. Categorical variables were tested using the χ² test or the Fisher exact test, as appropriate. Survival rates were estimated using the log-rank test and described by Kaplan-Meier curves. A two-sided p-value < 0.05 was considered significant. The study conformed to the standards of the Declaration of Helsinki.
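The same testing pipeline can be expressed outside SPSS. The following is a minimal sketch using SciPy and the lifelines package; all input arrays are hypothetical placeholders rather than study data.

```python
import numpy as np
from scipy import stats
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical continuous variable (e.g., age at diagnosis) in three groups
group_a = np.array([61.2, 58.4, 70.1, 66.3, 59.8])
group_b = np.array([64.5, 69.0, 71.2, 60.7, 68.8])
group_c = np.array([55.0, 72.3, 63.1, 67.4, 70.9])

# Three-group comparison (one-way ANOVA); two-group parametric/non-parametric
f_stat, p_anova = stats.f_oneway(group_a, group_b, group_c)
t_stat, p_t = stats.ttest_ind(group_a, group_b)        # Student's t-test
u_stat, p_u = stats.mannwhitneyu(group_a, group_b)     # Mann-Whitney U

# Categorical variables: chi-square or Fisher exact test on a 2x2 table
table = np.array([[8, 3], [5, 9]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
odds_ratio, p_fisher = stats.fisher_exact(table)

# Survival from the date of fistula diagnosis: Kaplan-Meier and log-rank test
months_a, event_a = [3, 12, 27, 40, 55], [1, 1, 0, 1, 0]   # 1 = death observed
months_b, event_b = [1, 2, 5, 9, 14], [1, 1, 1, 1, 0]
kmf = KaplanMeierFitter()
kmf.fit(months_a, event_observed=event_a, label="intervention")
result = logrank_test(months_a, months_b,
                      event_observed_A=event_a, event_observed_B=event_b)
print(f"log-rank p = {result.p_value:.3f}")
```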
Results
A total of 51 patients with esophageal cancer-associated RDF were identified: 28 with T-RDF and 23 with S-RDF (Table 1). S-RDF patients had a higher share of lower tumors and of esophagobronchial (EBF) rather than esophagotracheal fistulas (ETF). Four T-RDF patients (14.3%) had received curative cancer treatment at fistula diagnosis. Moreover, the S-RDF cohort was in significantly poorer condition at diagnosis and had a shorter interval from fistula diagnosis to first intervention. Thirty-day mortality was 17.9% in the T-RDF and 30.4% in the S-RDF cohort. The clinicopathological parameters of the T-RDF cohort are shown in Table 2. Neither patients' demographics nor tumor parameters correlated with the clinical outcome. Fistula characteristics and patients' condition at RDF diagnosis (p = 0.443) had no impact on the clinical course after fistula development. In contrast, patients under curative treatment at fistula diagnosis had a higher chance of cure or symptom control (p = 0.016).
For S-RDF, demographics and tumor-associated parameters such as the histological subtype (p = 0.30), infiltration depth (p = 0.51), or lymph node metastasis (p = 0.21) also showed no correlation with the outcome (Table 3). All S-RDF were associated with anastomotic leakage or conduit necrosis presenting with either acute (39.9%) or delayed (>30 d, 60.9%) onset, which did not correlate with the clinical outcome (p = 0.886). However, neoadjuvant radiation as a risk factor for RDF development (p = 0.03), advanced UICC stage (p = 0.03), and anatomical aspects such as cervical anastomosis (p = 0.046) or a higher fistula location (p = 0.02) correlated significantly with an unfavorable clinical course. Advanced tumor and nodal stage did not reveal a correlation with poor outcomes; however, all pN2-staged patients died in the postoperative course, while 75.0% of cured patients had no lymph node metastasis (p = 0.20). A total of 63.6% of the patients who died within 30 days presented with severe illness necessitating intensive care treatment at diagnosis of RDF (p = 0.02). Numbers are presented as mean ± standard deviation or as absolute numbers and percentages in parentheses. p-values in bold indicate statistical significance between cohorts. AC-adenocarcinoma; acute anastomotic leakage < 30 d; EBF-esophagobronchial fistula; ETF-esophagotracheal fistula; d-days; ICU-intensive care unit; l-liter; mg-milligram; mm-millimeter; n-patient number; S-RDF-surgery-associated respiratory-digestive tract fistula; SCC-squamous cell cancer; Tx-therapy; UICC-Union internationale contre le cancer; y-years.
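As a side note on the reported statistics, odds ratios such as those quoted for radiation or stent implantation in the abstract derive from simple 2×2 tables. The sketch below shows one common way to compute an OR with a Wald-type confidence interval; the counts are invented and the CI method is our assumption, since the paper does not state how its estimates were obtained.

```python
import math
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: rows = risk factor present/absent,
# columns = symptom control yes/no (counts are made up)
a, b = 6, 10    # factor present: controlled / not controlled
c, d = 12, 4    # factor absent:  controlled / not controlled

odds_ratio, p_value = fisher_exact([[a, b], [c, d]])

# Wald-type 95% CI on the log odds ratio (assumed method)
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}), p = {p_value:.2f}")
```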
The therapeutic strategy was significantly associated with the clinical outcome for both T-RDF and S-RDF.
The lowest therapeutic success rates were found in T-RDF patients treated conservatively (0.0%) or solely surgically (16.1%, Table 4). The initial treatment did not correlate with the final outcome (technique p = 0.51, anatomical treatment p = 0.22). In total, 82.1% of T-RDF patients were evaluated for reintervention due to insufficient symptom control, which resulted in reintervention in 43.5%. However, after the primary intervention, four patients experienced initial symptom control, one of which was of long duration. Among patients who experienced overall symptom control, surgical reintervention (80.0%, p < 0.01) and bilateral fistula repair (80.0%, p = 0.03) were the most frequent strategies, though both were accompanied by a high peri-interventional risk. A change in therapeutic strategy in cases of unfavorable results led to restoration or symptom control in 50.0% and showed a trend towards a favorable outcome (p = 0.05). For T-RDF, no non-surgical technique correlated with restoration or symptom control. Suture of the esophageal (p < 0.01) or respiratory fistula orifice (p = 0.04) and the interposition of a latissimus dorsi flap (p < 0.01) were associated with a favorable outcome. Moreover, esophageal diversion was also associated with a good outcome (p = 0.04).
Sealing both the esophageal and the respiratory fistula orifice was by far the most frequently applied anatomical approach among cured or symptom-controlled S-RDF patients, both as an initial (87.5%, p = 0.046) and as a step-up approach (100.0%, p = 0.01, Table 5). On the other hand, 50.0% of bilaterally treated patients died in the postoperative course (30-d mortality). S-RDF patients benefited from bilateral fistula repair both at the first treatment attempt (p = 0.046) and overall (p = 0.01, Table 5). The choice of intervention technique had no impact on the final outcome in this cohort (p = 0.57, p = 0.09, and p = 0.11). S-RDF patients benefited, whenever possible, from esophageal segment resection and re-anastomosis (p = 0.04). Esophageal diversion was applied as an ultima ratio in eight patients, five (62.5%) of whom nevertheless died; one of the surviving patients could later be reconstructed successfully. Moreover, soft tissue flaps were also associated with restoration or symptom control (p = 0.01). Numbers are presented as mean ± standard deviation or as absolute numbers and percentages in parentheses.
p-values in bold indicate statistical significance between cohorts. d-days; S-RDF-surgery-associated respiratory-digestive tract fistula; n-patient number. a Percentage of patients with therapeutic re-evaluation without mortality or therapeutic success after the first attempt. b Only re-evaluated patients considered.
Complications and their severity are presented in Tables S1 and S2. Patients with no options for therapeutic intervention were at the highest risk of mortality (60.0%). However, surgery was accompanied by an elevated risk of peri-interventional mortality compared to non-surgical intervention (non-surgical 3.3% vs. surgical 26.7%). Of note, overall severe complications were more frequent following non-surgical intervention (p = 0.02). After the final treatment, no significant difference between anatomical approaches (p = 0.11) or techniques (p = 0.16) was found.
For overall survival in T-RDF patients, only bilateral fistula repair reached statistical significance compared to conservative treatment (p = 0.02, Figure 1). Combined surgical and non-surgical techniques (p = 0.04) as well as endoscopic intervention (p = 0.04) were significantly associated with long-term survival, both compared to best supportive care.
In the S-RDF cohort, all techniques proved to have significantly longer survival compared to best supportive care, and no differences could be found between the interventional techniques (Figure 2). Moreover, bilateral anatomical approaches (p < 0.01) and the sole sealing of the gastrointestinal fistula orifice (p = 0.03) induced significantly better survival rates compared to best supportive care.

[Figure 2 caption: (a) All interventional techniques were associated with significantly longer overall survival compared to best supportive care (non-surgical techniques p = 0.03, surgical techniques p = 0.005, combined endoscopic and surgical techniques p = 0.005). (b) Bilateral (p < 0.001) and GI-tract-only approaches (p = 0.03) were significantly associated with improved survival compared to best supportive care. GI-gastrointestinal; S-RDF-surgery-associated respiratory-digestive tract fistula.]
Discussion
Respiratory-digestive tract fistulas are associated with high morbidity and mortality. Treatment of this fragile condition often fails to reach persistent symptom control, and approaches for strategic interdisciplinary therapy are currently still lacking. This study demonstrates that as far as the patient's condition allows, a bilateral fistula repair is associated with optimized outcomes for most RDF patients.
Several studies have already investigated procedural safety, success rates, or risk factors of specific treatment options in case series [3-7,16,21,23,26,27,29-31]. However, especially for T-RDF, comparative data on multidisciplinary treatments and their short- and long-term outcomes are missing. Moreover, no previous study has fully addressed the anatomical considerations of RDF therapy, i.e., whether one or both fistula orifices should be sealed. This is underlined by several studies which concluded that the appropriate management of RDF is still uncertain and variable [18,32,34].
With best supportive care alone, the mean survival of T-RDF patients was 25.7 ± 13.5 days, which is in line with previous reports [5,23,26]. Non-surgical and surgical therapy can prolong the mean survival to 11.7 ± 6.6 months (p = 0.04) and 19.0 ± 15.6 months, respectively. Previous publications reported 2.3-7.9 months after interventional treatment [15,21,26,28] and up to 10 months median survival after surgery [15,30,31]. Moreover, combined surgical and non-surgical therapy extended the mean survival to 27.8 ± 13.4 months (p = 0.04), which has not been demonstrated so far. Similarly, bilateral fistula repair resulted in significantly prolonged survival (24.4 ± 10.3 months, p = 0.03). In patients with S-RDF, the long-term outcome was significantly improved following bilateral fistula repair (p < 0.001) or sole sealing of the gastrointestinal fistula orifice (p = 0.03). All interventional techniques showed significantly longer survival compared to best supportive care. A previous study aligns with this finding [15], while a meta-analysis of 89 S-RDF patients demonstrated longer survival after surgery compared to bronchial stenting [32].
However, symptom control is the best prognostic factor for long-term survival [29]. The key principle of the RDF treatment strategies must be airway protection to reach patient stabilization and reduce infectious complications [33,34].
Only a small number of interventions were successful at the first attempt, and in 37.5% of these cases a reintervention was necessary for recurrent symptoms. Conversion from interventional to surgical therapy, or vice versa, had a beneficial impact on the clinical success rate in the T-RDF cohort, suggesting that a stepwise approach should be considered the pivotal therapeutic concept for these patients. In most converted cases, patients who were not primarily surgical candidates profited from a reduction in tracheobronchial contamination by esophageal stenting and were later amenable to surgical fistula repair. For S-RDF, therapy conversion resulted in 50.0% therapeutic success.
The literature emphasizes the value of esophageal stenting for RDF therapy: initial symptom control rates of up to 90% have been published [3,5,6,18,23,26,29], although fistula recurrence is frequent six weeks after stent placement [26,33]. Our data align with these results: 68% of stents were not associated with symptom control, but esophageal stent placement improved mean survival to 11.9 ± 6.5 months in T-RDF patients, corresponding to approximately 1.5-5-fold the survival reported in the literature [15,26,28,29]. On the other hand, in 50% of symptom-controlled patients, esophageal stents were part of a multidisciplinary approach, which extended mean survival up to 28.2 ± 13.3 months.
Soft tissue flaps have been discussed as an effective treatment for RDF or as fistula prevention in high-risk esophagectomies [19,35]. Sealing both fistula orifices at once was the only procedure significantly associated with the clinical success rate (71.4%). Nevertheless, the failure rate of soft tissue flaps is high: the 30-d mortality rate is 50.0% in S-RDF and 20.0% in T-RDF, and 21.1% and 40.0%, respectively, lack symptom control. Thus, no single technique can be promoted for RDF therapy, but soft tissue flaps can serve as an effective procedure within a multidisciplinary approach.
Peri-interventional complications were frequent in this frail cohort: the share of complication-free and primarily successful interventions was highest among surgical (33.3%) and bilateral procedures (25.7%). While the severity of complications did not differ widely between surgical and non-surgical techniques, high rates of overall peri-interventional mortality accompanied both surgical (31.0%) and bilateral fistula repair (31.4%). Nonetheless, these numbers must be weighed against a 60% 30-d mortality for primary best supportive care.
All S-RDF cases were associated with acute or chronic anastomotic leakage, which might predispose to inflammatory involvement of the mediastinum and the tracheobronchial system and may account for the poorer general condition of S-RDF compared to T-RDF patients.
Consequently, therapy indication and initiation were faster (0.6 ± 1.3 vs. 27.7 ± 51.5 days) in S-RDF patients, who were nonetheless at the highest risk of 30-d mortality (87.5%, p = 0.02). The inverse correlation between rapid therapeutic onset and outcome in S-RDF (p = 0.02) more likely reflects the critical condition forcing immediate therapy than an effect of the quick intervention itself. This underlines the value of early RDF diagnosis and therapy initiation as the most effective steps for patient survival. Once RDF leads to decompensation, patients who are not amenable to radical treatment strategies are less likely to succeed. Pulmonary function often presents a limiting factor, precluding single-lung ventilation or apnea phases for bronchoscopic or surgical sealing of the respiratory orifice.
For S-RDF patients, neoadjuvant radiation as a risk factor for fistula development [16,17,20] significantly correlated with poor outcomes (p = 0.03). Of the several factors associated with compromised perfusion of the tracheobronchial system, such as extended lymphadenectomy or dissection, advanced lymph node metastasis, or large tumors [17,20], only a sizeable total tumor burden correlated significantly with poor clinical outcome (UICC stage, p = 0.03). This underlines the negative impact of surgical devascularization of the tracheobronchial system on healing capacity. While fistulas originating from cervical anastomoses and those leading into the trachea were at higher risk of lacking symptom control, their 30-d mortality was lower than that of thoracic anastomoses and EBF. Limited therapeutic and endoscopic options might explain this in cases with cervical anastomosis [21,26], along with a lower transition of digestive fluids in descending compared to horizontal fistulas [33].
The presented study has several limitations. First, the retrospective character and the cohort size limit this research and hamper the correct assessment of clinical success. Clinical follow-up was not fully accessible for some patients, due either to symptom control, avoidance of further medical contact, or a change of practitioner. Moreover, one patient without symptom control in the clinical follow-up survived 60.6 months, so a secondary restoration can be assumed but not proven.
Conclusions
Overall, this study is the first to analyze general treatment stratification and multidisciplinary approaches for RDF therapy. Sufficient fistula repair was achieved most effectively by bilateral surgical procedures in both cohorts. However, patient condition and palliative disease status at diagnosis limit therapeutic success. Radical approaches should be reserved for selected patients but can be offered to a broader range of patients within a stepwise, multidisciplinary approach. With bilateral fistula repair, an optimized long-term outcome can be achieved for both tumor- and surgery-associated fistulas.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/cancers14051220/s1: Table S1: Adverse events by intervention technique and anatomical approach; Table S2: Severity of adverse events by treatment stratification.

Institutional Review Board Statement: Ethical review and approval were waived for this study. According to local law (§12 HmbKHG, Hamburg city law), no informed patient consent or statement by the federal ethics committee is required, since the study is non-interventional and retrospective.
Informed Consent Statement: Patient consent was waived, since non-interventional and retrospective studies do not require informed patient consent according to local law (§12 HmbKHG, Hamburg city law).
Data Availability Statement: The data presented in this study are available on request from the corresponding author.
Conflicts of Interest: The authors declare no conflict of interest.
"year": 2022,
"sha1": "1c4bfe915aaaf4f046054f74259365d916f8871d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6694/14/5/1220/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "53f69e03be72fab1c9074df9ca83b7329604014b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Sonographic evaluation of the shoulder in asymptomatic elderly subjects with diabetes
Background: The prevalence of rotator cuff tears increases with age, and several studies have shown that diabetes is associated with symptomatic shoulder pathologies. The aim of our research was to evaluate the prevalence of shoulder lesions in a population of asymptomatic elderly subjects, both normal and with non-insulin-dependent diabetes mellitus.

Methods: The study was performed on 48 subjects with diabetes and 32 controls (mean age: 71.5 ± 4.8 and 70.7 ± 4.5 years, respectively), who did not complain of shoulder pain or dysfunction. An ultrasound examination was performed on both shoulders according to a standard protocol, utilizing multiplanar scans.

Results: Tendon thickness was greater in diabetics than in controls (Supraspinatus Tendon: 6.2 ± 0.09 mm vs 5.2 ± 0.7 mm, p < 0.001; Biceps Tendon: 4 ± 0.8 mm vs 3.2 ± 0.4 mm, p < 0.001). Sonographic appearances of degenerative features in the rotator cuff and biceps were more frequently observed in diabetics (Supraspinatus Tendon: 42.7% vs 20.3%, p < 0.003; Biceps Tendon: 27% vs 7.8%, p < 0.002). Subjects with diabetes exhibited more tears in the Supraspinatus Tendon (Minor tears: 15 (15.8%) vs 2 (3.1%), p < 0.03; Major tears: 15 (15.8%) vs 5 (7.8%), p = ns), but not in the long head of the Biceps. More effusions in the subacromial bursa were observed in diabetics (23.9% vs 10.9%, p < 0.03), as was tenosynovitis of the biceps tendon (33.3% vs 10.9%, p < 0.001). In both groups, pathological findings were prevalent on the dominant side, but no difference related to duration of diabetes was found.

Conclusions: Our results suggest that age-related rotator cuff tendon degenerative changes are more common in diabetics. Ultrasound is a useful tool for discovering, in pre-symptomatic stages, the subjects who may develop symptomatic shoulder pathologies.
Magnetic Resonance Imaging is a more sensitive methodology than ultrasound (US) in detecting pathological changes in asymptomatic shoulders. However, its use for epidemiological purposes is limited by higher costs and lower availability [10,11].
However, to our knowledge, there are no studies that have evaluated asymptomatic elderly subjects with diabetes. Therefore, it could be of interest to investigate whether diabetes has an additive effect on age-related tendon degeneration and whether US evaluation of the shoulder in the pre-symptomatic stage could be a useful tool for discovering subjects at risk.
In addition, it must be considered that the majority of sonographic studies have focused on supraspinatus tendon tears [1,2,5-9], whereas less attention has been paid to the other tendons of the rotator cuff and to further anatomical structures of the shoulder [3,25].
Therefore, the aim of this study was twofold: first, to evaluate the prevalence of sonographic shoulder lesions in asymptomatic elderly subjects, both normal and diabetic; second, to describe, besides supraspinatus tears, other abnormalities of anatomical structures of clinical interest which could occur in these subjects.
Subjects
All the subjects enrolled in the study were recruited from the Outpatients Service of the Medicine and Science of Aging Department of Chieti-Pescara University.
Inclusion criteria were the following: 1) living independently in the community; 2) age > 65 years; 3) right-handedness; 4) absence of pain, or only acceptable discomfort, in the shoulder joint, spontaneous or during usual activities of daily living; 5) no subjective dysfunction; 6) no history of trauma or surgery of the shoulder joint.
The local Ethics Committee approved the study design and informed written consent was obtained from all the patients.
The study group included 48 subjects with non-insulin-dependent diabetes mellitus (NIDDM). The diagnosis of NIDDM was based on the American Diabetes Association criteria [26].
The control group consisted of 32 subjects, matched for age and sex, but without NIDDM, and selected with the same inclusion/exclusion criteria.
The age of onset of diabetes, current therapies and comorbidities were registered.
Hypertension was diagnosed when the subjects were taking antihypertensive drugs, Coronary Artery Disease when they suffered from angina or myocardial infarction, and Peripheral Arterial Disease when the Ankle-Brachial Index was less than 0.90.
Subjects were also classified according to their previous working activities, sports, and hobbies. Home and office work was considered light work; farm, factory, and building industry work was considered heavy work.
Both shoulders were evaluated according to a standard protocol previously described by Papatheodorou et al. [27].
Maximal supraspinatus tendon (SST) thickness was measured on a longitudinal view, just in front of the lateral part of the humeral head [27]; the thickness of the long head of the biceps tendon (BT) was measured within the bicipital groove [28].
The presence of inhomogeneous hypo- or hyperechoic thickening of the tendon, diffuse or focal, associated with loss of the normal fibrillar pattern and/or irregularity of the tendon margins, was interpreted as a sign of degeneration.
In addition, in the BT only, which is provided with a synovial sheath, the appearance of an anechoic area around the tendon, associated or not with synovial proliferation, was considered a sign of tenosynovitis [29].
SST, IST, and SBT tears were classified as follows: 1) Partial thickness tear: focal hypoechoic discontinuity with irregular margins at the bursal or articular side, or located intratendinously. A bursal-side tear produces flattening of the bursal surface, with loss of the superior convexity of the tendon, while an articular-side tear appears as a distinct hypoechoic or mixed hyper-hypoechoic defect of the articular surface, abutting the articular cartilage [27].
2) Full thickness tear: full defect in the tendon from the bursal to the articular margin. Hypoechoic fluid may fill the tear, with loss of the normal outward convexity of the tendon at this site. Moreover, owing to the pressure applied with the transducer, the deltoid muscle can abut against the humeral head [27].
Taking into consideration the distance between the ends of the tendon, tears were divided into small (< 1 cm), large (more than 1 but less than 3 cm), or massive (> 3 cm).
For simplicity's sake, we considered: a) Minor tears (including partial thickness tears and small full thickness ruptures) and b) Major tears (including large full thickness tears and massive ruptures).
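The minor/major dichotomy defined above amounts to a simple decision rule; the following hypothetical Python snippet makes it explicit (the function name and the handling of the exact 1 cm and 3 cm boundaries are our assumptions, as the text leaves the boundaries open).

```python
from typing import Optional

def classify_tear(full_thickness: bool, gap_cm: Optional[float] = None) -> str:
    """Map a sonographic tear finding onto the study's Minor/Major scheme.

    gap_cm is the distance between the tendon ends, meaningful only for
    full thickness tears; partial thickness tears are always 'Minor'.
    """
    if not full_thickness:
        return "Minor"          # partial thickness tear
    if gap_cm < 1.0:
        return "Minor"          # small full thickness rupture (< 1 cm)
    if gap_cm <= 3.0:
        return "Major"          # large full thickness tear (1-3 cm)
    return "Major"              # massive rupture (> 3 cm)

assert classify_tear(False) == "Minor"
assert classify_tear(True, gap_cm=0.7) == "Minor"
assert classify_tear(True, gap_cm=2.0) == "Major"
```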
For the BT, a partial tear was reported as a focal hypoechoic discontinuity, with irregular borders, located intratendinously, while non-visualization of the tendon within the bicipital groove, with an associated bulbous appearance of the biceps muscle, was reported as a complete tear [27].
Involvement of the SAD bursa was identified when an accumulation of anechoic fluid, with or without hypoechoic swelling of the synovium, appeared within it; it was graded subjectively as normal (distension < 1 mm), slightly increased (1-2 mm), or clearly increased (> 2 mm).
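The subjective grading of bursa distension can likewise be written as a threshold function. Below is a minimal sketch with the cut-offs taken from the text; the handling of the exact 1 mm and 2 mm boundaries is our assumption.

```python
def grade_sad_distension(distension_mm: float) -> str:
    """Grade SAD bursa fluid distension by thickness in millimeters."""
    if distension_mm < 1.0:
        return "normal"
    if distension_mm <= 2.0:
        return "slightly increased"
    return "clearly increased"

print(grade_sad_distension(1.5))  # -> "slightly increased"
```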
Data analysis
Data are reported as mean and standard deviation (mean ± SD) for continuous variables, whereas categorical and dichotomous variables are reported as frequencies and percentages. The tendon thickness and the percentage of US abnormalities found in the shoulders of diabetic subjects were compared with those observed in controls. Afterwards, the differences in tendon thickness between the dominant and the non-dominant side and according to diabetes duration were analyzed.
The significance level was set at p < 0.05. The two-sample Student's t-test was used to compare continuous variables when the distribution of the data was normal; the Wilcoxon rank sum test was used otherwise. The χ² test was used to evaluate associations between categorical data.
All analyses were done using SAS statistical software, release 8.1.
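Although the authors used SAS, the same decision logic, choosing between the t-test and the Wilcoxon rank sum test depending on normality, can be sketched in Python; the Shapiro-Wilk check and the example counts are our assumptions, since the paper does not state how normality was assessed.

```python
from scipy import stats

def compare_continuous(x, y, alpha=0.05):
    """Two-group comparison: Student's t-test if both samples look normal
    (Shapiro-Wilk, an assumed check), otherwise the Wilcoxon rank sum test."""
    normal = (stats.shapiro(x).pvalue > alpha) and (stats.shapiro(y).pvalue > alpha)
    if normal:
        return stats.ttest_ind(x, y)
    return stats.ranksums(x, y)

# Categorical association, e.g., presence of tears by group (hypothetical counts)
table = [[30, 18], [7, 25]]
chi2, p, dof, expected = stats.chi2_contingency(table)
```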
Results
Demographic and clinical characteristics of diabetic and control subjects are presented in Table 1.
The subjects did not differ in age, sex, or previous working activity. None of the enrolled subjects practised sports or had hobbies involving the use of the shoulder joints. Cardiovascular complications (Hypertension, Coronary Heart Disease) were, as expected, observed more frequently in diabetics.
The duration of diabetes was less than 10 years (mean 6.8 ± 1.9 years) in 34 subjects and more than 10 years (mean 14.7 ± 2.2 years) in 14. In all participants, HbA1c levels were < 8.0%. SST and BT thickness was significantly greater in diabetics than in controls, both on the dominant and the non-dominant side (Figure 1; Table 2).
In both groups, tears were most frequently observed in the SST (Figure 2). The percentage of both minor and major tears was higher in diabetics, but the difference was significant only for SST minor tears. Massive tears of the SST were always associated with involvement of the IST. No tears of the SBT were registered.
The prevalence of degenerative abnormalities in rotator cuff tendons and BT was significantly higher in diabetics.
Tenosynovitis of the long head of the BT and SAD bursitis were more frequent in diabetics (Figure 3; Table 2).

Discussion

Several studies have shown that the prevalence of rotator cuff tendon tears is increased in elderly subjects, with or without shoulder pain or movement limitation [1-9]. Moreover, it is well known that patients with diabetes are at increased risk for shoulder pathologies, such as frozen shoulder or rotator cuff tears [12-22]. In addition, after surgical repair, diabetics show a restricted range of motion of the shoulder [23] and a higher incidence of re-tears, an observation that can be related to the intrinsically poor quality of the tissue being repaired [24]. The results of our study suggest that, also in asymptomatic subjects, age-related rotator cuff tendon changes are more common in diabetics. This conclusion is supported by the observation of a higher prevalence of tears and of degenerative phenomena in diabetics, as well as by the increased thickness of the supraspinatus and biceps tendons, which is due to the abnormal storage of collagen layers in the tissue and is therefore itself an expression of degenerative change [20].
These observations are of clinical relevance, because it has been shown by Yamaguchi et al. [30,31], in a 2.8-year follow-up study, that pain and limitation in functional ability can develop in a large percentage (50%) of people with asymptomatic tears at baseline.
Another clinical observation arising from our study is the increased prevalence of pathological findings on the dominant side, which supports the theory that overuse may have a significant pathogenetic role [9,37,38].
However, due to the difficulty of obtaining definite information about work-related lesions, we were not able to differentiate work-related injuries from intrinsic diabetes-induced changes in the rotator cuff.
No significant correlation was observed with the duration of diabetes. The lack of association may be explained by the difficulty in establishing the age of diabetes onset. Indeed, subjects could have had glucose intolerance or mild NIDDM for a significant period of time before the clinical diagnosis of diabetes.
From the pathogenetic point of view, it is likely that, in individuals with no history of significant trauma, rotator cuff tears are mainly caused by intrinsic degenerative changes related to aging, vascular factors, or mechanical impingement [6,34,36,39,40].
The biochemical mechanisms of tendon degeneration are similar in ageing and diabetes.
The most important abnormality is the non-enzymatic glycosylation of collagen with the formation of advanced glycation end products (AGEs) [41-43].
The spontaneous condensation of glucose and metabolic intermediates (e.g., triose phosphates, glyoxal, and methylglyoxal) with free amino groups in lysine, hydroxylysine, or arginine leads to a covalent bond between the sugar and the amino acid (the Amadori product), and subsequent reactions give rise to the formation of AGEs.
AGEs affect the physical and chemical properties of proteins, increasing the amount of intermolecular collagen cross-links. This results in a reduction of the solubility of collagen, which becomes tougher, stiffer, and weaker, loses its elasticity, and is more likely to tear [44].
Another mechanism by which AGEs may exert noxious effects is represented by their action on specific receptors (RAGE), which have been identified on the membrane of chondrocytes, tenocytes, and fibroblasts [45,46].
Ligand engagement of RAGE triggers cell-specific signalling, resulting in enhanced generation of reactive oxygen species and in activation of the nuclear transcription factor NF-κB [47,48]. This, in turn, accelerates AGE cross-link formation in collagen fibers [49,50] and leads to sustained upregulation of proinflammatory mediators and adhesion molecules and to a dysfunctional cell phenotype [48,51,52].
In diabetes, RAGE, Vascular Endothelial Growth Factor, and cytokines are overexpressed [53-56], which may explain why diabetics show an increased prevalence of lesions and inflammatory reactions.
Besides the AGE-mediated pathogenetic mechanism, microvascular disease may lead to tissue hypoxia, resulting in the production of oxygen free radicals, which, in turn, leads to overproduction of growth factors and cytokines [57-59].
Some limitations of this study must be acknowledged. Pain and functional limitation were evaluated on a self-report basis, which is highly subjective. Active and passive ranges of motion of the shoulder were not measured. As a matter of fact, functional impairment or pain in the extreme degrees of movement could be present in these subjects, who were still able to perform usual activities of daily living but who could avoid, spontaneously or unconsciously, some disturbing tasks. Spurs or other bone abnormalities were not taken into account in this study, which was mainly aimed at tendon evaluation. Therefore, we cannot state whether individuals with diabetes had a higher prevalence of lesions that could be treated surgically to prevent impingement-type tears. Moreover, the examiner was not blinded to whether the individuals had diabetes.
Finally, it must be acknowledged that US investigation has limited reliability in detecting partial thickness tears and intra-articular tears of the biceps tendon [7,60,61].
Conclusions
Our results demonstrate that NIDDM worsens tendon degeneration in aged subjects. US imaging, besides clinical evaluation, is a useful tool for discovering, in pre-symptomatic stages, the subjects at risk, who may develop shoulder pathologies. In these subjects, who represent a growing segment of the elderly population, careful metabolic control by means of diet and antidiabetic drugs is recommended, and the progression of tear size should be monitored over time.
"year": 2010,
"sha1": "6c8ae5bc8fe9789d8ee3069c5d81215e4d9a8cc5",
"oa_license": "CCBY",
"oa_url": "https://bmcmusculoskeletdisord.biomedcentral.com/track/pdf/10.1186/1471-2474-11-278",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "77298b1ac3861fc374abae2235cc76aece71f7a9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The emergence of temporality in attitudes towards cryo-fertility: a case study comparing German and Israeli social egg freezing users
Assisted reproductive technologies are increasingly used to control the biology of fertility and its temporality. Combining historical, theoretical, and socio-empirical insights, this paper aims to expand our understanding of the way temporality emerges and is negotiated in the contemporary practice of cryopreservation of reproductive materials. We first present an historical overview of the practice of cryo-fertility to indicate the co-production of technology and social constructions of temporality. We then apply a theoretical framework for analysing cryobiology and cryopreservation technologies as creating a new epistemic perspective interconnecting biology and temporality. Thereafter, we focus on the case of 'social egg freezing' (SEF) to present socio-empirical findings illustrating different reproductive temporalities and their connection to the social acceptance of and expectations towards the practice. SEF is a particularly interesting case as it aims to enable women to disconnect their reproductive potential from their biological rhythms. Based on 39 open interviews with Israeli and German SEF users, the cross-cultural comparative findings reveal three types of attitudes: postponing motherhood/reproductive decisions (German users); singlehood and "waiting" for a partner (Israeli and German users); and the planning of and hope for multiple children (Israeli users). For theory building, this analysis uncovers temporality formations embedded in gender and reproductive moral values, including the 'extended present', 'waiting', and 'reproductive futurism'. We conclude by discussing the contribution of our findings to advancing the theoretical framework of 'cryopolitics', highlighting the theoretical implications and importance of gendered and cultural imaginaries in (re)constructing medical technological innovations and related temporalities.
Introduction
In March 2020, with the first outbreak of the COVID-19 pandemic leading to the first lockdown in Israel, Maayan Adam, an Israeli celebrity and television presenter, published a short column in "Israel Hayom" (a daily newspaper) in which she revealed her decision (after a year of deliberation and hesitation) to undertake the procedure of social egg freezing. In the article Adam writes: "That night, when they informed that Israel is closed due to war [in ]-I heard one thing only: "you are out of time sweetheart". What you did not manage to do will no longer suffice in the near future. But learn for next time: [freeze your eggs] Now, (…) not later, not even in one minute" (Adam, 2020). This quotation, presented in a unique 'eye-opening' context, highlights the decision and use of social egg freezing as embedded in perceptions of time and human temporality. First, it includes ideas regarding the time of procreational processes and their finality, highlighting dimensions of irreversibility. Second, it reflects ideals of timing and notions of the 'right time' for reproduction that rest on morally loaded understandings of the desirable course of individual lives, leading to concerns that one is 'not on time'. Third, it reflects a sense of urgency, of being 'out of time', of time as 'fleeting' or 'ticking away'. Finally, it illustrates future orientation and expectations, also connected to ideas of responsibility and control.
Indeed, time plays a pivotal role when it comes to fertility and reproduction, especially since female fertility is characterized by a menstrual cycle and repetitive rhythm (Thompson, 2005) within a biologically limited time-frame, reflected in the common metaphor of the "biological clock" (Amir, 2006). While assisted reproductive technologies were initially aimed at treating and overcoming biological infertility, such technologies are nowadays increasingly used to control also its temporal characteristics, aiming to expand the time-frame of fertility and procreation (Waldby, 2015). Cryopreservation is a key technology for this practice.
Cryopreservation is the use of very low temperatures (most commonly −196 °C) to preserve and store structurally intact living cells and tissues (Gosden, 2014). At such low temperatures, all biological activity stops, including the biochemical reactions that lead to cell death and DNA degradation (ibid). Cryobiology has become a central technique in the contemporary life sciences, as more and more types of tissue and cellular material can be frozen and thawed with almost no loss of vitality (Waldby, 2015). This is particularly relevant when it comes to assisted fertility practices, as it allows the preservation and further use of different materials (sperm, eggs, gonadal tissues, and embryos) for reproductive purposes (Lemke, 2019). We refer to these applications as cryo-fertility. Cryo-fertility has become a key technology of fertility preservation for patients (both male and female) facing the risk of sterilization/infertility, for example due to cancer treatments, and now also for 'social' reasons.
Pertinent academic debates on cryo-fertility and temporality highlight: the way such technologies halt the temporality of reproductive biology and the related social and ethical implications (Landecker, 2010; Lemke, 2019, 2021); the use of such technologies to control fertility while synchronizing different temporalities (e.g., social and biological) (Baldwin, 2019; Brown & Patrick, 2018; Kroløkke, 2019; Waldby, 2015); and the analysis of the criticism directed at such technologies as reflecting moral and normative perceptions concerning "appropriate" timetables and life-course ideals (Rimon-Zarfaty & Schweda, 2019). What is still missing in this debate is an account of the socio-cultural aspects directing such reproductive temporalities. The impetus behind this research was therefore to embrace a cross-cultural perspective in order to understand whether and how different socio-cultural contexts create and reflect different time constructions in the context of cryo-fertility.
In this paper we aim to expand our understanding of the way temporality emerges and is negotiated in the contemporary practice of cryopreservation by focusing on cryo-fertility, and especially social egg freezing ('SEF'). Starting with a brief historical overview of the main stages of cryo-fertility, we indicate the intersection or co-production of technology and societal norms. In what follows, we theoretically reflect on cryo-technologies as creating new epistemic perspectives interconnecting biology and temporality, while drawing insights regarding temporality in contemporary egg freezing practice. In the next section, we present empirical findings from a qualitative interview study of SEF users in Germany and Israel to illustrate different reproductive temporalities, the way these are negotiated in a specific application of cryo-fertility, and how they are reflected in different socio-cultural contexts. The analysis of the interviews highlights three emerging formations of temporality that inform attitudes towards SEF: the 'extended present', 'waiting', and 'reproductive futurism'. In the final concluding section, we focus on the theoretical concept of 'cryopolitics' and explore the theoretical implications of a gendered and culturally sensitive reconstruction of medical technological innovations and related reproductive temporalities, and how these can inform philosophical and bioethical debates on cryobiology.
Historical stages of cryo-fertility as an example of the co-production of technological and societal interests
While the impact and implications of biotechnological innovations such as cloning, organ transplantation, tissue generation, and artificial insemination have attracted much scholarly attention and academic debate, the history and influence of cryobiology and cryopreservation technologies (which are fundamental to each of those innovations) within the life sciences have been, to a large extent, neglected (Lemke, 2019; Radin, 2013). These currently attract various scholars due to the particular epistemic and social implications of 'freezing as suspension' (see below). We will summarize here some basic stages of technology development to indicate how such development can be understood as a co-production of scientific and societal efforts. Interestingly, some of the most important developments and breakthroughs in cryopreservation technologies took place in the context of reproductive medicine and related scientific experiments (Gosden, 2014), particularly in the fields of animal agriculture and breeding (Kroløkke, 2019; Waldby, 2015).
Up to the early 1900s, all attempts to preserve cells at subzero temperatures had failed, and the task was therefore deemed almost impossible (Gook, 2011). Starting with the collection of sperm for artificial insemination by donor in the first half of the twentieth century, the question of 'preservation' became more and more relevant (Daniels & Golden, 2004). In 1937, the biologist B.L. Luyet formulated a scientific protocol elaborating the principles of cell cryopreservation. While his experiments were only partially successful, he advanced the field from being merely speculative onto a proper scientific foundation, thus marking an important milestone (Gosden, 2014). Luyet was also the first scientist to experiment with ultra-rapid cooling rates, already in 1937, and, in 1938, to identify the beneficial effect of dehydration prior to freezing (Gook, 2011).
Another important milestone in the development of cryopreservation techniques was reached in the late 1940s by a research team consisting of Sir Alan Parkes (a reproductive biologist), the cryobiologist Audrey Smith, and their graduate student Christopher Polge, who later became a prominent researcher in the field. Interestingly, the team focused on laboratory experiments in sperm cryopreservation (chicken, rabbit, and human spermatozoa). They chose to focus on spermatozoa both because their movement served as a natural viability indicator and because of the practical value for the cattle breeding industry. The experiments resulted in the accidental discovery of the cryoprotective properties of glycerol (Gosden, 2014; Polge et al., 1949), constituting an important turning point in cryobiology in general and cryo-fertility in particular (Gook, 2011; Gosden, 2014).
During the 1950s, this important discovery and the resulting possibility of freezing human sperm led to the establishment of sperm banks, mainly for men wishing to preserve their sperm before cancer treatments or before undergoing vasectomy (Gosden, 2014). In 1954, the first birth following the use of cryopreserved human spermatozoa was reported (Gook, 2011). Later, a whole US industry of sperm banking, and, triggered by positive eugenics, a sperm bank of "geniuses", shaped the new picture of "high-quality sperm", always available (Daniels & Golden, 2004).
Another important turning point in the development and application of cryo-fertility was marked by in vitro fertilization (IVF). When IVF was first introduced in the late 1970s, the cryopreservation of in vitro embryos or eggs was still unavailable. The early treatments therefore followed the natural female menstrual cycle, in which (usually) only one oocyte was available for insemination. Very soon, fertility experts raised the idea of improving the success rates of the treatment by increasing the number of available oocytes, i.e., by using hormonal treatment to generate ovarian stimulation. This procedure, which later became standard practice, led to the creation of multiple embryos per treatment cycle (Gosden, 2014). The first pregnancy from a frozen human embryo was reported in 1983, and the first live birth from IVF using a cryopreserved embryo was reported in 1984 (Wang & Sauer, 2006). However, the creation of superfluous embryos also created "an acute ethical dilemma" (ibid, p. 262). Debates started over whether to transfer only one or a few embryos and to discard the rest. This raised the issue of the moral status of the embryonic entity (Gosden, 2014; Wang & Sauer, 2006). Hence, Western countries developed a wide range of legislation regulating the creation (e.g., in terms of how many embryos can be produced in vitro) and the acceptable usage of such entities, as well as strategies for overcoming the problem of embryo destruction (Hashiloni-Dolev, 2013) by freezing the fertilized oocyte at a pre-embryo stage. The possibility of freezing embryos has in turn led to ethical and legal debates concerning whether the manipulation of the beginning of human life is morally acceptable (Hashiloni-Dolev & Schicktanz, 2017), whether there are potential health risks for the future child (Michelmann & Nayudu, 2006), and how cryopreservation might limit success rates (Gosden, 2014; Michelmann & Nayudu, 2006; Wang & Sauer, 2006). Further issues (which became more apparent with the rise of embryonic stem cell research) included custody, ownership, and responsibility (ESHRE, 2001).
Another milestone in the historical evolution of cryo-fertility practices was the technological development of 'vitrification', or so-called 'fast freezing'. The earlier 'slow freezing' is a gradual cooling technique which slowly cools the cell/tissue but leads to ice crystals which can harm the tissue's or cells' potential after thawing. By contrast, the vitrification technique combines ultra-rapid cooling with the use of high concentrations of cryoprotectants [1], allowing a rapid entry into a glass-like state while avoiding crystallization. Vitrification therefore increases the chances that the cell/tissue will be viable and functional when thawed. This is especially important for egg freezing, since eggs are large cells with a high content of water (Gook, 2011; Gosden, 2014).

While embryo freezing quite quickly became a standard practice in IVF clinics, oocytes are more difficult to cryopreserve (Gosden, 2014). During the mid-1980s, there were a few reports of viable pregnancies using frozen oocytes [2], but then the practice almost disappeared for a decade (ibid), mainly due to inconsistent results and concerns regarding embryonic development and the health of the future children (Gook, 2011). During the 1990s, egg freezing continued to be of interest to several researchers, who focused on improving freezing methodologies and post-thaw survival rates as well as the normality of embryo development from a previously frozen egg (ibid) [3].

This type of research and development can also be identified as influenced by politics and ethics. For example, substantial research in this field was promoted by Italian researchers in the context of Italy's Law 40, approved by Parliament in 2004. The law restricted PGD (pre-implantation genetic diagnosis) along with embryo freezing, embryo research, and egg donation (Baldwin et al., 2014; Gook, 2011; Martin, 2010). The restriction of embryo freezing has been identified as encouraging further technical advances in oocyte cryopreservation (Martin, 2010). In other places, egg freezing was perceived as sidestepping controversies around embryo disposition, including custody issues, "orphan" embryos, and issues related to embryonic research and the exploitative dynamics involved in egg donation (ibid).

[1] Cryoprotectants are agents (i.e., chemical compounds such as glycerol, ethylene glycol, etc.) used to prevent ice formation and damage to cells and tissues during cryopreservation by increasing the concentration of the solutes (Kar et al., 2019).
[2] In 1986, the first birth from cryopreserved eggs (using a slow freezing method) was reported. The first pregnancy and birth from vitrified oocytes were reported in 1999 (Gook, 2011).
[3] Apart from the crystallization of the oocytes, the second challenge faced by scientists was the difficulty in achieving fertilization due to zona pellucida hardening caused by the cryopreservation process. This challenge has been circumvented by the development of ICSI (intracytoplasmic sperm injection), which improved the fertilization rates of previously frozen eggs (Baldwin et al., 2014; Wang & Sauer, 2006).
Towards the end of the 1990s, egg freezing appeared as a new option for fertility preservation but was mainly used for medical reasons, i.e., for women who would potentially sustain partial or total loss of fertility (e.g., due to cancer treatments) (Wang & Sauer, 2006). In 1994, the world's first egg bank was established at the Royal Women's Hospital in Melbourne, Australia, to preserve fertility for women with malignant diseases (Gook, 2011).
The most recent milestone in the context of egg freezing was reached in 2012, when both the American Society for Reproductive Medicine (ASRM) and the European Society of Human Reproduction and Embryology (ESHRE) decided to lift the experimental label from the procedure. This professional-ethical move paved the way for using the procedure to preserve fertility for so-called 'social' reasons, what has been recognized as 'non-medical' or 'social' egg freezing (ESHRE, 2012; ASRM & SART, 2013; ASRM, 2018). Egg freezing is therefore increasingly used to enable healthy women to prolong their fertility. In the US, for example, reports from the Society for Assisted Reproductive Technology indicate that in 2013 (the year following the ASRM approval), reporting clinics performed approximately 5000 cycles of egg freezing. Those numbers had already doubled by 2018, when approximately 11,000 cycles were reported (Birenbaum-Carmeli et al., 2021).
Between cryo, biology and temporality
Cryopreservation technologies rely on the ability to preserve biological materials at extreme sub-zero temperatures. They therefore encompass a unique configuration of temperature and temporality (Oikkonen, 2020). Indeed, several scholars share the interpretation of cryo-technological developments as introducing new perceptions of time, biology, and their interrelation, calling for empirical investigation (Kroløkke, 2019, 2021; Kroløkke et al., 2020; Lemke, 2019).
In her research on cryopathy and cryonics, Kroløkke (2021, pp. 35-36) offers a multimodal domain entangling different types of cold, that is, 'natural' (ice) and 'artificial' (cryo). Following this conceptualization, in the context of cryo-fertility, and especially egg freezing, 'natural' ice crystal formation has been positioned as a threat (to be avoided by the usage of vitrification and cryoprotectants). In this sense, the technoscientific possibility of replacing the 'natural' with 'artificial' cold (cryo) becomes, or is understood as holding, value, as it (allegedly) promises to challenge biological temporality.
Reflecting upon the common metaphor of the 'biological clock', which encompasses ideas of 'natural' temporal limitations, the development of cryobiology (i.e., the ability to generate artificial cold) can be perceived as challenging biological temporality, as reshaping the boundaries between life and death, health and illness, youthfulness and aging, mortality and generativity (Katz et al., 2020; Kroløkke, 2021), and thus as inspiring or introducing new epistemological perspectives. As Hannah Landecker (2010) observes, cryobiology changes what it means to be biological. Formerly, being 'biological' meant being embedded in the circle of continuous life processes, including being born and dying. Now being biological, cellular, or alive also means being able "to be suspendable, interruptible and storable, freezable in parts" (ibid, p. 217). In other words, the use of cryobiology, that is, the ability to stop and start biology with its bounded temporality or clocks, to arrest and suspend cellular activity and reanimate it at some future date, calls for a different perspective on the relationship between biology and time and on the ability to synchronize and facilitate temporality (Kroløkke, 2019; Waldby, 2015). When it comes to temporality, the "plasticity of living matter" (Landecker, 2010, p. 219), and the manipulation of this plastic matter while halting the organism at a certain state, result in "things living differently in time" (ibid). Therefore, freezing technologies have been referred to as allowing 'time travel' (Katz et al., 2020), as a form of biological 'time machine' (Kroløkke, 2019) or 'temporal prosthesis' (Radin & Kowal, 2017), and thus as enabling new ontologies otherwise unavailable (Oikkonen, 2020).
Within this context, cryobiological technologies have further been identified as creating a new form of decontextualized 'latent life' (Radin, 2013) or 'suspended life'. By enabling vital processes to be kept in a liminal state, cryo-technologies create a new condition in which biological entities are neither alive nor dead (Lemke, 2019): "an inert life without (apparent) change" (Lemke, 2021, p. 11). The terminology of 'suspense' and 'suspension' was also offered by the medical anthropologist Klaus Hoeyer (2017). According to Hoeyer, the term 'suspense' holds several meanings in this context: the putting of biological parts on hold, their suspension from the body from which they were taken, and the resulting suspension of life and death (ibid).
The state of latency or suspense in itself also holds an integral temporal orientation, as it seeks to secure the past, in the sense of preserving an entity's state at a given point in time long after this point has passed, while being oriented towards the future by keeping such biological materials available for future use (Kroløkke, 2019). Reflecting on the concepts of time and reversibility, Lemke (2021) further relates to 'suspended life' as introducing a new time configuration which extends the present towards the future. According to him, the expansion of the duration of the present might in turn "delay changes or postpone necessary decisions" (ibid, p. 9) by offering reversibility. At the same time, cryobiology can also be analyzed as representing a scientific effort to manage the future (Radin, 2013), reflecting a more comprehensive 'regime of anticipation' (Adams et al., 2009). This regime, which currently guides various technoscientific and biomedical practices, involves a temporal epistemological dimension perceiving the future as open and incidental while at the same time depending on present actions (Lemke, 2019, p. 453). Within this 'regime of anticipation', the "sciences of the actual" are replaced by a predictive or even "speculative forecast" (Adams et al., 2009, p. 247). These modes of anticipation are closely linked to emotions and rationalities of responsibility, prevention, and preparedness: they involve a complex set of concerns, fears, and hopes "linking epistemic orientations to moral imperatives" (Lemke, 2019, p. 453). A similar idea was highlighted by Hoeyer (2017), who claimed that the suspension of biological decay via cryopreservation results in the creation of "a space for action in which new social forms are built, new property managements emerge, and new hopes and concerns can flourish" (p. 209) (for relevant discussion see: Kroløkke et al., 2020). Thus, the frozen material is constructed as a form of 'promissory capital' (Lemke, 2019; Thompson, 2005), and the ability to freeze biological material draws on the promise or hope of future revival. It therefore also becomes a type of insurance policy: a protection of life against death (Waldby, 2015).
Finally, the usage of 'artificial' cold (cryo) for reshaping or challenging life, its boundaries, and its temporality can be further conceptualized using the theoretical framework of 'cryopolitics'. When studying the political and social impact of the life sciences and biomedical practices and the way life is extensively regulated by technoscientific means, social scientists often draw on the Foucauldian concept of 'biopolitics' (Foucault, 1978, 2003; for relevant discussion see: Lemke, 2019). Within this general theoretical framework, and due to the rapid developments in cryo-technologies, the term 'cryopolitics' (Radin & Kowal, 2017) has been suggested to conceptualize the politics of low temperatures, suggesting that cryo-technologies have become a main biopolitical tool of the twenty-first century (ibid; Kroløkke, 2019). Cryopolitics refers to the socio-political aspects and related mechanisms or "strategies of generating, regulating and processing 'suspended life'" (Lemke, 2019, p. 454). While drawing on the concept of biopolitics, Radin and Kowal (2017, p. 6) emphasize the important intensification and even intervention represented by the concept of 'cryopolitics'. According to them, while the Foucauldian concept of biopolitics pertains to the way power "make live and let die", the concept of cryopolitics, which draws on a more recent scientific discourse and practice while suspending animation and action, produces a zone of existence emphasizing the centrality of "make live and not let die" (for relevant discussion see: Kroløkke, 2019; Lemke, 2019, 2021; Kroløkke et al., 2020). In this sense, cryopreserved materials (that is, organisms or bits of their bodies) are exposed to "a new onto-political regime, being neither fully alive nor dead" (Lemke, 2019, p. 455). According to Radin and Kowal (2017, p. 12), cryopolitics therefore focuses on the usage of 'artificial' cold (cryo) to reorient life, as well as related perceptions concerning what life is, in time.
Temporality in contemporary practice of cryopreservation: The case of 'social egg freezing' (SEF)
When it comes to social egg freezing, cryopreservation seeks to facilitate reproductive plasticity (Kroløkke, 2019). It aims to enable women to disconnect their reproductive potential from its biological rhythms in the hope of securing their reproductive future, i.e., by providing 'young' eggs for later life. Egg freezing thus represents a specific example of the 'regime of anticipation', with the frozen oocytes serving as a unique 'promissory capital' in the form of post-menopausal reproductive potential, reconstructing reproductive temporal horizons. The procedure may therefore be analyzed as a technological attempt to stop women's 'biological clock', to 'freeze time', or else to synchronize different temporal orders. As described by Kroløkke (2019), the case of oocyte freezing entangles "the somatic bodily temporalities (growing old/becoming infertile) with institutional temporalities (how society structures procreation in women's lives), normative temporalities (when a woman is viewed as too young or too old to procreate) and affective temporalities (hoping to become a parent or fearing it is too late)" (Kroløkke, 2019, p. 530). Indeed, previous empirical research focusing on SEF users demonstrated how freezing seemingly becomes a form of biological 'time prosthesis' (ibid; Kroløkke et al., 2020), aimed at reconciling 'social' and 'biological' temporalities (Waldby, 2015).
Reproductive decisions regarding time, timing and life planning are therefore not only embedded in somatic and biological 'clocks', but also reflect multiple perceptions of time that are normative and socially constructed (Nowotny, 1992). In other words, individual life is not just a biological process that follows a certain biological clock, but a sequence of phases and thresholds reflecting socially constructed life course and biographical ideals, as well as generational roles and statuses reflecting collective schedules (Rimon-Zarfaty & Schweda, 2019).
Within this context, SEF can be perceived, on the one hand, as a type of reproductive management reflecting normative ideals such as empowerment through control over the temporality of reproduction (Robertson, 2014). On the other hand, it also involves normative uncertainty about the 'right' timing. Public and scholarly critique evoked by SEF demonstrates the existence of particular social expectations towards women, motherhood, and the "ideal" life course. This frames SEF as a deviance from collective temporal reproductive constructions according to which pregnancy is supposed to take place at a certain age or during a particular stage of the life course (Baldwin et al., 2014; Bozzaro, 2018; Bühler, 2015; Weber-Guskar, 2018). SEF has therefore triggered a controversy around "late" or "old" mothers (for counterarguments see: Bernstein & Wiesemann, 2014; Smajdor, 2009). Linked to this, concerns have been raised regarding the alleged harm to and burden on children of "old mothers". Further concerns include the latter's ability to fulfil customary parental roles and responsibilities, the burdens created by early filial care responsibilities, etc. (for relevant discussion see: Rimon-Zarfaty & Schweda, 2019; Kroløkke et al., 2020).
However, little is known about whether and how temporality is concretely constructed in the case of SEF. A leading research question for us is therefore how temporality emerges and takes shape in this context. Our case study elaborates theoretical links between socio-cultural and normative meanings of time while examining the implications for those affected, that is, the women using SEF. While our interviewees did not, during the interviews, relate to the cryo-technology and its properties as such, nor to the related theoretical conceptualizations of human biology and temporality (apparent in the scholarly debates), they nevertheless highlighted their everyday expressions and representations. We therefore focus on those 'lay moralities' (Raz & Schicktanz, 2009a, b) and their as yet neglected cultural interpretations. By analyzing the views and experiences of SEF users in Germany and Israel, we attempt to gain insights into different reproductive temporalities, how these are morally guided by ideals, hopes, or concerns, and the ways they may differ between societies.
Reproductive temporalities and SEF in Germany and Israel
In the following, we present results from an empirical study analyzing 39 qualitative personal in-depth semi-structured interviews (conducted during 2018-19): 23 interviews were conducted with Israeli SEF users and 16 with German SEF users. The interviews lasted 1.5-3 h and were conducted in a location preferred by the interviewees. Interviews were conducted in Hebrew in Israel and in German or English in Germany (as preferred by the interviewee). A similar interview guide was used in both countries. Interviews in Hebrew and English were conducted by Nitzan Rimon-Zarfaty. Interviews in German were conducted by a research assistant (Ms. Lisa-Katharina Sismuth). When preferred and requested by the interviewee, the interview was conducted via telephone or an online platform. Interviewees were asked about their motivations for using egg freezing (including timing issues and family planning), their main considerations, and the perceived advantages and burdens of SEF. They were also asked how they experienced the interaction with the medical staff and the consultation they received prior to and during the procedure. They were further asked about the experience of using SEF, the reactions of their social environment, their opinion regarding the current regulation of SEF in their country, and the overall public debate.
Interviewees were recruited using recruitment flyers placed in fertility clinics (upon the clinics' consent), via relevant internet forums (following the consent of the forums' managers) and using a snowball sampling method.
Interviews were tape-recorded (with the participants' permission) and fully transcribed. Quotes were later translated into English; interviews conducted in German were fully translated into English. Transcripts were analyzed to uncover discursive themes and categories of themes recurring within and across national groups (Denzin & Lincoln, 1994). The coding and thematization process was generally based on the constructivist version (Charmaz, 2002) of the grounded theory approach to data analysis (Glaser & Strauss, 1967; Strauss & Corbin, 1990). Emergent topics identified through inductive coding were added to the analysis and compared between the different national-cultural groups. This enabled us to detect relevant considerations, ideas, and moral arguments that can be further interpreted in relation to reproductive temporalities and the related cultural scripts. As an international (German-Israeli) and interdisciplinary research team, drawing on collaboration across national research cultures, we discussed and peer-reviewed our interpretations and analyses for reflexive insights.
Our cross-cultural comparative research framework was aimed at deconstructing the often implicit, taken-for-granted assumptions concerning time, timing, and planning, and at unveiling their underlying cultural narratives. Previous comparative research on bioethical issues between Israel and Germany identified both countries as representing two opposing regulatory frameworks and professional cultures in biomedicine in the context of the beginning and the end of life (e.g., Hashiloni-Dolev, 2007; Hashiloni-Dolev & Shkedi, 2007; Raz & Schicktanz, 2009b). Both contexts are of high relevance for human temporality, namely life course, planning, and 'timing'. In relation to reproductive technologies, the German regulatory framework (originating in the Embryo Protection Act (EPA)) has been identified as rather restrictive, while the Israeli regulation has been identified as permissive (Hashiloni-Dolev, 2007; Hashiloni-Dolev & Shkedi, 2007). Moreover, Israel is one of the first countries to have officially regulated SEF. The regulation, issued in September 2010, allows freezing the eggs of healthy women aged 30-41 years and the implantation of fertilized eggs until the age of 54. The procedure is limited to 20 frozen eggs or 4 cycles and must be paid for out of pocket. In Germany, by contrast, there is no formal legal or regulatory framework for SEF, which is nevertheless allowed and performed (Rimon-Zarfaty et al., 2021). The German EPA, which prohibits egg donation, permits the freezing of fertilized eggs only at the pronuclear stage (while limiting the cryopreservation of embryos), and therefore does not restrict the freezing of unfertilized eggs (Robertson, 2014).
Overall, our cross-cultural comparison, which allowed for reflecting on similarities and differences across societies, reveals three main types of anticipatory motivations for using SEF: postponing reproductive decisions, "waiting" for a partner, and using SEF in the hope of, or while planning for, multiple children. These three motivations are neither exhaustive nor mutually exclusive, and can thus co-occur in a single interviewee. We will present and discuss each of these motivations and offer a theoretical conceptualization of the emerging temporalities: the 'extended present', 'waiting', and 'reproductive futurism', respectively.
Postponing reproductive decisions: extending the 'extended present'
One of the cross-cultural differences apparent among our interviewees revolved around the idea of motherhood and the decision to become a mother. While most of the Israeli interviewees expressed a wish to have children, for the German interviewees the decision to become a mother seemed less obvious and more conflictual. Several issues and concerns were raised in the interviews as related to the difficulty of making this life-changing decision. The first concern brought up by German interviewees revolved around ideas of readiness. They raised ideas of a "right time" for becoming a mother, which they connected to certain milestones or life conditions (Baldwin et al., 2015) they believed should be reached: a stable relationship, financial stability, and emotional readiness, all of which may also reflect broader social expectations. The second concern revolved around the German labor market, which is perceived by many of the interviewees as intense and burdensome and as therefore discouraging motherhood. Educational and training courses were portrayed as long and strict. Although this did not serve as a direct reason for performing SEF (that is, interviewees did not choose to use SEF due to career considerations), they still acknowledged this issue and its effect on their reproductive decision making. It therefore seems that these women try to cope with two highly gendered time conflicts: a work-family conflict (Daly & Bewley, 2013; Waldby & Cooper, 2008) and a related biological-social time conflict (Leccardi, 2005a).
A third major concern highlighted how relationship formations are nowadays challenged. Interviewees emphasized the difficulty of forming a stable relationship in today's dating world with its instability and endless possibilities. Finally, German SEF users also mentioned a strong social expectation towards German mothers (mainly in western Germany) to put all their energies into motherhood, and a related social de-legitimation and stigmatization of working mothers. The concept of 'Rabenmutter' (Raven Mother) was frequently mentioned as representing such "neglectful" mothers. Some of our interviewees mentioned that this message is also reinforced by existing regulation which supports mothers in staying at home (e.g., long maternity leaves, tax legislation, and the shortage of childcare provision) (Fagnani, 2002; Mckay, 2011; Bauernschuster & Rainer, 2012).
The concerns raised by our interviewees can be further discussed within the framework of changes faced by women today. These include a complex combination of changes in career and educational paths (Leccardi, 2005b), comprising new labor-market requirements regarding higher flexibility, longer training periods, and less occupational stability (Bozzaro, 2018; Waldby & Cooper, 2008); changes in relationship patterns and formations leading to difficulties in forming commitment and stable relationships (Bozzaro, 2018; Illouz, 1997, 2012; Inhorn, 2020); as well as high social expectations of German mothers (Fagnani, 2002; Mckay, 2011). All these issues serve as a background for understanding reproductive postponing and the notion of a temporal gap between the biologically bound time of women's reproduction and a newly evolved social time (Waldby, 2015).
Against this background, one main motivation for using SEF emerging from the interviews, also reflecting a form of reproductive temporality, is to postpone reproduction and, even more generally, reproductive decision making. This motivation characterized a minority of the interviewees, mainly Germans. By contrast, the decision to have a child came up as almost unconditional for most of the Israeli interviewees (with only very few exceptions). Israeli interviewees also acknowledged the current iterative structures of relationship formation and career patterns, yet these did not seem to raise concerns to a similar extent regarding the related time conflicts and the basic decision to become a mother. As put by two Israeli interviewees, aged 33 and 34 respectively:

"In Israel the dogmatic decision is (…) to become a mother. (…) the default is to be a mother"

"From a very young age I wanted a big family, it is something I grew up on (…) very natural"

In contrast, it seems that for our German interviewees the question of whether to have children is more open and at times more conflictual. Therefore, the option of postponing the decision to become a mother as such became relevant. This point was exemplified by one of our German interviewees, a 37-year-old, when talking about her main motivations for using SEF and the main advantage of the procedure as she perceives it. This interviewee is not sure whether she is willing to make the life-changing decision of becoming a mother. While other interviewees expressed more ambivalence or difficulty with their inability to reach such a decision, she explains that she enjoys the current status quo of her life, her job, flexibility, hobbies, and aspirations. While acknowledging the limitations of SEF, she therefore uses it to gain more time until she will be forced to face such decisions.
Following this line of reasoning, the motivation to maintain the current status quo and gain more time for making reproductive decisions can be conceptualized within the idea of an 'extended present' (Leccardi, 2005a; Nowotny, 1985). Expecting a potential contradiction between motherhood and their public sphere/social life, self-realization, or personal tendencies, these women anticipate a future biographic discontinuity due to motherhood. As a result, they abandon the medium-to-long-term future and concentrate on the time dimension of the 'extended present', in which they are able to make short- or medium-term plans. SEF therefore serves as a medico-technological means for further extending the extended present by postponing existential questions and definitive decisions while preserving future choices (Rimon-Zarfaty & Schweda, 2019; for similar observations in the context of Belgian social egg freezing users and British medical egg freezing users see: De Proost & Paton, 2022). Our finding further highlights the connection between the medico-technological cessation of vital metabolic activities and the idea of a continuous present, in turn enabling the postponement of personal decisions concerning 'the concrete "when" of the "whenever"' (Lemke, 2021, p. 12).
Furthermore, German women using SEF are, at least to a certain extent, also faced with social criticism identifying them as "egoistic" or "selfish", as they might be perceived as prioritizing career and self-realization at the expense of having children. Such perceptions can also be connected to ideals of "proper" reproductive timing. Postponing reproductive decisions can thus be identified as representing a "prolonged" transition to adulthood. In a similar manner, such criticism is also connected to the controversy around 'late motherhood'. This type of criticism serves as an indication of implicit, normatively loaded, and gendered ideas of the ideal female life course (Weber-Guskar, 2018; Rimon-Zarfaty & Schweda, 2019). By extending the 'extended present', SEF therefore challenges traditional age norms, phase ideals, and biographical schedules. Indeed, while attempting to extend the extended present, many of our interviewees have been struggling with the idea that they are "off time" or "out of sync" (Baldwin, 2019). Such concerns will be further discussed in the context of our second type of interviewees, highlighting the use of SEF in the context of 'waiting'.
'Waiting' for a partner

Indeed, the majority of our interviewees from both Germany and Israel shared the wish to have children within a relationship, i.e., with a suitable partner, as a main motivation for using SEF. Both German and Israeli SEF users sharing this motivation explained that at this point they do not have a partner, a situation in which they do not want to remain. Facing the ticking 'biological clock', they feel they are "running out of time" or, as it was put by one of the German interviewees, are at a point of "end time panic" (see also: Baldwin, 2019). Generally, the status of (late) singlehood or the lack of a partner came up extremely prominently, at times accompanied by expressions of loneliness and anxiety. They hope that SEF will buy them the extra time to enable them to find a partner, have a child within the framework of a relationship, and get back on the life course track. This motivation is exemplified by one of our Israeli interviewees, a 39-year-old woman: "It will give me an option (…) I will (…) find the partner that will be a good match for me. (…) and this dead-end of fertility, somehow becomes a bit softer. (…) it does not turn the situation into something else but it helps, especially in the process of looking for a relationship (…) a bit of air".
As this quotation demonstrates, this interviewee chose to use SEF to "get some air", that is, a break or a 'time-out' from the ticking 'biological clock' that will enable her to find a partner and form a relationship within which she can fulfill her wish to have children. A similar motivation was brought up by a 29-year-old German interviewee: "When you're like 34/35 and you don't have a partner (…) you really are in kind of a rush, you know? Like: Ok, Hey, my name is … so let's have children.
(…) This gives you also a bit of a laid-back attitude when it comes to relationships".
One cross-cultural difference detected within this type of SEF users has to do with the option of embryo freezing. While embryo freezing is restricted in Germany, the Israeli regulatory framework enables women opting for SEF also to freeze embryos (fertilized eggs), using donor sperm. As came up in our interviews, the option of embryo freezing is promoted by certain fertility experts, who claim more experience and higher success rates than with egg freezing. This option confronted our interviewees with the question of whether or not to "commit" to donor sperm, while also bringing up the possibility of single motherhood. Importantly, while longing for a partner, some of the Israeli (mainly secular) interviewees chose to freeze both embryos and eggs in the hope of securing their chances for future motherhood, also relating to this choice as a last resort or a 'plan b to the plan b'. These findings correspond with the Israeli wish for motherhood discussed earlier.
The case study of SEF therefore generally brings up the status of singlehood as a main category, in ways which highlight the temporal collective organization of social life: the fact that there are certain socially accepted time frames or age norms within which women are expected to engage in a stable relationship (Lahad, 2012) and have children. While our findings show how social understandings regarding the "appropriate" timing and duration of this life phase can vary (as demonstrated by the reported German controversy around 'late motherhood'), singlehood nevertheless represents an overarching liminal, temporary, and transitory stage (ibid).
In the context of singlehood, SEF users' experiences and understandings can be interpreted within the framework of waiting (for similar observations in the context of American SEF users see: Inhorn, 2020). Smith-Hefner and Inhorn (2020) refer in this context to a "state of waithood" (p. 3), in which one is waiting (in the sense of unintentional/unexpected delay) to marry and have children. The context of SEF usage is, however, ambivalent and two-fold. When relating to the temporal constituents of the female self, Pickard (2020), who presents an overview of relevant feminist scholarship, discusses its identification as reflecting a gendered tension between constraining modes of 'waiting' or 'expectation' versus aspects of 'choice' and agency, to which we can relate as empowering. On the one hand, women are subjected to the highly gendered notion of "patiently waiting", reflecting a traditional and constrained temporal condition. This temporal notion becomes particularly salient in the context of traditional heterosexual romantic scripts according to which single women are expected to wait to be chosen (Pickard, 2020), or more generally to wait for a partner. Following this line of reasoning, according to Lahad (2017), the notion of 'waiting' attributed to single women reflects a heteronormative logic which produces power relations supported by a disciplinary temporal regime. At the same time, and in line with Pickard's (2020) insights regarding female temporality in late modernity, as our findings reveal, while women's temporal experiences can be interpreted as reflecting a traditional style of 'waiting', those experiences are also simultaneously counterposed by the neoliberal emphasis on 'choice' and agency, reflected in women's search for the right partner and their demand for a form of control over their reproductive potential. This dialectic "results in a chronic state of ambivalence" (ibid, p. 314). On the one hand, as single women, our interviewees are subjected to the traditional idea that they should wait for the right partner. Such gendered temporalities and the related constitution of the hybrid feminine self underlie the stalling and slowing down of the "gender revolution" (Pickard, 2020). On the other hand, within the Western capitalist social context idealizing notions of efficiency, such waiting is understood as a waste of time that should be eliminated (Lahad, 2012). Following this line of reasoning, the threat is that single women will overly extend their waiting time and "miss the train", with no possibility to rejoin the collective temporal linear path (ibid). Our interviewees therefore attempt to avoid an additional 'waste of time': they use SEF in the hope that it will give them the ability to synchronize and facilitate temporality and thus claim a form of reproductive agency (Brown & Patrick, 2018; Lahad, 2017), identified as empowering, which in turn may also enable them to "look for", "find", or "choose" the right partner.
SEF users are therefore faced with a dialectic tension between choice and agency and the traditional expectations of waiting, which also represent competing interpretations in the context of SEF usage. This tension echoes the gendered challenge of the dual management of a rational-instrumental approach to time, associated with the future, and the caring time of immanence (Pickard, 2020, p. 318). Within this context, SEF serves as an empowering means by which women are able to mobilize greater agency over their reproduction, also reflecting a positive moral attitude towards active, "responsible", and (hopefully) efficient time management in the light of future prognoses and risks (i.e., of 'biological' fertility decline) (Carroll & Kroløkke, 2017; van de Wiel, 2015). In this sense, instead of passively accepting the inevitable decline of their fertility, SEF users take action/responsibility/control to preserve their future reproductive options (Baldwin, 2019). Such an attempt can also be linked to their wish to avoid future regret and blame for not taking action (ibid; Baldwin et al., 2019). However, the use of SEF paradoxically produces a form of temporal stalling, enabling SEF users to extend their waiting time (see also Bozzaro, this issue). In this sense, the use of reproductive technologies (in this context, SEF) reflects the interplay between intentionality and constraint, as it is on the one hand a manifestation of intentional, future-oriented agency and at the same time a means of prolonging a gendered state of 'waiting' and 'expectation' (Pickard, 2020).
Acknowledging this paradox and the way it is reflected in SEF users' experiences and motivation, it therefore seems that the multiple experiences of women's lives cannot be reduced to dichotomous binary categories (Lahad, 2017, p. 16) but rather represent a complex and inconsistent negotiation between empowerment and constraint. From this perspective, SEF, which challenges traditional age norms (Rimon-Zarfaty & Schweda, 2019), and even to a certain extent family models (as exemplified by the Israeli interpretation of SEF as including embryo freezing, thus creating a negotiation around single motherhood), holds an empowering potential and experience. Nevertheless, at the same time, the context of singlehood emphasizes the notion of 'waiting'. When women use SEF to buy more time for finding a partner-namely a man to have children with, egg freezing becomes a technological concession to unintentional 'reproductive waithood' (Inhorn, 2020). In this sense, women use this novel technology to meet traditional cultural familial scripts with their biographical scenarios and related timelines (Inhorn, 2020) (see also Bozzaro, this issue).
Long-term planning: 'reproductive futurism'
The third motivation for using SEF emerging from the interviews is common among a distinct minority group of interviewees, all of them Israeli Jewish-religious women (who self-identified as observant of their faith). Unlike the (mainly Catholic) Christian idea of the 'divine order of creation' (Schöpfungsordnung), naturalness, and the refusal to 'play god' by intervening in the sacred act of creation (Bühler, 2015; Hashiloni-Dolev, 2013), the Jewish tradition perceives SEF as unproblematic. In fact, in Israel, SEF is supported by certain rabbinic authorities as an alternative to single motherhood. Therefore, SEF becomes attractive for Jewish-religious single women wishing to have children within a traditional heteronormative family model.
These women usually start to use SEF as soon as the Israeli law enables them, meaning in their early 30s, both to increase the procedure's potential success and because of more traditional ideals of the life course, in which women are expected to be married and start having (multiple) children at younger ages. These interviewees further related to the stigmatization of "late" singlehood, which within Jewish religious society is defined as such at younger ages.
As such, some of our religious interviewees criticized the Israeli legal age limitation for the usage of SEF (30-41), which they identified as jeopardizing the procedure's success rates. This criticism uncovers the normative nature of this legal time frame, which can be interpreted as resting upon social ideas concerning the "right" time or timing for reproduction. This is especially the case since, from a biological perspective, the "right" time for reproduction arrives at much younger maternal ages than 30-41. A few of our religious interviewees stressed how in their social sector this time frame is "too late", uncovering intra-cultural differences in the ideals of collective timetables and biographical schedules.
When talking about social freezing, this group of interviewees further presented a unique idea of long-term family planning by stressing that for them, the decision to freeze eggs relies not simply on the wish to have a child, but on the wish to have as many children as possible, as it was put by a Jewish-religious interviewee: "If I am getting married at the age of 35, so a secular woman getting married in that age she will have 2-3 children, (…) it is enough for her. For a religious woman it is too little. (…) When you look at it from the perspective of the religious public, this is something that is done by religious women who wants many children but (…) got married in a late age. (…). Fertility preservation (…) is not for being a mother in general, it is for being a mother of five".
Following this line of reasoning, they plan on getting married and having their first children in a natural way; then, at later ages, when already facing fertility decline, they will be able to use the frozen oocytes to continue having more children. This type of long-term planning further highlights a unique variation of the perception of SEF usage as a responsible act of taking control over (future) fertility. It therefore seems that the usage of SEF among religious women represents an ongoing negotiation between medical calculation on the one hand and religious belief systems and traditional social norms on the other (Kılıç & Göçmen, 2018).
Similarly to their secular counterparts, Jewish-religious interviewees undertook SEF due to their inability to find a proper partner. However, this group represents a unique motivation, clearly relating to a traditional family model and large-family norms in which motherhood is a central feature of religious self-identity. It therefore seems that for these women, the usage of SEF represents a future-oriented attempt directed at extending their reproductive timeline or horizon.
Relating to the concept of 'futurism', Tavory and Eliasoph (2013) present an approach for making sense of the multiple kinds of future orientation that is useful for our analysis. They distinguish between 'protentions', an individual's moment-by-moment anticipation of the future; 'trajectories' through time, which involve certain long-term narratives and future-oriented projects and goals; and 'plans and temporal landscapes', which include overarching temporal orientations towards the future that are often naturalized, taken for granted, and experienced as inevitable. All of our interviewees, reflecting all three types of motivations, use SEF as an attempt to synchronize and/or disentangle different trajectories (e.g., the trajectory of finding a partner and the trajectory of having children) (for relevant discussion see: Brown & Patrick, 2018). However, Jewish-religious women's reproductive temporalities also adhere to a very particular and distinctive reproductive plan and temporal landscape, reflecting a naturalized and traditional future-oriented collective temporal order.
This notion of collective temporality can be further discussed as representing a very transparent form of 'reproductive futurism' (Edelman, 2004): a basic temporal directedness of a social collective towards the future and future generations (Rimon-Zarfaty & Schweda, 2019). The concept of 'reproductive futurism' was also identified within the debate on 'queer temporalities' as a timeline which adheres to heteronormative conceptions of a couple-oriented and reproductive futurity (Edelman, 2004; Lahad, 2012). The highly political context of Jewish-Israeli pro-natalism, with its demographic goals and underlying message of the Jewish religious commandment to be "fruitful and multiply" (Donath, 2015; Kahn, 2000; Kanaaneh, 2002), frames this collective temporal notion. Within this context, Birenbaum-Carmeli et al. (2021) identified Jewish women's usage of SEF as reflecting their commitment to the "Jewish maternal imperative" (p. 346).
Temporal motivations: cross-cultural perspective
Our case study reveals three main types of temporal motivations for using SEF, along national cultures and personal experiences. The first type, characterizing mainly German SEF users, is aimed at postponing reproduction and the need to make reproductive decisions. The second type is shared by both Israeli (secular) and German users. These women bank their eggs to buy more time in the hope that they will find the partner for whom they are waiting. Within this type of motivation, one can find cross-cultural differences at the cultural level of perception and argumentation (e.g., around the understanding of motherhood as (un)conditional, also reflected in the Israeli possibility to freeze embryos, as well as ideals of appropriate timing, as reflected in the German ambivalence around "late motherhood"), but not in the personal context of decision making. The third type, which characterizes Israeli Jewish-religious users, reflects long-term family planning. Our case study therefore uncovers temporality formations embedded in gender and reproductive moral values, including the 'extended present', 'waiting', and 'reproductive futurism', respectively.
One can identify similarities between the women presenting the three types of motivations in terms of their personal (singlehood) status and biographical narratives. At the same time, however, the analysis of those motivations uncovers different temporality formations and logics. While the second type demonstrates the extent to which the phenomenology of singlehood produces a certain temporal identity that exceeds cultural boundaries, the other two represent a cultural contrast. Relevant cultural scripts explaining these differences include the high social expectations confronting German mothers (Fagnani, 2002; Mckay, 2011) and the Israeli pro-natalism which, generally speaking, frames the favorable Israeli approach to fertility medicine (Gooldin, 2008; Kahn, 2000) and fertility preservation (Birenbaum-Carmeli, 2016b; Inhorn et al., 2020; Shkedi-Rafid & Hashiloni-Dolev, 2011; Rimon-Zarfaty et al., 2021). Further possible explanations may include the general promotion of individualism in Germany (Raz & Schicktanz, 2016), where familism and individualism are perceived as contradictory and the increasing value of individual self-realization has been identified as leading to a decline in birth rates (Hashiloni-Dolev & Shkedi, 2007; Keller et al., 2005). This was also apparent in data indicating increased rates of childless (or childfree) women (DESTATIS, 2019). By contrast, Israeli culture has been identified as a combination of individualism and collectivism reflecting the importance of family ties and genetic kinship (Lavee & Katz, 2003; Raz & Schicktanz, 2016).
Conclusion
In our study, we examined how temporality is experienced and takes shape across societies in the particular context of cryo-fertility. Following our discussion of the connection between biology and temporality in the context of artificial cold (cryo), we focused on SEF as a site for examining emerging temporalities and the corresponding experiences and attitudes. Our case study, however, has some limitations, mainly its limited sample size and its focus on two particular socio-cultural contexts (Israel and Germany). Though our ability to generalize from the study is therefore limited, the cross-cultural perspective nevertheless enabled us to reflect on how such temporalities can vary and are influenced by socio-cultural factors. These factors include normative cultural perceptions regarding reproductivity as a part of the self-image and gendered perceptions regarding the ideal life course and its stages. The usage of the term 'reproductive temporalities' therefore goes beyond biological halting to socio-cultural understandings of reproductive time and timing (Smith-Hefner & Inhorn, 2020). In other words, our case study highlights both intra- and cross-cultural differences in reproductive temporalities. Thus, the abstract-individualistic outlook dominating the philosophical and bioethical perspectives on cryobiology is expanded by including the particular socio-cultural and political contexts.
Our case study also aims to advance the understanding of cryobiology in the broader sense (for relevant discussion see: Lemke, 2019). For this, we wish to go back to the concept of 'cryopolitics' as a general theoretical reference frame. While artificial cold (cryo) has been recognized and analyzed as a biopolitical tool (Radin & Kowal, 2017), our cross-cultural comparative findings uncover and allow reflection on cryopolitical mechanisms and the different ways in which they may operate in different social contexts. Our case study demonstrates how different temporalities and time regimes are entangled with different socio-cultural and socio-political framework conditions. Drawing on Oikkonen's (2020) observations in the context of DNA research, cryopreservation, or in our case cryo-fertility technologies, are neither epistemically nor politically innocent. This becomes particularly apparent through the investigation of the ways such cryo-technologies develop and are invoked and negotiated in culture, as well as through the uncovering of particular ways of perceiving and negotiating temporality. By gaining empirical insights into the different ways in which frozen materials become located in specific normative temporalities (Kroløkke, 2019, p. 531), our case study extends our understanding of the concept of 'cryopolitics'.
"year": 2022,
"sha1": "10ce9ffc49574a3e298d2843d4c4bb050b8dda7f",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40656-022-00495-x.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "665594efb821f917dd14c0d31ffcb03eeb9b3cbd",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Comparative electroencephalography analysis: Marathon runners during tapering versus sedentary controls reveals no significant differences
Previous studies have described various adaptive neuroplastic brain changes associated with physical activity (PA). EEG studies have focused mostly on effects during or shortly after short bouts of exercise. This is the first study to investigate the capability of EEG to display PA-induced long-lasting plasticity in runners compared to a sedentary control group.
INTRODUCTION
Physical activity (PA) and exercise have been extensively studied for their effects on physical health, particularly on the cardiovascular system (Varghese et al., 2016). Additionally, neurophysiological adaptations resulting from both chronic and acute exercise have been well documented. Chronic exercise induces neuroplasticity, involving molecular, cellular, structural, and functional changes (El-Sayes et al., 2019). Notable effects include elevated levels of brain-derived neurotrophic factor (BDNF), insulin-like growth factor 1 (IGF-1), vascular endothelial growth factor (VEGF), and their receptors, promoting processes such as gliogenesis, neurogenesis (Pereira et al., 2007), synaptogenesis, and angiogenesis. These adaptations contribute to increases in gray matter volume (GMV), particularly in regions relevant to cognitive functions such as the hippocampus (Firth et al., 2018), and white matter volume (WMV), along with enhanced neural and receptor activity, leading to improvements in cognitive (De Sousa Fernandes et al., 2020) and motor functions (El-Sayes et al., 2019). Moreover, higher levels of PA have been associated with greater structural and functional connectivity (FC), as observed through neuroimaging techniques (Ruotsalainen et al., 2021; Soldan et al., 2022; Stillman et al., 2018), in networks critical for maintaining multiple aspects of overall brain health and hindering cognitive decline (Stillman et al., 2018). Despite these findings, the precise mechanisms underpinning exercise-induced neuroplasticity remain incompletely understood (El-Sayes et al., 2019).
Despite the wealth of knowledge derived from these (neuroimaging) techniques, there is a need to explore simpler and more accessible methods. Electroencephalography (EEG), offering portability, quickness, and cost-effectiveness, presents itself as a valuable tool for investigating brain electrocortical activity and cortical reorganization with high temporal resolution. Identifying EEG correlates of PA effects could enhance our understanding and establish EEG as a feasible proxy for detecting neurophysiological changes, especially when more complex methods like magnetic resonance imaging (MRI) are less feasible.
To ensure a clear understanding and to enable comparability between studies, it is crucial to define terms commonly used in the literature. "PA" encompasses any bodily movement resulting in energy expenditure, while "exercise" is a planned, structured, and repetitive subset of PA aimed at improving or maintaining physical fitness (Caspersen et al., 1985). "Acute exercise" refers to a single bout or session lasting from a few minutes to a few hours (Jee, 2020), triggering molecular and functional neuroplasticity associated with heightened oxygen and glucose metabolism, elevated neurotransmitter concentrations, and increased cerebral blood flow (CBF) (El-Sayes et al., 2019). This is in contrast to "chronic exercise," involving long-term, regular activity (Jee, 2020). Despite the focus of current research on acute effects, observed during or shortly after exercise cessation in pre-post interventional study designs (Boutcher & Landers, 1988; Crabbe & Dishman, 2004; Schneider et al., 2009; Woo et al., 2009), the longevity and subacute/chronic effects of exercise-induced neuroplasticity, measured hours or days after exercise cessation, which include cellular and structural aspects and lack the acute metabolic elements (El-Sayes et al., 2019), remain understudied. Additionally, the intensity of exercise, categorized as low, moderate, or vigorous, plays a crucial role (Ikuta et al., 2019).
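To make the intensity categories concrete: they are commonly operationalized via metabolic equivalents of task (METs). The minimal Python sketch below uses the widely cited public-health convention (light < 3 METs, moderate 3-5.9 METs, vigorous ≥ 6 METs); these cutoffs are an assumption for illustration and are not thresholds defined in this study or in Ikuta et al. (2019).

```python
# Illustrative only: MET cutoffs follow the common public-health
# convention (light < 3, moderate 3-5.9, vigorous >= 6 METs),
# not thresholds defined in this study.

def classify_intensity(mets: float) -> str:
    """Map the MET value of an activity to an intensity category."""
    if mets < 3.0:
        return "light"
    if mets < 6.0:
        return "moderate"
    return "vigorous"

# Example: easy walking (~3.3 METs) vs. running (~8 METs)
print(classify_intensity(3.3))  # moderate
print(classify_intensity(8.0))  # vigorous
```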
While moderate exercise has established benefits, the neurophysiological adaptations following prolonged and vigorous exercise are less clear.
Some of the studies investigating effects of acute exercise found an acute increase in alpha power during or immediately after exercise (Boutcher & Landers, 1988; Brümmer et al., 2011; Crabbe & Dishman, 2004; Honzák et al., 1985; Schneider et al., 2009). Alpha activity, the dominant resting-state frequency typically ranging from 8 to 13 Hz (Newson & Thiagarajan, 2018), has been correlated with enhanced cognitive performance (Richard Clark et al., 2004). It has also been reported to be reduced in neuropsychiatric disorders associated with cognitive deficits (Newson & Thiagarajan, 2018; Ramsay et al., 2021) and has even been proposed as a biomarker of early cognitive decline (Lejko et al., 2020). Despite these observations, the reported acute effects exhibited overall heterogeneity in both frequency bands and brain localization. This heterogeneity could be confounded by methodological differences in three main variables. The tapering phase (Haugen et al., 2022) serves in this study as a "baseline", since the intensive training period could be interpreted as an intervention. This is the first study to investigate the effects of prolonged and intense aerobic exercise on EEG not immediately after an exercise intervention, as done in previous studies, but after a relevant time span from exercise cessation. Thus, long-lasting electrocortical adaptations are displayed in a naturalistic design. Building on reported acute effects, and given the established evidence of neurophysiological adaptations to chronic and acute exercise along with the recognized association of enhanced alpha power with improvements in connectivity and cognition, we hypothesize that prolonged and intense aerobic exercise, characteristic of marathon training, may manifest in higher alpha power in runners compared to SC as measured by EEG. By including the other frequency bands in an exploratory approach, we strive for an unbiased assessment. Our study diverges by examining EEG after a relevant time span from exercise cessation, contributing to a nuanced understanding of enduring exercise-induced neuroplasticity beyond immediate postexercise assessments.
Subjects
This study was a subanalysis and the cohort was a subcohort of the ReCaP trial (Running effects on Cognition and Plasticity), a longitudinal observational study of marathon runners (Roeh et al., 2020). For the EEG analyses, 30 runners with successful registration for the Munich marathon 2017 (08.10.2017) and experience in endurance training (at least one finished half marathon) were recruited through announcements in local newspapers, local running groups, and newsletters of the local organizer of the Munich marathon. Exclusion criteria were severe internal, neurological, and psychiatric illnesses, BMI ≥ 30 kg/m², regular drug abuse, and insufficient knowledge of the German language. For the age- and sex-matched sedentary control group (SC, N = 30), recruited via announcements in local newspapers and other channels (e.g., social media), prerequisites were as little physical activity as possible (less than 25 min of self-reported PA a day as the definition of a sedentary lifestyle (De León et al., 2007), including everyday activities such as cycling to work) and no experience in endurance running. The other inclusion criteria (age, knowledge of German, no severe illnesses, BMI < 30 kg/m²) were identical to those of the runners. Prior to inclusion in the study, all participants provided written informed consent. The study protocol was approved by the ethics committees of both the Ludwig-Maximilian University Munich (approval reference number 17-148) and the Technical University Munich (approval reference number 218/17).
Demographic data, PA assessments
Apart from the acquisition of basic demographic data (age, weight, BMI, smoking history, sex, education), the full long version of the International Physical Activity Questionnaire (IPAQ) (Booth, 2000; Craig et al., 2003) was administered in both groups, providing detailed information about daily physical exercise as well as PA in daily routines. As a further indicator of fitness, maximal oxygen consumption (VO2max) was assessed in the runners (Bacon et al., 2013).
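The long form of the IPAQ is conventionally scored by converting reported frequency and duration of activity into MET-minutes per week. The sketch below illustrates this using the MET weights from the official IPAQ scoring protocol (walking = 3.3, moderate = 4.0, vigorous = 8.0); the input values and the simplified structure are hypothetical, as the paper does not reproduce its scoring procedure.

```python
# Standard IPAQ scoring convention (per the official IPAQ scoring
# protocol, not reproduced in this paper): walking = 3.3 METs,
# moderate PA = 4.0 METs, vigorous PA = 8.0 METs.
MET_VALUES = {"walking": 3.3, "moderate": 4.0, "vigorous": 8.0}

def ipaq_met_minutes(activity: dict) -> float:
    """Total MET-minutes/week from {domain: (days/week, min/day)}."""
    return sum(
        MET_VALUES[domain] * days * minutes
        for domain, (days, minutes) in activity.items()
    )

# Hypothetical participant: walks 5 d/week for 30 min,
# runs (vigorous) 4 d/week for 60 min.
example = {"walking": (5, 30), "vigorous": (4, 60)}
print(ipaq_met_minutes(example))  # 3.3*5*30 + 8.0*4*60 = 2415.0
```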
EEG measurements and processing
To avoid acute marathon-induced EEG changes overlapping with the comparison in postmarathon surveys, we compared EEG recordings of runners, measured during the tapering phase 14 to 4 days before the Munich marathon (08.10.2017), with EEG recordings of SC conducted between July 2017 and January 2018. Runners and SC were instructed not to perform any training on the day of the EEG recordings.
EEG actiCaps (Brain Products GmbH) with 32 electrodes, adapted to the head circumference of the participant, were used, arranged in accordance with the international 10-20 system (Jasper, 1958).
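For orientation, the following is a minimal sketch, assuming MNE-Python and a hypothetical BrainVision file name, of how a 32-channel 10-20 recording of this kind could be loaded and reduced to resting-state band power. The filter settings are assumptions; the study's actual frequency and source analysis was performed in sLORETA (see below), not in MNE.

```python
# Minimal sketch with MNE-Python; the file name, filter band, and
# settings are assumptions -- the study's own source analysis was
# performed in sLORETA, not in MNE.
import mne

# BrainVision recordings (actiCAP systems typically export .vhdr/.eeg/.vmrk)
raw = mne.io.read_raw_brainvision("subject01.vhdr", preload=True)  # hypothetical file

# Standard 10-20 positions for the 32-channel cap
raw.set_montage(mne.channels.make_standard_montage("standard_1020"))

# Broad band-pass covering the bands analyzed in the study (assumed settings)
raw.filter(l_freq=1.0, h_freq=40.0)

# Welch power spectral density of the eyes-closed resting recording
psd = raw.compute_psd(method="welch", fmin=1.0, fmax=30.0)
powers, freqs = psd.get_data(return_freqs=True)  # (n_channels, n_freqs)

# Mean alpha1 power (8.5-10 Hz, per the study's band definition)
alpha1 = powers[:, (freqs >= 8.5) & (freqs <= 10.0)].mean()
print(alpha1)
```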
Frequency analysis was performed using standardized low-resolution electromagnetic tomography (sLORETA) (Pascual-Marqui, 2002). The version used in our study is an advanced version of LORETA (Pascual-Marqui et al., 2002) and estimates the current source density distribution and source localization in 6,239 cortical gray matter voxels, with a cubic voxel size of 5 mm³. The sLORETA statistical nonparametric mapping tool (SnPM), based on a paired voxel-by-voxel log-F-ratio test using 5000 randomizations, was used for comparison (Villafaina et al., 2019). It is based on estimating the empirical probability distribution for the max-statistic under the null hypothesis by randomization. The analysis included the following frequency bands: delta (1.5-6 Hz), theta (6.5-8.0 Hz), alpha1 (8.5-10 Hz), alpha2 (10.5-12.0 Hz), and beta1 (from 12.5 Hz). Correction for multiple testing was calculated with the nonparametric randomization methodology (Nichols & Holmes, 2002) already implemented in the sLORETA software package (Pascual-Marqui, 2002). This methodology corrects for multiple testing (i.e., for the collection of tests performed for all electrodes and voxels and for all time samples). Due to the nonparametric nature of the method, its validity need not rely on any assumption of Gaussian distribution (Pascual-Marqui, 2002). To compare demographic data and IPAQ results between runners and sedentary controls, we used independent t tests and χ² tests in SPSS 26 (IBM SPSS Statistics, Version 26) with a significance level of p = .05. Mean values and standard deviations (SD) were calculated using descriptive statistics. Conditions for statistical tests, such as homogeneity of variances, were met.
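The max-statistic randomization logic (Nichols & Holmes, 2002) can be illustrated generically: group labels are permuted, the test statistic is recomputed for every voxel or electrode, and the maximum statistic across all tests in each permutation forms the null distribution against which observed values are compared, controlling the family-wise error rate. The NumPy/SciPy sketch below uses a two-sample t-statistic on synthetic data for illustration; the study itself relied on sLORETA's built-in implementation with a paired voxel-by-voxel log-F-ratio statistic.

```python
# Generic max-statistic permutation correction (Nichols & Holmes, 2002).
# Simplified two-sample sketch; the study used sLORETA's built-in
# implementation with a voxel-wise log-F-ratio statistic instead.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def maxstat_permutation(a, b, n_perm=5000):
    """a, b: (n_subjects, n_features) band-power arrays per group.
    Returns observed |t| per feature and FWE-corrected p-values."""
    data = np.vstack([a, b])
    labels = np.array([0] * len(a) + [1] * len(b))
    t_obs = np.abs(stats.ttest_ind(a, b, axis=0).statistic)

    max_null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(labels)  # shuffle group labels
        t = stats.ttest_ind(data[perm == 0], data[perm == 1], axis=0).statistic
        max_null[i] = np.abs(t).max()   # max over all features

    # Corrected p: fraction of permutations whose max |t| exceeds t_obs
    p_fwe = (max_null[None, :] >= t_obs[:, None]).mean(axis=1)
    return t_obs, p_fwe

# Example: 30 runners vs. 30 controls, 6239 voxels of alpha power
runners = rng.normal(size=(30, 6239))
controls = rng.normal(size=(30, 6239))
t_obs, p = maxstat_permutation(runners, controls, n_perm=1000)
print(p.min())
```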
DISCUSSION
The present study marks the first attempt to compare EEG resting-state frequency bands in endurance runners undergoing intensive regular training with those of a sedentary control group, investigating enduring electrocortical adaptations to physical activity (PA) in a naturalistic design. Despite the athletic condition of our runners, our findings did not reveal detectable physiological EEG changes when compared to the sedentary control group.
We selected EEG as a valuable and practical tool for prognostic and diagnostic purposes, given its reflection of cortical plasticity (Manganotti et al., 2022). Changes in power bands across different cortical areas occur in response to various processes such as fatigue and reactions to training stimuli, influencing learning processes (Manganotti et al., 2022). Nonetheless, changes in EEG resulting from exercise interventions are elusive and challenging to replicate (Gramkow et al., 2020).
Previous studies have reported heterogeneous results regarding the immediate effects of acute exercise on EEG measures, with increased alpha activity frequently observed (Crabbe & Dishman, 2004), particularly during and up to approximately 6 min after exercise cessation (Crabbe & Dishman, 2004), suggesting an acute (rather than long-lasting) postexercise effect.
Limited studies have explored the effects of prolonged exercise on EEG recordings. For instance, Honzák et al. (1985) observed a decrease in theta activity and an increase in the slow alpha component and subtheta activity in marathon runners after a 2-week endurance training period, attributing this to fatigue and poor oxygen and glucose supply to the brain. Another review on plasticity-inducing interventions, including yoga, demonstrated an increase in alpha activity in longitudinal interventional groups, linked to improvements in cognition, memory, mood, and anxiety (Desai et al., 2015). The overall yoga-induced effects were, however, also heterogeneous, and most included studies were interventional, with small numbers of subjects, and only a few included a control group. While these studies examined longitudinal effects, they primarily did so within an interventional period (PA/yoga), frequently involving regular pre-/postsession measurements throughout this period. However, there was a lack of reporting of, or consideration for, a relevant hiatus between the cessation of PA or yoga and the EEG recordings. This absence of a relevant break makes it challenging to differentiate between acute and long-lasting effects.
To explore the enduring effects of PA on electrocortical activity, reflecting long-term neuroplasticity without the interference of acute effects, a significant time span between exercise termination and EEG recordings was crucial in our study design. Similar considerations apply to other modalities assessing neuroplasticity, such as transcranial magnetic stimulation (TMS) (Jannati et al., 2023). While TMS-induced neuroplasticity, measured through motor evoked potentials (MEPs), is detectable for a limited time poststimulation (Jannati et al., 2023), its persistence can vary, echoing our approach of evaluating enduring exercise-induced EEG effects.
Despite established evidence indicating long-term neuroplastic adaptations to PA, including increased gray matter (e.g., in the hippocampus), enhanced synaptic plasticity, connectivity, and spatial memory function (De Sousa Fernandes et al., 2020; El-Sayes et al., 2019), and despite the known associations between enhanced cognition, connectivity, and elevated alpha activity, our study did not identify a correlate of long-term plasticity in the form of elevated alpha power.
While exercise-induced changes in EEG have previously been documented to be of short duration (Bailey et al., 2008; Crabbe & Dishman, 2004), it has also been reported that the effects are more pronounced at higher levels of PA intensity (Schneider et al., 2009). Similar observations have been reported for functional connectivity (Ikuta et al., 2019).
These observations align with our hypothesis of identifying enduring alterations in EEG frequencies among individuals with high levels of physical activity.
Our findings, showing no differences between runners and the sedentary control group, likely result from the chosen timepoint (tapering), with a significant time lapse between the last exercise bout and the EEG recordings. This underscores the transient nature of previously reported acute EEG effects of exercise, mainly driven by acute rather than chronic exercise, due to the different physiological underpinnings of the two (El-Sayes et al., 2019).
The physiological explanations for temporary acute exercise-induced EEG effects are complex and multifaceted. One possible explanation is the temporary increase in central nervous activation due to increased somatosensory afferents during and shortly after exercise (Krause et al., 1983). Once these subside, the temporarily increased cortical activation likely returns to baseline. Another possible mechanistic explanation is the emotional response to exercise. One study reported that exercise mode, intensity, and individual preferences influence EEG effects (Brümmer et al., 2011). As these activation patterns are directly connected to the ongoing or recently completed acute exercise, they diminish when the emotional response wanes.
Fatigue induced by exercise, particularly central fatigue, can contribute to the temporary nature of EEG effects (Brümmer et al., 2011; Dalsgaard & Secher, 2007). Central fatigue pertains to situations in which the capacity of the central nervous system to activate motoneurons restricts the expression of strength (Dalsgaard & Secher, 2007), and its duration after exercise can vary depending on several factors but usually lasts between minutes and hours postexercise (Carroll et al., 2017). Although central fatigue correlates with EEG changes (e.g., increased alpha power) (Ghorbani & Clark, 2021), the present study's recordings were not conducted during a fatigue stage, possibly explaining the negative results. Exercise-induced changes in mood (Roeh et al., 2020; Schoenfeld & Swanson, 2021), linked to increased cortical excitation, are another significant factor. Despite heterogeneous evidence on EEG effects resulting from exercise-induced mood changes (Lattari et al., 2014), some reported acute effects may be associated with mood changes that subside hours after exercise (Peluso & Guerra de Andrade, 2005). Additional mechanisms contributing to temporary EEG effects encompass alterations in cerebral blood flow (Secher et al., 2008; Smith & Ainslie, 2017), overall metabolism, and neurotransmitters (e.g., catecholamines (Stock et al., 1996) and endorphins (Schoenfeld & Swanson, 2021)). All these processes are probably not separate mechanisms but are complexly and dynamically intertwined with each other, and they seem to be reflected in the acute EEG effects reported in the literature. Hours to days after exercise cessation, they subside, possibly explaining the negative findings in the study presented here.
For the EEG analyses, 30 runners with successful registration for the Munich marathon 2017 (08.10.2017) and experience in endurance training (at least one finished half marathon) were recruited through announcements in local newspapers, local running groups, and newsletters of the local organizer of the Munich marathon. Exclusion criteria were severe internal, neurological, and psychiatric illnesses, BMI ≥ 30 kg/m², regular drug abuse, and insufficient knowledge of the German language. For the age- and sex-matched sedentary control group (SC, N = 30), recruited via announcements in local newspapers and other channels (e.g., social media), the prerequisites were as little physical activity as possible (less than 25 min of self-reported PA per day as the definition of a sedentary lifestyle (De León et al., 2007), including everyday activities such as cycling to work) and no experience in endurance running. The other inclusion criteria (age, knowledge of German, no severe illnesses, BMI < 30 kg/m²) were identical to those of the runners. Prior to inclusion in the study, all participants provided written informed consent. The study protocol was approved by the ethics committees of both the Ludwig-Maximilian University Munich (approval reference number 17-148) and the Technical University Munich (approval reference number 218/17).
Source localization was performed using standardized low-resolution electromagnetic tomography (sLORETA) (Pascual-Marqui, 2002). The version used in our study is an advanced version of LORETA (Pascual-Marqui et al., 2002) and estimates the current source density distribution and source localization in 6,239 cortical gray matter voxels, with a cubic voxel size of 5 mm³. The sLORETA statistical nonparametric mapping tool (SnPM), based on a paired voxel-by-voxel log-F-ratio test using 5,000 randomizations, was used for comparison (Villafaina et al., 2019). It is based on estimating the empirical probability distribution of the max-statistic under the null hypothesis by randomization. The analysis included the following frequency bands: delta (1.5-6 Hz), theta (6.5-8.0 Hz), alpha1 (8.5-10 Hz), alpha2 (10.5-12.0 Hz), beta1 (12.5-…), using the correction already implemented in the sLORETA software package (Pascual-Marqui, 2002). This methodology corrects for multiple testing (i.e., for the collection of tests performed for all electrodes and voxels and for all time samples). Due to the nonparametric nature of the method, its validity need not rely on any assumption of Gaussian distribution (Pascual-Marqui, 2002). To compare demographic data and IPAQ results between runners and sedentary controls, we used independent t tests and χ² tests in SPSS 26 (IBM SPSS Statistics, Version 26) with a significance level of p = .05. Mean values and standard deviations (SD) were calculated using descriptive statistics. Conditions for statistical tests, such as homogeneity of variances, were met.
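The max-statistic permutation logic described above can be made concrete with a short sketch. This is a simplified illustration only: it uses an unpaired Welch t-statistic in place of sLORETA's log-F-ratio, and the function name and array shapes are assumptions, not the toolbox's actual interface.

```python
import numpy as np

def max_stat_permutation_test(group_a, group_b, n_perm=5000, seed=0):
    """Family-wise-error-corrected group comparison via the max-statistic
    permutation approach (in the spirit of the SnPM procedure above).
    Inputs are (n_subjects, n_voxels) arrays of log-transformed values;
    returns one corrected p-value per voxel."""
    rng = np.random.default_rng(seed)
    data = np.vstack([group_a, group_b])
    n_a = group_a.shape[0]

    def t_map(d):
        # Voxel-wise absolute Welch t-statistic between the two groups.
        a, b = d[:n_a], d[n_a:]
        se = np.sqrt(a.var(0, ddof=1) / len(a) + b.var(0, ddof=1) / len(b))
        return np.abs((a.mean(0) - b.mean(0)) / se)

    observed = t_map(data)
    # Build the null distribution of the maximum statistic over all voxels
    # by randomly re-assigning subjects to groups.
    max_null = np.empty(n_perm)
    for i in range(n_perm):
        max_null[i] = t_map(data[rng.permutation(len(data))]).max()
    # Corrected p-value: how often the permutation maximum reaches the
    # observed voxel statistic (controls the family-wise error rate).
    return (max_null[None, :] >= observed[:, None]).mean(axis=1)
```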
To prevent misinterpretation, the results of the present study (lack of group differences) should be interpreted strictly within the methodological context, considering the timepoint of EEG recording (hours to days after exercise cessation, during tapering) and the underrepresentation of biological females (who show a greater propensity for neuroplasticity; El-Sayes et al., 2019), and should be evaluated within the context of the limitations.
Several limitations should be considered. The low spatial resolution of EEG may underrepresent potential changes. The absence of EEG recordings during or immediately after acute training limits our understanding of exercise's acute effects. While changes in EEG might have been detectable shortly after PA, our study prioritized a naturalistic design reflecting runners' everyday reality, focusing on long-term effects rather than acute interventions. A combination of EEG during or shortly after training and in the tapering phase would enhance interpretative robustness. The SC group lacked VO2max assessments, hindering objective fitness evaluation alongside the self-reported IPAQ data, which cover only the past 7 days and may therefore miss more active periods. Further studies should conduct objective fitness assessments to ensure a significant differentiation of the cohorts. The runners group's heterogeneity in PA habits (showing wide dispersion of IPAQ values) and the absence of a standardized training protocol reduce intersubject comparability. A more precise assessment of the exact training routine would facilitate the interpretation of the results. Variability in tapering strategy and in the time span between intensive training and EEG recordings among runners, as well as slight circadian inconsistencies in recording times, may impact results. The eyes-closed condition, chosen for enhanced alpha activity, limits comparability with studies using open eyes. Although our sample size (N = 60) is larger than in some studies, larger cohorts are desirable for more robust results in the future.
5 CONCLUSION
Although persisting exercise-induced central adaptations have been registered in other modalities (e.g., MRI) and in EEG shortly after acute bouts of PA, we could not detect EEG differences between runners and sedentary controls in our study. Future studies should further explore the capability of EEG to display exercise-induced plasticity. To do so, further improvement of methodological precision is needed, that is, recording EEG at different time points during and after PA cessation to better understand the differences between acute and possible long-term EEG adaptations. Additionally, combining EEG with other modalities, such as neuroimaging, would allow a better understanding of the underlying mechanisms.
AUTHOR CONTRIBUTIONS
AH, AR, JS, PF, and MaH conceptualized and supervised the study (conceptualization, project administration). AR and AH supervised the study (supervision). BP, MH, and JM collected the EEG data (investigation). JM and BP curated the data (data curation). JM and BP performed the statistical analyses (methodology, software, formal analysis). OP and DK provided support in all aspects of data collection and analysis (formal analysis, supervision). MH and JM wrote the manuscript (writing-original draft). JM conducted the main revision. All authors revised the manuscript and approved the final version (writing-review and editing). All authors provided critical feedback and helped shape the research, analysis, and manuscript. All authors agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or
integrity of any part of the work are appropriately investigated and resolved.
TABLE 1 Baseline characteristics of the marathon group and sedentary control group. Note: N < 60 means missing data; * = statistical significance. MR = marathon runners, SC = sedentary control group, BMI = body mass index, IPAQ = International Physical Activity Questionnaire-Long Version (in MET-minutes per week (Metabolic Equivalent)).
Table 1 displays demographic data and PA. | 2024-04-30T06:17:33.322Z | 2024-04-28T00:00:00.000 | {
"year": 2024,
"sha1": "6e4220a854cf04286e463326031cf71bc74b7bee",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/brb3.3480",
"oa_status": "GOLD",
"pdf_src": "Wiley",
"pdf_hash": "7ff189e64d9a4132b768aa49ddba48dec506c4da",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
51927580 | pes2o/s2orc | v3-fos-license | Prediction of dengue outbreaks in Mexico based on entomological, meteorological and demographic data
Dengue virus has shown a complex pattern of transmission across Latin America over the last two decades. In an attempt to explain the permanence of the disease in regions subjected to drought seasons lasting over six months, various hypotheses have been proposed. These include transovarial transmission, forest reservoirs and asymptomatic human virus carriers. Dengue virus is endemic in Mexico, a country in which half of the population is seropositive. Seropositivity is a risk factor for Dengue Hemorrhagic Fever upon a second encounter with the dengue virus. Since Dengue Hemorrhagic Fever can cause death, it is important to develop epidemiological mathematical tools that enable policy makers to predict regions potentially at risk for a dengue epidemic. We formulated a mathematical model of dengue transmission, considering both human behavior and environmental conditions pertinent to the transmission of the disease. When data on past human population density, temperature and rainfall were entered into this model, it provided an accurate picture of the actual spread of dengue over recent years in four states (representing two climatic conditions) in Mexico.
Introduction
The infection rate of the dengue virus has shown a steady annual increase in Mexico, having potentially overcome acquired immune resistance [1]. Attempts have been made to explain epidemiological data with mathematical models of dengue infection. However, the introduction of climatic data has not led to reliable results. In Mexico, the great geographic variation of the inhabited regions makes the inclusion of this information an enormous challenge. Mexican geography ranges from seacoasts to areas 3 000 meters above sea level, and from desert to tropical climates.
The climatic diversity in Mexico provides fertile ground for the mathematical modelling of the transmission of dengue virus by Aedes mosquitoes. Real epidemiological data can be contrasted with the outcomes obtained from virus outbreak modelling. Mathematical models of dengue virus have been used to determine the impact of human intervention (in the environment and with therapy) on dengue infection pathology. The results of these models demonstrate that programs applied at the onset of symptoms, such as mosquito eradication techniques, have only marginally affected the rate of vector infection and virus transmission [2].
In an effort to model the risk factors of dengue transmission, Hopp and Foley simulated the mosquito life cycle and density as a function of temperature and humidity. They suggested that mosquito density is the most important factor in dengue transmission [3]. Additionally, modeling efforts have considered the influence of geographic variation on dengue transmission [4], the expected effects of vector control policies [5], the climatological relationship with disease outbreaks [6], statistical analysis and projection of dengue transmission [7], and the assessment of individual infection risk as a function of location [8].
Dengue transmission models based on differential equations take certain aspects of disease transmission dynamics into account. Estevan et al. developed a model by employing the theory of competitive systems, compound matrices and the center manifold theorem, estimating that global cases of dengue would reach asymptotic stability [9]. Since the latter approach uses data from past conditions, it cannot adapt to day-to-day meteorological information or updated epidemiologic data, making it impossible to determine changes in virus biology. The continuous state of climatic change and human population dynamics have made the predictions of that model elusive [10].
Despite the complex geographic and climatic features of Mexico, analysis of the spread of the dengue virus has been linked only to mosquito density, human population density, and mosquito and human transmission rates in both directions (infected mosquitoes to humans and infected humans to mosquitoes). We herein present a mathematical model that considers most of the biological factors having an influence on dengue virus infection dynamics, both in mosquitoes and humans.
The model
A non-linear time-delayed differential equation system was developed to model the seasonal life cycle of mosquitoes and their interaction with humans. This model differs from previous approaches because it contemplates the survival of mosquito eggs through the drought season and the existence of a vertical transovarial infection from infected female mosquitoes to their eggs. The model weighs the effective bite rates of mosquitoes and the non-constant proportions of susceptible humans and infected mosquitoes. It also includes parameters that are dependent on temperature (T), rainfall (P) and time (t). These parameters have been carefully selected and calculated in accordance with reported mosquito behavior and biology, such as the developmental timing of the aquatic stages, the life expectancy of the adult mosquito, the number of eggs that can be laid, and the extrinsic incubation period (see S1 Supporting Information for details).
The model comprises twelve different populations that evolve over time. The insect population refers to female Aedes mosquitoes. The population variables are as follows: x 1 and x 2 correspond to non-infected and infected eggs, respectively; x 3 and x 4 to non-infected and infected larva; x 5 and x 6 to non-infected and infected pupa; x 7 and x 8 to non-infected and infected mosquitoes; x 9 and x 10 to infected and immune humans; and x 11 and x 12 to non-infected and infected resting eggs. The latter factor is crucial for the resurgence of disease in regions with extremely dry summers. Infected human populations comprise symptomatic and asymptomatic individuals, both of which are infective for mosquitoes. Once a person overcomes an infection, he or she is assumed to remain immune, and therefore cannot be a carrier (human vector), for a certain period of time (presently, this period is set at six years). The model considers that a recently infected mosquito can only be infective upon the passage of a period of time τ, which corresponds to the extrinsic incubation period (EIP). Likewise, recently infected humans can only be infective for the mosquitoes after a certain period of time λ. A schematic representation of the model is depicted in Fig 1. Distinct dengue virus serotypes have different EIP and infectivity [11]. The biological data from the serotype one virus (DENV-1) was used herein because it is the most prevalent in Mexico [12]. In our model, the total human population (H) remains constant over time. The system of ordinary differential equations is as follows.
System of differential equations
The infected-human equation (Eq 9) reads:

$$\frac{dx_9(t)}{dt} = \frac{k_{12}\, l\, [H - (x_9(t) + x_{10}(t))]\, x_8(t)}{s\,(x_9(t) + x_{10}(t)) + l\,[H - (x_9(t) + x_{10}(t))]} - (k_{13} + k_{14})\, x_9(t) \qquad (9)$$

with $l = \frac{H - (x_9(t) + x_{10}(t))}{H}$ and $s = 1 - l$. In the model, the parameters l and s control for the percentage of mosquito bites in susceptible humans, representing the vector-bias effect. More specifically, l and s denote the probability that an arriving mosquito will bite a susceptible and an infected or immune person, respectively. The x variables and k parameters are described in Table 1.
Table 1. Values and description of the variables and parameters of the model. T_in and T_out designate the indoor and outdoor temperature, respectively. P is the rainfall and θ(X) is the Heaviside step function, whose value is zero for a negative argument and one for a positive argument (see S1 Supporting Information for details of the calculation of parameters).
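As an illustration of how Eq (9) behaves numerically, the following sketch integrates it with a crude forward-Euler scheme. All parameter values, the fixed infected-mosquito density, and the simplified immune-compartment update are placeholders, not the paper's calibrated quantities.

```python
# Placeholder values only -- not the paper's calibrated parameters.
H = 4.0                         # humans per household unit
k12, k13, k14 = 0.2, 6.4e-7, 0.14
x8 = 1.5                        # infected mosquitoes per household, held fixed

def dx9_dt(x9, x10):
    """Right-hand side of Eq (9): vector-biased new infections minus
    removal through death (k13) and immunity acquisition (k14)."""
    susceptible = H - (x9 + x10)
    l = susceptible / H         # probability a mosquito bites a susceptible
    s = 1.0 - l                 # probability it bites an infected/immune host
    new_inf = k12 * l * susceptible * x8 / (s * (x9 + x10) + l * susceptible)
    return new_inf - (k13 + k14) * x9

# Crude forward-Euler integration over one year (time in days, assumed).
x9, x10, dt = 0.01, 0.0, 0.1
for _ in range(int(365 / dt)):
    x10 += k14 * x9 * dt        # assumed, simplified immune-compartment update
    x9 += dx9_dt(x9, x10) * dt
print(f"infected humans per household after one year: {x9:.3f}")
```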
Different outcomes and scenarios of the model
The equations of the model are solved using Wolfram Mathematica 8. The density of humans and mosquitoes, as well as the incidence of infected humans and mosquitoes, are expressed per household unit. The calculations were made for two case scenarios as a function of temperature and rainfall. The first hypothetical scenario was constructed by defining the rainfall input (rainy season) as a sinusoidal function with a constant period and intensity (amplitude), while the temperature remained fixed. In the second hypothetical scenario, we considered weekly temperature and rainfall data from eight consecutive years, obtained from meteorological stations in four cities in Mexico.
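For the first scenario, the rainfall forcing can be sketched as a clipped sinusoid; the amplitude and the fixed temperature below are illustrative values, not those used in the paper.

```python
import math

# Hypothetical forcing for the first scenario: a rainy season modeled as a
# clipped sinusoid with a one-year period and constant amplitude A, while
# the temperature stays fixed.  A and T_FIXED are illustrative values.
A, T_FIXED = 8.0, 28.0          # mm of rain per day, degrees Celsius

def rainfall(t_days):
    return max(0.0, A * math.sin(2 * math.pi * t_days / 365.0))
```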
Differentiation between indoor and outdoor temperatures
The distinction between indoor and outdoor temperatures is relevant herein because adult mosquito life is mainly spent indoors, while the aquatic stages develop outdoors. For our simulations, outdoor temperatures were taken from the respective meteorological stations, and the indoor temperature was correlated with them. For the latter, there was a region-dependent minimum temperature T_min. The maximum indoor temperature was also correlated with the outdoor temperature, but with a pull-down factor that reduces peak values, as observed in the real world [29]. The indoor temperature T_in is thus defined as:
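The defining equation itself does not survive in the text. A minimal sketch consistent with the description (a region-dependent floor T_min and a pull-down factor that attenuates peaks) might look as follows; the functional form, names, and constants are all assumptions, not the paper's actual definition.

```python
def indoor_temperature(t_out, t_min=22.0, pulldown=0.6, t_ref=26.0):
    """Hypothetical reconstruction of T_in: tracks the outdoor reading,
    never drops below the region-dependent floor t_min, and attenuates
    peaks above a reference temperature by a pull-down factor."""
    attenuated = t_ref + pulldown * (t_out - t_ref) if t_out > t_ref else t_out
    return max(t_min, attenuated)
```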
Epidemic prediction capacity of the present model after integrating meteorological and demographic data
The predictive power of the model was evaluated by entering the weekly temperature and rainfall data over the last eight years from four Mexican cities (in four different states), and comparing the projected dengue transmission with the number of confirmed dengue infections per year reported by the corresponding state health institutions. To establish a stable set of parameters calculated from the model, real data from 2008-2012 were used. The estimated values for 2013-2016 were then compared to the epidemiological records. To test the behavior of the model, we selected four dengue-endemic cities from two regions with distinct climates. The weather in the cities of Guadalajara and Colima is hot and dry, while that in Tuxtla Gutierrez and Campeche is hot and humid. The real incidence of infection is influenced by several natural and socio-economic factors that vary within and between regions and over time. Regarding entomological factors, the chemical control interventions (the most common for dengue control in Mexico) distort the baseline of epidemiological data for mosquito density. Although the impact of these interventions is hard to determine, we saw little difference between chemically-treated areas and those with no chemical intervention [30]. This observation justifies performing a direct comparison between the calculated values of the model and the epidemiological data.
Table 1 (continued). Parameter descriptions and values:
k10(T(t)): Mortality rate of healthy mosquitoes = (-90.76 - 9.54 T_out - 0.18 T_out^2)^-1 [17]
k11(T(t)): Mortality rate of infected mosquitoes = 1.56 k10 [21]
k12(T(t)): Infectious bite rate from mosquitoes to humans = 0.2 (1 - k11 τ) θ(1 - k11 τ) [17], [15], [22]
k13: Infected human death rate = 0.99 k15 [23], [24]
k14: Immunity acquisition rate = 0.14 [25]
k15: Human death rate = 6.5 x 10^-7 [15]
k16: Immunity loss rate = 4.5 x 10^-4 [26]
k17(P(t)): Mosquito emergence deactivation by drought = 1 - k18 (supposed)
k18(P(t)): Mosquito emergence activation by rain = θ(P - 1) (supposed)
k19: Mortality rate of eggs during drought = 0.018 [27]
τ: Extrinsic incubation period (EIP)
λ: Dengue incubation period in humans = 3 [20]
δ(P(t)): Rainfall-dependent ponderations = 1 - (0.1389 - 0.0136 P) [28]
https://doi.org/10.1371/journal.pone.0196047.t001
The approach of the study is briefly explained. For any given time t, the observed number of dengue patients was assumed to be correlated with both the local healthcare coverage (taking the number of doctors per inhabitant as a reliable indicator) and the proportion of symptomatic cases, the latter estimated for the most prevalent serotype, DENV-1. Thereby, the observed incidence was computed as (model-projected incidence) × (proportion of symptomatic cases) × (healthcare coverage) × (number of houses). The density unit used here is mosquitoes and persons per household unit. Hence, to obtain the total number of dengue cases, the corresponding density was multiplied by the number of houses in the entity.
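In symbols, with hypothetical numbers for illustration (only the 1/11 symptomatic proportion for DENV-1 is taken from the paper, as quoted in the Fig 5 caption below):

```latex
\text{observed cases}(t) = I_{\text{model}}(t)\times p_{\text{sympt}}\times c_{\text{health}}\times N_{\text{houses}}
\approx 0.02 \times \tfrac{1}{11}\times 0.8 \times 300{,}000 \approx 436
```

Here the model-projected incidence of 0.02 infected persons per household, the 80% healthcare coverage, and the 300,000 houses are placeholder values chosen only to show the arithmetic.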
The dynamics governing the infection and propagation of the dengue virus as well as the severity of the epidemic are complex and specific to the particular human and environmental factors in any given locality. A homogeneous surface unit was adopted for the model, in which precipitation, temperature, and mosquito and human density were taken into account to simulate a dynamic population. In this fashion, distinct scenarios were predicted.
Ethics statement
The present research was completed in accordance with the INSP (National Institute of Public Health in Mexico) ethical guidelines, and no experimental work was performed. The authors have no conflict of interest in the materials or procedures utilized in the study.
General properties of the model
Considering that transovarial transmission of the dengue virus is still a contentious issue, we initially set this factor to zero. The calculated values of the dengue transmission model are shown for different temperatures (Fig 2). The average annual temperature chosen for any given simulation ranged from 26-32˚C, which is the reported variation among the four virus-endemic cities herein studied. Temperature is a relevant factor for dengue transmission.
First, calculations were made for a temperature of 26˚C, finding the mosquito density, prevalence of dengue-infected mosquitoes, prevalence of dengue-infected humans, and prevalence of seropositivity in the population (Fig 2A). The model predicts a maximum average number of mosquitoes per house of five and a minimum value less than one. In the scenario of mosquito density close to one, the model estimates that dengue incidence decreases to a steady state close to eradication, although it does not disappear within a ten-year period.
The same parameters were calculated for a temperature of 28˚C (Fig 2B). The mosquito density increases slightly to an average maximum of seven mosquitoes per house. During a dengue outbreak, the proportion of infected mosquitoes peaks at 60% during the third year. The model projects a prevalence of dengue in humans of approximately 6%. In addition, the average estimated prevalence of human dengue seropositivity reaches 40% after four years.
With the temperature set at 30˚C, the results are similar to those found for 28˚C (Fig 2C and Fig 2B), except that the peaks of dengue infection last less time. The pattern of epidemic dynamics is interesting, with seropositivity reaching a transitory plateau of approximately 50% at 3 years of the epidemic, and then rising again after six years. This is probably due to the loss of immunity, on the average, of the humans who are infected during the very first waves of projected infection. Hence, despite the fact that the mosquito density remains roughly the same between 28 and 30˚C, the pattern of the epidemic is different. The distinct pattern can be explained by the temperature dependence of the equations determining mosquito development, survival and life cycle. The temperature affects mosquito oviposition time, post blood alimentation, and the time elapsed before a mosquito is capable of infecting the host (i.e., the time needed for the virus to reach the salivary glands). These factors are optimum for the mosquito vector when the simulation is executed with the temperature set between 28˚C and 30˚C. Under such conditions, the EIP is the shortest and the mosquito life expectancy the longest.
For a temperature set at 32˚C (Fig 2D), mosquito larvae undergo a shorter development time in the aquatic state and a lower mortality. The life expectancy of the mosquito terrestrial stage decreases. The sum of these opposite effects results in a very similar mosquito density when comparing 32˚C with 28-30˚C. Nevertheless, the shorter mosquito life expectancy at the higher temperature (32˚C or more) influences the dynamics of virus transmission, due to the reduced probability that an adult mosquito will live long enough to be able to infect a human host.
The effects of a 1˚C temperature increase (from 26 to 27˚C) are projected for the dengue infection over ten years (Fig 2E), predicting a reactivation of the epidemic. This increment in temperature creates more suitable conditions for dengue transmission because mosquito eggs in the resting state can survive for longer periods of time. The postulated existence of a pool of transovarially-infected mosquitoes could potentially lead to an outbreak in regions where dengue was supposedly eradicated. Finally, another important feature of the calculated values of the model shown in Fig 2 is the presence of peaks and valleys in the incidence of infection, indicating rich internal dynamics. This reveals that meteorological variables are the main driving force of the model, producing the periodic patterns seen from the initial assessment.
As an exploratory exercise, we calculated the outcome of the model using different rates of dengue transovarial transmission (Fig 3). The initial conditions computed by the model start with a mosquito density of zero, an egg density of 0.01 eggs per square meter, and all humans considered susceptible to dengue infection. With the temperature fixed at 28˚C, the direct effect of zero transovarial dengue virus transmission was analyzed (Fig 3A), finding the expected value of mosquito density at its normal level and no epidemic. Another scenario was contemplated in which 1/1 000 eggs are originally infected and 1/1 000 eggs laid by infected females carry the virus (Fig 3B). The mosquito density reaches its normal level, but there is a slow and gradual epidemic resurgence, reaching a proportion of seroprevalence in humans of 35/10 000 after ten years. Consideration was also given to a scenario in which 1/100 eggs are originally infected, and 1/100 eggs laid by infected females carry the virus (Fig 3C). The mosquito density again reaches its normal level, but the dynamics of the epidemic leads to the seroprevalence in humans of 5/100 at the tenth year. These results suggest that dengue vertical transmission has the potential of producing a notable resurgence in an epidemic.
Mosquito density
A phase diagram of human infection prevalence versus mosquito density was constructed (Fig 4). The most important predicted pattern of human dengue infection in the epidemic is that its intensity is not strongly dependent on mosquito density, unless such density borders on eradication. This unexpected outcome can explain the inadequacy of dengue eradication programs based on truncated insecticide/larvicide campaigns.
The relation between human infection and mosquito density changes as the temperature rises from 26 to 32˚C. With an increase from 26 to 28˚C, the model produces concentric orbits centered on a mosquito density of four female mosquitoes per house. The most suitable conditions for dengue virus transmission occur at 28˚C, with more infections per mosquito. Between 30 and 31˚C, the model projects a greater density of mosquitoes but with less effect on transmission. Again, the model shows that mosquito density is not strongly correlated with the incidence of disease.
In the present simulation, mosquito density has less influence on transmission than in other models. The temperature dependence of a dengue epidemic could account for the apparent contradiction posed by the observed rise in dengue cases in countries where the density of mosquitoes has been reduced by fumigation and vector control measures [30]. The epidemic expansion may have been due to the fact that, simultaneously with the diminished mosquito density, there was a change from a slightly to a highly permissive local temperature.
Model simulation of past epidemics using meteorological data
To evaluate the precision of the model in accurately simulating dengue transmission over time, we ran the model with the information on rainfall and temperature reported in the four cities under study during seven years. As mentioned in the previous section, the total annual dengue infections estimated by the model were further adjusted to take into account 1) the proportion of symptomatic cases, 2) healthcare coverage and 3) the number of houses in each city. When the projected prevalence of dengue was compared to the actual recorded prevalence (according to the Mexican Health Department data) during the latter three years, the number of dengue infections was closely predicted. Furthermore, the year-to-year patterns of infection (increases or decreases) were also accurately predicted. Hence, the results confirm the reliability of the present model for projecting dengue infections in the two different regional climates involved in the study (Fig 5).
Fig 5 caption: For the histogram, the blue bars portray the projections made by the model and the black bars the observed epidemiological data. In the temperature graph, the black lines correspond to the outdoor temperatures and the red lines to the indoor temperatures. Historical data on epidemics were gathered from the Epidemiological History Bulletin, available at https://www.gob.mx/salud/acciones-y-programas/historico-boletin-epidemiologico. Demographic data were taken from the National Household Survey 2015 (available at http://www.inegi.org.mx/saladeprensa/boletines/2016/especiales/especiales2016_06_05.pdf) and from the National Statistical and Geographic Information System (available at http://cuentame.inegi.org.mx/monografias/default.aspx?tema=me). Healthcare coverage information was obtained from CONEVAL (evaluation of social policy, available at http://www.coneval.org.mx/Evaluacion/Paginas/Indicadores_de_acceso_y_uso_efectivo_de_los_servicios_de_salud_de_afiliados_al-Seguro_Popular.aspx). Meteorological data (temperature and rainfall) were downloaded from the corresponding meteorological stations via the Mathematica 8 "WeatherData" function. The proportion of symptomatic cases for DENV-1 was taken as 1/11 [31]. Dengue virus transovarial transmission was set to zero for these calculations. In the graph, the temperature unit is degrees Celsius and the rainfall is in millimeters of rain. https://doi.org/10.1371/journal.pone.0196047.g005
Discussion
The dengue epidemic is one of the most mathematically modeled among infectious diseases. We herein compare the current epidemiological mathematical model to others based on differential equations. With the introduction of complexity, previous models tend to become chaotic, unstable and difficult to control, as evidenced by three reported models with a great number of mathematical relations and equations [32,33,34]. Such difficulties explain why many authors have tried to achieve stability in their model by reducing the "noise" in equations (i.e., omitting meteorological data). Contrarily, historical meteorological data and initial epidemiologic conditions are presently included, thus more closely simulating real-time dengue epidemic conditions. This likely explains the improved capacity of our model for predicting outbreak probability. The results reported herein demonstrate that feeding the model with accurate day-to-day data could provide a valuable tool for public health decision makers.
Nowadays, the importance of temperature and rainfall for mosquito development and population maintenance is well documented. Several authors have added these parameters to their dengue transmission models, either setting them at fixed values or permitting slight variations over time [35]. Transovarial transmission, which is now gaining recognition, has been included in some mathematical models [36,37]. Despite contemplating these elements, previous models have not successfully integrated them into a coherent equation system. The current model represents a relevant validated synthesis of many previous dengue modeling attempts.
Special emphasis was placed on the temperature and rainfall dependence of the following mosquito-related factors: the oviposition rate, aquatic development, mosquito bite rate, extrinsic infection period, and mortality rate. Three aspects left out by other authors were introduced in the present model: the existence of a resting state for mosquito eggs during the drought season, the use of two distinct temperatures (indoor for the behavior of adult mosquitoes and outdoors to model the aquatic cycle of development), and the insertion of real meteorological data.
Conclusions
We developed and verified an epidemiological mathematical model of dengue transmission that takes environmental variations into account. This model could be utilized to establish a dengue risk prediction map, potentially enabling governments to implement timely public health actions to diminish the impact of epidemics. Interestingly, a dengue outbreak was herein found to depend less on mosquito density than on environmental temperature.
The model includes the potential factor of vertical dengue transmission because it could possibly explain dengue outbreaks in regions where the virus had apparently been eradicated for several years. The impact of such dengue propagation is limited in this model unless the percentage of transmission reaches 1% at the optimal temperature of 28-30˚C (see Fig 3), a percentage never before recorded experimentally. Nevertheless, there is always the possibility of a butterfly effect that would amplify the impact of a vertical transmission event.
The transmission of dengue over long distances should be attributed mainly to human displacement, meaning that humans represent the actual dengue vector. Aedes aegypti mosquitoes are broadly recognized as peri-urban, moving about within a mere 120-meter radius over their lifetime, on average [38]. Although transmission is indeed mediated by the presence of mosquitoes, dengue transmission is inevitably linked to human interurban movements.
The Mexican territory comprises vast areas, including rain forests in the southern states of Chiapas and Yucatan, where the climatic conditions allow for a year-long active mosquito life cycle. On the other hand, a large portion of the central region, such as the states of Mexico and Jalisco, have a six-month dry season. The continuous movement of people between the latter two states alone would outweigh the effect of transovarial dengue transmission on disease outbreaks. One clear example occurred in 2014, in which 160 000 individuals were displaced within the country (idmc, Global Overview 2014: People Internally displaced by conflict and violence, May 2014).
The outcomes of the model as a result of a rise in temperature are particularly striking. The optimal temperature for dengue transmission is 28-30˚C. In recent years, there has been a transition to a more permissive temperature (25-27 ºC) in several regions of the world (27), triggering consequences for dengue transmission. The model predicts that even if the mosquito density diminishes with a higher temperature, transmission will still remain because of more optimum biological factors for the dengue virus. Hence, global warming will affect dengue transmission, as more regions will reach an optimum transmission temperature in the coming years. | 2018-08-14T19:12:27.132Z | 2018-08-06T00:00:00.000 | {
"year": 2018,
"sha1": "0993b50da5b2eb166fe6eb3c1bbb5113cbd7ce8d",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0196047&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0993b50da5b2eb166fe6eb3c1bbb5113cbd7ce8d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Geography"
]
} |
232353716 | pes2o/s2orc | v3-fos-license | Political Megatrends
Politics precedes the economy as critical decisions are made at a political level. This assumption requires determining the most important political changes and trends being observed as they directly affect decisions on economic development. Initially, this chapter presents the major political trends, which relate to the redistribution of income and wealth, the role of privacy and the middle-class and changes in social behavior, which are associated with complex and changing social and economic trends. Then, we analyze the way in which political behavior evolves in modern changing societies, but also the mixed-effects Covid-19 had on it, differentiating all the observed trends above. Thus, the chapter also focuses on the effects that Covid-19 has both on a political and cultural level.
Inequality and Social Mobility
The theoretical link between economic and political institutions and the distribution of income and resources produced in an economy is an issue that seems self-evident by definition but is, in reality, very difficult to demonstrate empirically. The issue of income and resource distribution is directly linked to the system of political institutions.
It is easy to understand that a political structure supports specific political forces, which then enjoy the benefits of their dominance (Alesina & Perotti, 1996). At the same time, it has been proven that the concept of democracy, as a political institution, is closely linked to income distribution (Acemoglu, Johnson, Robinson & Yared, 2008; Herwartz & Theilen, 2014; Perotti, 1996). This happens either directly, in a legal manner (e.g., income policy that is passed into law), or indirectly, via procedural interventions (e.g., rules of representation and strike decision processes).
On the other hand, when it comes to economic institutions, it is easy to understand their role in the redistribution of income and resources (tax system, insurance system, etc.). And yet, it is not easy to extract empirically confirmed judgments, for example, on the way financial systems develop and on issues of wealth distribution.
It is clear, however, that systems based on bank-centric development are more likely to prefer a resource redistribution system in favor of financial capital. In contrast, systems that rely more on the direct functioning of markets result in effects that favor shareholder wealth (Allen & Gale, 2000; Rajan & Zingales, 2001).
Revenue redistribution takes place through different mechanisms and is mainly influenced by fiscal policy. Attention must be drawn to identifying the policies that are most suitable for efficiently improving income distribution, and to the politicians who support them (Frankel, 2014). Income redistribution mechanisms are divided into:
• transfer payments (unemployment benefits, disability benefits, social security programs, pensions);
• progressive taxation, through which higher levels of taxation are imposed on higher incomes; and
• public provision of social services, mainly in education and health.
Although taxes and social transfers have a direct impact on income distribution, the public provision of social services is an indirect redistribution method with a more long-term and qualitative character.
Income inequality has increased worldwide in recent years, mainly as a result of technological progress, globalization, and the liberalization of the labor market (Teulings, 2014). The increase in inequality was even greater due to the 2008 global financial crisis (Hellebrandt, 2014), while the reverse does not seem to hold, as the global crisis cannot be blamed on inequality (Bordo & Meissner, 2012).
In reality, fiscal consolidation policies appear to have significant distributive effects, widening inequality, reducing the share of income from wages, and increasing long-term unemployment (Ball, Furceri, Leigh & Loungani, 2013).
Barry Eichengreen identified, in a 2016 speech in Lisbon on the issue of inequality (as cited in de Long, 2016), six similar historical phases in the last 250 years: (a) The increase in Britain's income inequality from 1750 to 1850, where profits from the British Industrial Revolution did not go to the poor but to the middle-class of the cities and the countryside. (b) From 1750 to 1975, income inequality spread worldwide as profits from industrial and post-industrial technologies were not shared equally by all. (c) From 1850 to 1914, living standards and labor productivity levels converged in the global North, as 50 million people left Europe to settle in places with abundant resources, transferring institutions, technologies, and capital. (d) From 1870 to 1914, domestic inequality increased in the global North, as entrepreneurship, industrialization, and financial manipulation channeled new profits mainly to the wealthy. (e) From 1930 to 1980, higher taxes on the wealthy helped to provide benefits and support public programs. (f) From 1980 to the present, economic policy choices have again resulted in increasing inequality in favor of the global North. The arrival of the Covid-19 crisis has affected income and wealth inequality, exacerbating the phenomenon.
The degree of inequality in the case of the Greek economy compared to the Eurozone countries for the year 2018 is presented, through the Gini coefficient index, in Fig. 8.1.
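Since the Gini coefficient is the chapter's central inequality measure, a brief computational sketch may help; this is an illustration of the standard rank-based formula on the 0-100 scale used in the figures, not code from any cited source, and the sample incomes are invented.

```python
import numpy as np

def gini(incomes):
    """Gini coefficient on a 0-100 scale, computed from the
    rank-weighted sum of sorted incomes."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    return 100 * (2 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n)

# A perfectly equal society scores 0; extreme concentration approaches 100.
print(gini([10, 10, 10, 10]))   # -> 0.0
print(gini([0, 0, 0, 100]))     # -> 75.0
```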
In recent decades, growing global inequality has been reflected in the significant cut in the labor share seen in most countries (Karabarbounis & Neiman, 2014).
The decrease in labor share is combined with increased income inequality for two reasons (Dao, Das, Koczan & Lian, 2017): (a) low-skill workers have borne the drop in labor share, while there are indications of reductions in professions that require intermediate-level skills and of income cuts for middle-skill workers in developed economies; (b) capital is mostly concentrated in high-income groups and, therefore, the income increase from capital tends to increase income inequality.
The decline in labor share taking place to a large extent in developed economies may be due to technological progress, which is accompanied by a sudden drop in the relative price of labor, in combination with the emergence of professions that do not involve repetitive routine tasks, something that seems to have a greater impact on the gains of middle-skilled workers. In developing economies, the decline in labor share may be mainly due to the global integration process and, in particular, to the expansion of global value chains that contributed to higher capital intensity in production.
Income inequality may be of concern to members of a particular society or between societies themselves. It is an issue with an ethical and an economic dimension, having a philosophical basis which assumes that the earth's resources, common to all its inhabitants, are ultimately used for the production of income and wealth. As a result, the question raised is whether the wealth generated has been fairly distributed, and how, and to what extent, it is fair for it to differ.
In addition to the ethical dimension, there is also a developmental issue, which is expressed by the question concerning the causal relationship between the distribution of income and wealth and growth itself (Cingano, 2014). Despite the vast bibliography on the relationship between inequality and development, there are no clear indications at an empirical level as to whether the existence of inequalities has a positive or negative effect on growth rates, and to what extent.
At the same time, it is very likely that there are forces being developed in the political system fueled by the growing income and wealth redistribution, which may have the tendency to maintain and influence the distribution itself and, ultimately, the potential for economic growth.
However, this approach is only partially valid. While the rich are getting richer in developed economies, developing countries are growing faster than developed ones, thus reducing global inequality (Fig. 8.2). As a result, the Gini coefficient is expected (Hellebrandt & Mauro, 2015) to decrease worldwide from 65 in 2013 to 61 in 2035 (it stood at 69 in 2003), although it is not clear from the characteristics of the Covid-19 crisis whether this long-term trend will be interrupted. And while income paid globally to the higher echelons (90th percentile) in 2013 was 31 times higher than that of the lowest echelon (10th percentile), this ratio is expected to decrease to 24 times by 2035. Of course, this global improvement is expected on the condition that inequality within countries will not increase at an unprecedented rate.
In 2013, 40% of those at the lowest decile (decile 1) of the world's income distribution, were in Sub-Saharan Africa, more than 30% were located in India, 15% in China, and the rest largely in East and South Asia and the Pacific region. Almost 80% of the top decile (decile 10) was located in the advanced economies, with about 10% in China and the rest distributed mainly in the countries of Eastern Europe and Central Asia, Latin America, and the Caribbean as well as East Asia and the Pacific.
A reduction in income inequality is thus observed as: (a) the share of developing countries to high-income levels is being increased; (b) for developed European Union (EU) and Organisation for Economic Co-operation and Development (OECD) countries, Eastern European, Central Asian, Latin American and Caribbean countries, there is a shift in population numbers to lower income levels; and (c) for developing countries (mainly China, India and East Asian and Pacific countries) there is a shift in population levels toward middle and higher income levels. The only area where income inequality is expected to worsen is in Sub-Saharan Africa, where due to the low growth rate of its core economies, its share in low-income groups is expected to significantly increase in 2035.
In contrast, the development of income inequality within countries has varied, especially in some advanced economies, where low and middle incomes have been declining or stagnating, inequality has increased.
The unequal distribution of income is, of course, naturally linked to the unequal distribution of wealth. The unequal distribution of wealth (after taxes) is, on average, twice that of income (Balestra & Tonkin, 2018). In OECD countries, the richest 10% of households own 52% of total wealth, with their respective share of income distribution being 24%. The countries with the highest wealth inequality are the United States, followed by Denmark and the Netherlands, while those with the lowest are Slovakia and Japan. In Greece, the richest 10% held 38.8% of total wealth in 2009, with this share rising to 42.4% in 2014 (Balestra & Tonkin, 2018).
Regarding households with the lowest levels of wealth, this does not necessarily mean that they are poor in terms of income. Nor does it mean that they hold no assets: in the cases of the Netherlands, Denmark, Norway, and Ireland, households in the bottom 20% of wealth levels are found to combine high reserves of assets with high debt levels.
After the 2008 financial crisis, wealth inequality in United States and United Kingdom increased (Balestra & Tonkin, 2018), as falling prices and lower real estate returns, along with the rising prices of financial assets, benefited those with higher levels of wealth.
Rising inequality rates increase the risk of poverty in societies. In Greece, the percentage of people at risk of poverty (after social transfers 1 ) increased, as expected, after 2008 by 3 points, but at a decreasing pace as of 2013. Specifically, in 2018, 2 was 18.5% (for per capita incomes below 4718 euros per year) and is estimated to relate to more than 760,000 households, or the equivalent of 1,954,400 people (Hellenic Statistical Authority [ELSTAT], 2019). The corresponding rate before social transfers and pensions reaches 50%. 3 More than 1/3 of the world's population, with an income above the poverty line, is economically vulnerable, meaning that they are unable to cope financially with a sharp loss of income (Balestra & Tonkin, 2018). In Greece, this ratio exceeds 1/2, ranking among the highest among OECD countries.
With income inequality and wealth being one side of the coin, developments in social mobility are on the other side. Intergenerational mobility is socially important, both from a justice and an economic efficiency point of view. Figure 8.4 presents the World Economic Forum's (WEF) Social Mobility Index. 4 The main conclusion of the report is that a person's opportunities in life remain linked to his or her socioeconomic status at birth, a fact that reinforces historical inequalities. This is a significant problem, not only for the individual, but also for society and the economy. Low social mobility combined with an inequality of opportunities creates obstacles to economic development.
The Nordic countries achieve the best results as they combine access, quality, and equality in education, offering job opportunities under good conditions, along with quality social protection systems without exclusions. Greece ranks in the last positions among high-income countries (and 48th among 83 economies in the world), beating only Saudi Arabia and Panama. The above indicator reveals that there are few countries with the right conditions able to promote social mobility. Most countries have weaknesses in four areas that affect social mobility: fair wages, social protection, working conditions, and lifelong learning.
Perceptions of mobility vary widely between countries. In the United States, if one works hard and "plays by the rules," one can expect to enjoy a living standard higher than that of one's parents. And it is precisely this promise of intergenerational mobility that has been invoked as justification (Jacobs & Hipple, 2018) for persistently high poverty rates and high economic inequalities in the largest market economy in the world. Despite widespread belief in upward mobility, improving economic opportunities in the United States is clearly a challenge, more so than in other advanced economies. While relative mobility in the country has not deteriorated dramatically in recent decades, the combination of static relative mobility and a reduction in absolute mobility means that economic opportunities seem to be considerably fewer for young adults today than in previous generations.
A society with high mobility between generations is one where the well-being of an individual, in comparison to others of his generation, is less reliant on the socioeconomic status of his parents. Forty percent of Greeks believe that they lived better than their parents, while at the same time, the percentage of those who believe that their children will live better than them, falls by almost half (Narayan et al., 2018).
Undoubtedly, there are two basic reasons why higher mobility in a society should be the goal of public policy: justice and economic efficiency. When mobility is low, the chances of success are largely determined by birth, which runs counter to the basic concepts of justice in most societies. Low mobility also hinders the development and effective use of human resources and the efficient allocation of resources, as talented people from disadvantaged families are cut off from opportunities that ultimately benefit those born with greater privileges, and not those with the greatest potential. Limiting this inefficiency is beneficial for economic development. Given that the wasting of human resources is more likely to occur among low-income levels, policies that promote higher social mobility are likely to promote more inclusive development.
It can be seen that for large sections of the world's population born in the 1980s, a person's education is still very closely linked to their parents' education (Narayan et al., 2018). Sub-Saharan Africa and South Asia stand out as the areas with the lowest levels of mobility. Thirteen of the 15 countries with the lowest mobility are either in Africa or South Asia. Conversely, the highest levels of mobility are in Western Europe, Canada, Australia, and Japan. On average, mobility is significantly lower in developing economies (with low and medium incomes) compared to high-income economies (Narayan et al., 2018). Among the developing economies, East Asia and the Pacific, the Middle East, and North Africa are the areas with the highest average mobility relative to education, which nevertheless remains well below the averages in high-income economies.
Averages relative to social mobility are lower in developing economies, without any indication that the gap with developed ones is narrowing. Additionally, income mobility in many developing economies is much lower than their level of educational mobility would allow us to expect.
These findings support that improving intergenerational mobility requires policies to reduce opportunity inequality at all stages of life, promoting the development of human capital.
The Strengthening of Privacy, the Role of Individual Skills, and the Development of the Middle-Class
The role of the individual in the context of a changing environment is also changing. While there were previously divisions that clashed with the "continuation of history," with the world formed into the western and eastern (communist) blocs, the role of individuals was in a weaker position when compared to "collective societies." But that changed when the world "integrated," providing a new and much stronger position to individuality. In the current period, due to the global crisis, social needs are creating more pressures, leading the individual to face even more challenges globally.
The new individual possesses strength, skills, and abilities, in order to be able to meet the changing conditions of globalization, and also to make decisions and meet current and future goals. When the individual is in a position to adequately perceive the reality around him, then he can achieve what we call "individual empowerment." The stronger the individuals that make up a society, the stronger that society is.
Thus, the concept of individual empowerment initially refers to a process of transforming the individual, during which the individual is being improved and takes control of his decisions. Then, regarding his empowerment process, he gains self-confidence which allows him to make decisions. It is an interactive process that takes place between the individual and his environment. The results of the process are being translated into skills, based on knowledge and abilities, with key characteristics being (Kieffer, 1984) the formation of political and social consciousness, the ability of co-operating, active participation, the ability to cope with failure, and the struggle for influence over the environment. The individual creates the suitable conditions to be in a position of choosing the most appropriate solution among various alternatives, having full knowledge of all available options. This significantly increases the range of possibilities to shape future situations.
The important role of an empowered individual can be made very clear through his performance in the workplace. No vision and no strategy can be achieved without capable and strong employees (Argyris, 1998). This is why top executives take on the responsibility of trying to develop specialized employees. A common feature of countries where the problem of mismatched skills (i.e., the deviation of the workforce's qualifications and skills from labor market demands) is particularly evident is the low level of public resources devoted to the education and training of individuals.
The funding for education in Greece, as a percentage of Gross Domestic Product (GDP), is one of the lowest in the EU (Fig. 8.5), a ranking that has been consistent over time (Statistical Office of the European Communities, 2019c). This, in combination with the (often) inefficient use of resources, contributes to a reduction in workforce quality and also negatively affects its ability to adapt to the labor market's changing needs.
When the individual is empowered, he develops the necessary skills and characteristics to adapt to new conditions. The mismatch of skills negatively affects the competitiveness and growth of economies, increases unemployment, undermines social inclusion and bears significant economic and social costs. These developments increase levels of uncertainty.
Developing the skills of individuals is seen as necessary in order to take advantage of opportunities that arise and to address challenges posed by the ever-increasing demands of changing economies and new globalization technologies. Economic developments include concerns about the reality that is expected to emerge in the wake of the recent large global crisis. Judging from centuries of capitalism's history, a period of long-term recovery is expected, but this is not certain; neither is its pace, nor whether the recovery's benefits will be distributed equally among countries. The recovery will be determined by its driving forces and the extent to which it will be driven by the demand coming from the rise of the middle-class 5 in developing countries. Figure 8.6 shows the declining role of consumer power among the middle-class in Europe and the United States over time (until 2030) and its replacement by corresponding forces coming from China and India (Kharas, 2010).
The shift in demand patterns will have multiple effects on the global organization of production and growth. The recovery created may be fueled by supply coming from either energy supply conditions, or the use of new technologies. It can also be strengthened by the supply that comes from reducing debt-to-GDP ratios and the correcting of macroeconomic imbalances.
In many countries, mainly developed ones (principally the United States and Europe), the role and importance of the middle-class has declined in recent years. Middle-class life is typically associated (beyond specific income levels) with certain goods, services, and living conditions, such as decent housing, a good level of education and health, and a healthy environment.
In most societies, the middle-class makes up the majority of the population. Income and expenditure levels of the middle-class are relatively higher (as a percentage) than the size of its population in OECD countries (OECD, 2019), which shows its contribution to the economies of these countries. In Greece, spending by the middle-class accounts for 57% of total expenditure (OECD, 2019) and is directly proportional to the size of the population and its income, but is lower than the OECD average. The contribution to expenditure from the lower class is 20%.
The change in income seen by the middle-class in Greece over the last two decades has been impressive, as was the case in Ireland and Spain, while in other countries it has been milder (Fig. 8.7). During the decade 1995-2005, the middle-class in Greece saw its income increase by more than 40% and its expenditure by almost 55%. In contrast, over the next decade, its income fell by 45% and its expenditure dipped by 40%, a trend showing how painful the crisis has been for the country's economy. A similar development is observed in Spain. In Ireland, however, in the decade 2005-2015, expenditure by the middle-class decreased by 27%, but at the same time its income continued on an upward trend (4.5%)! Consequently, as the cost of living increases (costs increase faster than income), many middle-class Greek households have become financially vulnerable, with some having excessive debt levels. Nearly 40% of middle-income households in 18 European OECD countries are economically vulnerable, with this figure reaching a maximum of 70% in the case of Greece (OECD, 2019). At the same time, 95% of Greek middle-income households say they are unable to meet expenses related to necessary goods (in OECD countries the average rate is 40%) and 50% say that they spend more than their income (in OECD countries the average rate is 20%).
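To make the income-band definition used in this discussion concrete, the short sketch below classifies households into lower-, middle-, and upper-class groups. It follows the OECD-style definition quoted in note 5 (disposable income between 60% and 200% of the median); the sample incomes, the function name, and the printed share are illustrative assumptions, not figures from the chapter.

from statistics import median

def classify_households(incomes, lower=0.6, upper=2.0):
    """Label each household relative to the median disposable income."""
    med = median(incomes)
    labels = []
    for income in incomes:
        if income < lower * med:
            labels.append("lower")      # below 60% of the median
        elif income <= upper * med:
            labels.append("middle")     # 60-200% of the median
        else:
            labels.append("upper")      # above 200% of the median
    return labels

# Hypothetical household incomes, for illustration only.
incomes = [8_000, 12_000, 15_000, 18_000, 22_000, 30_000, 75_000]
labels = classify_households(incomes)
print(f"middle-class share: {labels.count('middle') / len(labels):.0%}")

Run on the toy data above, the sketch reports a middle-class share of roughly 71%, which is all the classification does: the thresholds, not the arithmetic, carry the economic content.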
With the sharp rise in unemployment, the Covid-19 crisis has again weakened the position of the middle-class both in Greece and all over the world, since most of the unemployment came from its ranks.
A large and secure middle-class is the solid foundation on which an effective, democratic state is built and maintained, according to the ideas of the French Enlightenment. Its disappearance could play a major role in sustaining very low development levels, high corruption levels, and social tensions, mainly due to weaker support for development institutions and the harmful influence of pressure groups.
The reduction of the power held by the middle-class does not take place through a slow evolutionary process, but occurs abruptly, within a short period of time, mainly in times of economic recession. The factors that put pressure on it include increased taxation, employment reduction, and the introduction of flexible forms of work.
At the same time, in developing countries, the middle-class is getting stronger. Three decades ago, these countries did not have a middle-class, as their societies were marked by high income inequalities and the majority of people lived below the poverty line, while the upper classes enjoyed concentrated economic power. Economic growth in these countries after the 1990s (with the most important examples being China, India, and Brazil) gave a significant boost to large masses of the poor population, raising their per capita income and strengthening the middle-class. In fact, this process has continued in countries such as Kenya, Nigeria, and Tanzania.
The Cultural Evolution: The Search for Post-materialistic Society
Reflections of the cultural background are located in both economic and political institutions. Initially, the cultural background affects the quality and functioning of political institutions, and this constitutes the first-round effects. Then, political institutions shape economic institutions (the second-round effects), which in turn create structures and provide incentives for individuals to take action. The prevailing economic institutions ultimately determine the wealth distribution and the extent of economic growth. This amounts to the third round of effects. The first and very critical level of influence, the interconnection between cultural background and political institutions, appears when different portfolios of cultural values and practices prevail in a society, developing different political institutions. Societies, for example, that place great emphasis on the notion of collectivism naturally form participatory institutions at different levels of social organization. If, on the other hand, the concept of results has a dominant position in a society's cultural organization, then society itself, especially in times of crisis, will more easily accept a solution of authoritarian rule. Inglehart (1997) states that it is foolish to believe that culture is neutral. Every society legitimizes the establishment of a social order, in part because the ruling class seeks through culture to shape the values that will help perpetuate it (Inglehart, 1997). Alesina and Glaeser (2004) argue that in order to identify the forces that have shaped the current form of society, one must include in the analysis a review of historical events, while one must also identify what interests are being served by the dominant cultural background. At the same time, when referring to the Western world, what is needed is to focus on how Western political leaderships managed to direct the public into supporting their ideas and positions.
Politicized culture, as it is known, is an aspect of cultural background that has been deliberately created by political leaders to direct groups of people (de Jong, 2009). There are many examples where leaders, in their efforts to lead their country to economic growth, implemented development programs based on the values of other countries or religions (e.g., the case of Malaysia in the 1980s and 1990s). Consequently, the cultural background may be used, as these examples show, to promote the goals of the political leadership.
A cultural background dominated by features of collectiveness and the denial of privacy will tend to develop processes where the state has a highly interventionist nature. However, a cultural background dominated by collective-type characteristics and the promotion of privacy creates a very different environment, where the two types of political institutions coexist in a context that is more characteristic of Northern Europe. On the other hand, a cultural background dominated by uncertainty will tend to confront high levels of uncertainty by creating a large number of complex laws and regulations, a characteristic of Europe's south. Conversely, a cultural background dominated by trust and certainty may organize the functioning of society using simpler customary procedures. It is clear that a bulky and complex legal framework does not necessarily ensure efficiency.
The cultural background, therefore, influences political institutions, which then shape structures and motivations. However, apart from this indirect process, the individual elements of economic institutions are also directly affected by the cultural background. Economic institutions, finally, shape growth conditions and income and wealth distribution (the third-round effects). These variables, as well as the individual elements of economic institutions, are directly dependent on cultural values. The extent to which society's goals are linked to the per capita product or income, or to the happiness enjoyed by its members, depends on its culture. At the same time, income and wealth distribution may result from the functioning of economic institutions, but it is directly related to the values of society and, more specifically, to whether society's desired goal is greater or lesser redistribution. Finally, it should be noted that the growth rate and the distribution of wealth and income, in turn, affect political institutions.
To sum up, the cultural background influences the formation of political institutions and the policy pursued. However, at the same time, the political field influences the formation of the cultural background.
There is, of course, one more distinction that is very pronounced in society. It is formed by the dipole of forces that influence political and social human behaviors: a problematic financial situation (economic have-not) and/or a riveting cultural behavior (cultural backlash). This distinction refers to the extent to which citizens deliberate and vote based on their economic difficulties (real and/or comparative) or based on their reaction to evolutionary cultural change. It should be noted that these two hypotheses have been used to explain the rise of populist parties worldwide and constitute the most serious political platforms for the prevalence of populist political parties and the widespread repercussions of populism.
Cultural backlash behavior is reinforced by issues such as the impact of civil conflicts (Catalonia, Greece), while loss aversion behaviors are reinforced by nostalgia for previous levels of prosperity and by the reassurance that there will be positive economic developments (recovery), especially when combined with lower tax rates. Additionally, the raising of issues related to corruption stems from the cultural characteristics of lack of trust and loss aversion.
Suspicions that cultural backlash behavior will prevail are strengthened by the fact that in advanced Western societies there is now a perception, in society and in politics, that the economic situation is not the primary issue concerning citizens (Fig. 8.8).
The onset of the European economic crisis changed attitudes after 2010. In 2010, the biggest problems in European society, as reflected in key indicators, were unemployment and the economic situation, but these indicators later declined.
What do these findings mean? The world (first the United States, then Europe, and then Greece) is leaning toward a post-materialistic era, in which economic issues play a minor role. This shift will be completed once economic conditions improve, as the consequences of the 2010 crisis fade. That is why the political conflicts expected in this field concern loss of social position, refugees, terrorism, etc. In Greece, however, economic issues continue to be a priority for citizens, though with a declining tendency.
The Covid-19 crisis is too great for us to expect that it will not affect people's social behavior and priorities, job choices, and lifestyle. Of course, at this point it is too early to estimate how this will play out. But it is very logical that, in the short term, issues such as health, unemployment, and economic conditions will be of much greater importance.
In conclusion, we believe that the post-materialistic period of reflection will give way to a period of economic uncertainty, where basic behavioral hypotheses, such as the insecurity hypothesis, the cultural backlash hypothesis (see next section), and the economic have-not hypothesis, will gain new power, each perhaps for different reasons.
The Evolution of Political Behavior
Growth does not mean that every aspect of life is continually improving. That would not be evolution; that would be a miracle (Pinker, 2018). The belief or perception, however, that things are much worse than they really are is widespread, and it has a significant harmful effect on societies. If we assume that disaster can strike at any time, we will in all likelihood invest mainly in security and not sufficiently in education or other aspects of prosperity. The political consequences are also damaging, as citizens turn to demagogic views. At the same time, the opposite perception, that things are always and inevitably changing for the better, can also be counterproductive. In this case, why make any changes? A more constructive approach is to acknowledge that things are getting better, but that this progress is neither automatic nor optimal (Fengler, 2019).
But why are we, ultimately, so pessimistic? Firstly, our brains work in such a way as to make us exceptionally responsive to risks. As a result, people pay much more attention to negative, rather than positive, news. Secondly, negative news is "more significant," as it is more dramatic, sudden, and spectacular. Thirdly, this "partiality of negativity" is further reinforced in the era of social media. In the past, traditional institutions, authorities, and bodies (such as the church, political parties, trade unions, etc.) defused extreme positions. Today, these traditional methods of mediation have largely collapsed and the new ways of interacting put people directly in contact with each other.
The aftermath of the 2008 economic crisis and of Covid-19, as well as the consequent slowdown of the world economy, seems to be creating additional problems for the political system (Fig. 8.9). This situation is getting worse due to the intensifying pressure put on the low and middle-classes as a result of the crisis and of the income reclassification caused by globalization in the United States and the EU in recent years. In fact, the Covid-19 crisis somehow "confirmed" these views. In Europe, the average share of populist parties in national elections and European Parliament elections has doubled since the 1960s, rising from about 5.1 to 13.2%, at the expense of centrist parties (Doring & Manow, 2016).
Social anger at the political elite, economic dissatisfaction, and anxiety about rapid social changes have fueled political unrest in many parts of the world in recent years. Leaders, parties, and movements, both on the right and on the left of the political spectrum, have, in some cases, challenged the fundamental rules and institutions of liberal democracy. Discontent with democracy is linked to economic disappointment, the status of individual rights, as well as perceptions that political elites are corrupt and uninterested in citizens (Wike, Silver & Castillo, 2019).
Additionally, in Europe, results show that dissatisfaction with the way democracy is operating is linked to EU citizens' views on the EU and on whether immigrants are adopting national customs, as well as to attitudes toward populist parties. These emerging trends have had mixed (negative and positive) effects from the Covid-19 crisis, so we explore this issue in more detail in the next section.
Views on economic opportunities also play a role. Those who believe that their country is a place where most people are not able to improve their living standards are more likely to be dissatisfied with the way democracy works. Personal income, however, is not the only factor, and multi-level analysis suggests that, in general, demographic variables (including gender, age, and education) are not closely linked to this discontent. While views on economic conditions are strongly linked to how performance is perceived, non-economic factors also play an important role. Opinions on how well democracy functions in a country are related to whether citizens believe that their fundamental rights are being respected. Dissatisfaction with the functioning of democracy is also linked to perceptions of how people are treated by a country's judicial system. Apart from views on political rights, the stance toward politicians is important, as people who say that politicians in their country are corrupt are more discontented with the way democracy is working in their country.
On a long-term basis, the rise of populism has coincided mainly with the electoral decline of the center-left and social democracy and, in some cases, with the rise of the center-right. The center-left's decline is linked to the reduction in the middle-class' social power due to technological changes, the reduction of labor's share of value in products produced after the fall of the Berlin Wall, as well as the representation problems posed by liberal democracy, mainly due to technological changes and the consequent spread of social media, which influenced sources of information and the decision-making process.
The financial consequences of populism in the current phase of the global economy are quite controversial, given deflation, low interest rates, insufficient effective demand, weak economic growth, and the dominance of uncertainty in international trade. More specifically, although increased fiscal expansion seems to be gradually fighting off deflationary conditions (stimulating active demand and, consequently, fueling economic growth), the growing influence of populist governments on central banks remains relatively unspecified in economic terms. In general, however, it seems to be hurting the expectations of economic actors, given that uncertainty surrounding the central bank's decision-making process runs counter to their rational predictions, possibly cultivating a pessimistic environment for economic activity. In addition to these two dimensions of economic policy, populism seems to be disrupting a third dimension, that of structural reform, as it poses an indisputably large threat to the background of political and economic institutions. It may also have a catalytic effect on free international trade, as when populism is accompanied by nationalism, mercantilist conditions seem to emerge, leading to practices of protectionism and limited economic extroversion.
Furthermore, the possibility of a global recession, which seems quite likely given the current characteristics of the world economy, makes populist-driven policies more attractive than if the global economy were at a different stage of the economic cycle. In particular, these policies are distinguished by monetary expansion, with the ultimate goal being the stimulation of inflation (low interest rates and increased money supply), by fiscal expansion aimed at strengthening sluggish demand by boosting deficits and public debt, and, in general, by nationalist-oriented policies strengthening the primary sector. Given the above, there is a general tendency of populism to invoke national feeling, to use expansive monetary and fiscal policy at the cost of future generations and for the benefit of the present generation, while maintaining a controversial stance toward globalization.
Two fundamental factors favor the implementation of these policies: (a) the plethora of political forces that promote populism, which are gradually strengthening at the parliamentary level and/or winning national elections, thus gaining executive power and taking over the governance of their countries; and (b) the fact that current economic conditions provide the right backdrop for populist economic policies, which further reinforces their position, undermining other approaches to economic policy. Further structural trends that contribute to the revival of populism are:
• the gradual weakening of representative democratic governments, given that the principal-agent problem between voters and national governments seems to be intensifying, or a supranational government is in charge, as in the EU;
• the limited barriers to the political organization process, due to the power of social media, which serve populism by spreading it; and
• the sense of intense economic disappointment that has gradually become established since the Great Recession of 2008 and is reflected in low price levels, sluggish demand, weak growth, and increased income inequalities in developed economies.
Moreover, it seems that a plethora of developed economies (some of which are included in the G7) have, over time, been caught up in the political trap of populism, while a large majority of populist political forces are increasing their power beyond expectations.
More worrying is the fact that, under the rule of populist regimes and with the spread of the notion that the current monetary policy framework is outdated and inefficient, the very existence of central banks and monetary policy has been seriously questioned. In particular, the accusation that central banks are institutionally biased toward keeping inflation low is supported by the persistent behavior of low prices and deflation.
The pursuit of influence over central banks and expansionary fiscal policy are common key features among populist governments. Based on these two characteristics, together with interventionist economic policy, the degree to which nationalist views prevail within countries, and attitudes toward immigration, Table 8.1 ranks 13 countries based on the degree of populism distinguishing each government, with Greece in a relatively high position. The cultural backlash theory of Inglehart and Norris, which sees populism as the result of rigidity in traditional social categories driven by social change, does not allow us to understand why so many voters have moved from an economic definition to a cultural definition of their identity. Political identity is a group stereotype. As neither "camp" exactly meets our expectations, we choose the one we are closest to and which is also the furthest from the ideas we reject (Pisani-Ferry, 2020). This identification, once made, colors our perceptions of reality. There are, however, different ways of defining ourselves politically: on an economic basis, in relation to work challenges, income distribution, and social mobility, or, alternatively, on a cultural basis, in relation to levels of openness to minorities or attitudes toward migration. The coexistence of these two dimensions (and possibly more) can lead voters to move from one stereotype to another. This analysis makes it possible to understand how moderate-sized social developments can cause political restructuring. At any moment, economic and cultural preferences coexist, and only a small change is required to change human actions.
After all, economic nationalism, defined as the preference for policies that promote national economic interests to the detriment of foreign interests, has increased since the mid-2000s. This is a broad increase and includes advanced and emerging market economies, right- and left-wing parties, as well as incumbents and new entrants. While parties labeled as populist by political scientists tend to have far more nationalist proposals for economic policy, the shift in preferences toward economic nationalism is broadly visible. Clearly, these shifts are not universal. In advanced economies, the biggest changes relate to limitations on migration and trade. In emerging economies, the biggest shifts in preferences were in connection with industrial policies that favored specific sectors and industrial concentration. Trade protectionism and skepticism toward multilateral organizations and agreements have increased in both advanced and emerging economies. Right-wing parties tend to be more nationalistic than left-wing parties in terms of imposing restrictions on migration and foreign direct investment and in being involved in confrontations, but there is no significant difference in terms of trade protectionism (de Bolle & Zettelmeyer, 2019).
A recent attempt to measure populism comes from the free-market think tank Timbro, which calculated the Timbro Authoritarian Populism Index (TAP) from 1980 to 2020 for 33 European economies (see note 6); countries are included as soon as they are categorized as a "free" society by Freedom House. Figure 8.10 shows the percentage of votes for populist political parties, highlighting the difference between Greece and the rest of Europe. It is worth pointing out that since 2018 the populist expressions of the electorate in Greece seem to have become limited, especially after the July 2019 elections, when the SYRIZA government, which had ruled since 2015, was replaced by the New Democracy government.
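For intuition, the quantity plotted in Figure 8.10 can be thought of as the summed vote share of parties categorized as populist in a given election. The sketch below is a minimal illustration of that aggregation, assuming hypothetical party names and vote shares; the actual TAP index relies on Timbro's own party categorization described in note 6.

def populist_share(results):
    """Sum the vote shares of parties flagged as populist."""
    return sum(share for _, share, is_populist in results if is_populist)

# Hypothetical single-election results: (party, vote share, populist?)
election = [
    ("Centre-Right Party", 0.38, False),
    ("Centre-Left Party", 0.30, False),
    ("Right-Populist Party", 0.14, True),
    ("Left-Populist Party", 0.09, True),
    ("Other", 0.09, False),
]
print(f"populist vote share: {populist_share(election):.1%}")  # 23.0%

All of the difficulty in such an index lies in the categorization flag, not in the arithmetic, which is why note 6 devotes so much space to how parties are classified.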
In conclusion, after the domination of capitalism at the end of the twentieth century, a typology of political systems emerged which, as it turned out, has played a decisive role in economic decision-making.
It is therefore clear that this typology is influenced by the global forces that shape the future and has its starting point in the disintegration of the archetypal political formations belonging to both the left and the right. We could argue that this distinction was strong up until the fall of the Berlin Wall (November 9, 1989), which led to the global domination of capitalism. Then, after a period of dominance of center-left and social democratic political forces in the West, the lack of division along political lines, together with the simultaneous dominance of capitalism, began to favor the center-right in the West. At the same time, a different form of state and authoritarian capitalism emerged, mainly in China, the oil-producing countries, and Russia.
The impact of the 2008 economic crisis, the decline in the special weight of the middle-class, the resumption of GDP growth but with widening differences between domestic and transnational incomes, and changes to the cultural background (the cultural backlash hypothesis), along with the rise of technologies emphasizing social networking and information platforms, are accompanied by the general decline of liberal democracy (separation of powers, political rights, etc.) and particularly by the shrinking of the center-left and social democracy.
Their position has been taken by political expressions of a center-right nature, in line with the general prevalence of capitalism; this is the combined result of all the above.
In particular, in Greek society and beyond, after the turbulent period of 2010-2018, there are factors of political change that are in line with international trends, the main feature being the prevalence of center-right and populist views (Petrakis, Kafka, Kostis & Valsamis, in press) that refer to the cultural and political background of Greek society. To the extent, however, that political populism fails to deliver on its promises, the scales of political behavior will return to center-right and center-left approaches.
Political and Cultural Effects of Covid-19
The effects of the Covid-19 crisis on political systems, their functioning, and cultural attitudes are far-reaching and will evolve over the coming years.
When the crisis was in its early stages, the question arose as to whether its outburst raised issues of the superiority of certain social systems over others in major crises (China vs Western liberal democracies). The impression is that the centralized character of the Chinese state allows for more effective responses to major crises, such as the pandemic. Obviously, this position has a high degree of truth to it, but it soon became clear that experience from similar previous crises (SARS) played a major role in China. Moreover, many Western liberal democracies (Germany, Greece, etc.) have shown a satisfactory level of reaction.
Later, concerns turned to deeper political fields. That is, we wonder whether the liberal faith in market efficiency has been dealt a decisive blow, since everyone's eyes turned to the "lender of last resort" that is the state and the central banks.
In this sense, the political forces in liberal democracies that gather around the political center should be strengthened, since this area traditionally has much better ties with the state's regulatory functions. In fact, it seemed that populist political regimes (Italy, the United States, Great Britain) showed a characteristic inability to control the health crisis. However, non-populist regimes, such as France and Switzerland, were not able to react efficiently either.
It is certain that a pandemic is, within the logic of markets, impossible to manage. Moreover, the theoretical infrastructure based on the ideology of market supremacy allows for the existence of the state as a regulatory factor, in the form of Hobbes' Leviathan. Therefore, the question of "how much and where the state applies" is not freshly raised by Covid-19. It existed before and will continue to exist. However, what appears to be a new dimension is the emphasis on supporting public health systems. Let us not forget that deaths from Covid-19 are caused not by the virus alone but by the virus in combination with the absence of capable medical facilities.
So, what is it that separates the reactions of political systems to the crisis? It is too early to have definitive answers, and political science research will help us in the future.
However, it is now certain that the timely mobilization of experts and their good cooperation with politics is key to a satisfactory response.
The specialists include epidemiologists and doctors of all specialties, economists, communications specialists, etc. After all, it is a given that from the moment a crisis arrives, there is always an initial rallying around the leadership, and only later do political conflicts concerning the management of the crisis develop.
But the effects of the crisis are also fueling changes in social behavior that are significant and noteworthy. We know that this happens from all the great crises of the past (1929, 2008, etc.). Children who live through crisis conditions form more permanent attitudes and behaviors, especially during the shutdown of schools.
At the same time, it is certain that the increase in uncertainty levels has a profound effect on all aspects of economic activity with the main focus being on consumption, savings and investment.
But the question that remains is whether, and to what extent, confidence in the institutional framework in which the economy and society operate is weakened or strengthened.
If a society successfully copes with the crisis then it will emerge from it with a much better chance of implementing policies that have a social cost to the whole of society or to certain groups. This brings it closer to the possibility of implementing structural policies, a possibility that is usually seen among younger societies. If the need to tackle the pandemic also leads to very large horizontal programs improving overall demand, then again, conditions are in place for reform programs to be implemented, as it is well known that reform programs under austerity conditions have fewer chances of succeeding.
At the same time, societies that experience a successful management of such a crisis, with the help of the scientific community, seem to have increased confidence in the research and expertise of experts.
The opposite applies in societies that experience failed management of the crisis. They become much more vulnerable to the spread of random or malicious news, they create representations of injustice, racial and nationalist segregation, illusions of national isolation, etc.
In practice, this means that they support political forces that deny the cost of their proposals, particularly to future generations that are easy to ignore.
However, a crisis of this dimension may have a much deeper impact on society's attitudes, as summarized in the "insecurity hypothesis," Inglehart's "backlash hypothesis," and the "economic have-not hypothesis." We understand that societies particularly hit by the 2008 crisis had increased feelings of insecurity, which led social behaviors to be driven by citizens' pressure to improve their finances, leaving post-materialistic concerns behind.
The current crisis, with its deep, fast-moving, medium-lasting character, may not reach the point of being able to activate these behavioral issues on a more permanent basis, in contrast to the 2008 financial crisis, which was deep, slow-moving, and long-lasting.
But the health crisis of 2020 is likely to create wider economic turmoil by creating Covid-moment situations (by analogy with the Minsky moment) accompanied by financial imbalances. This is likely to activate the classic behaviors of the "insecurity hypothesis" and the "have-not hypothesis," if the economic crisis lasts much longer.
Additionally, if we live in a world where money has no cost (zero interest rates), then social demands are likely to lose their rationality and take on an anarchic formulation. This, however, creates an environment that is much more difficult to control, making it harder to implement policies. At the same time, the forces created by these behaviors keep people away from strongly anchored perceptions that are often likely to weigh on development.
However, cultural liquidity can have both positive and negative dimensions, so that some may act as a deterrent to economic development, while others promote it!

Notes

1. The risk of poverty after social transfers is defined as the percentage of people living in households whose total equivalent disposable income is less than 60% of the national median equivalent disposable income.
2. With the year 2017 being the income reference period.
3. That is, not including social benefits and pensions.
4. The World Social Mobility Index compares 82 economies, and is designed to provide policy makers with a way to identify areas for improving social mobility and the promotion of respective opportunities in economies, regardless of their development.
5. The "middle-class" is defined as the share of the population whose disposable income ranges from 60 to 200% of the median disposable income. Those households with an income below this limit are considered to be "lower-class" and those who earn more than 200% of the median income belong to the "upper-class".
6. Non-democracies are excluded, since there is no real meaning in comparing countries where democratic rights are systematically being limited or violated to consolidated democracies. The same goes for semi-authoritarian countries with regular, but only somewhat free, elections: North Macedonia, Albania, Bosnia and Herzegovina, and Moldova. Very few parties call themselves populist and even fewer brag about their authoritarian streak. It is also, given the scope of the material, not possible to scrutinize each and every party. Since the aim of the categorization is to reflect the deeply held ideological views of each party, the index relies heavily on secondary sources. To the extent that it has been possible, it follows typical and existing categorizations. Thus, a number of different sources have been used: scholarly literature on the European party system focusing on populist parties in general, as well as on particular parties; ideological labels from internet sources; and the expert study Chapel Hill Expert Survey (CHES), a quantitative summary of where parties belong on the left-to-right spectrum, combined with additional dimensions that serve to identify right-wing populists (but not left-wing populists) using, for instance, views on minority rights, immigration, and multiculturalism. In general, it is not as difficult to categorize political parties as one might expect. Despite some disagreement on labels, there is a rather wide consensus among scholars on where parties fit in; when in doubt, Timbro has tried to judge the very core of a party's ideology using both secondary and primary sources (such as official party platforms). The division between "authoritarian" and "extreme" depends on the specific view of the concept of democracy. Only explicitly anti-democratic parties have been categorised as anti-democratic. Parties embracing nazism, fascism, communism, trotskyism and maoism have been regarded as extreme. Parties classified as authoritarian are anti-liberal, but still democratic. | 2020-11-05T09:06:28.913Z | 2020-08-19T00:00:00.000 | {
"year": 2020,
"sha1": "1f4d4585b34723496dca43b2371beef06d1ec71f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "3b4a89b142b1e3b86d37fedc2ee3488a30ca8335",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": []
} |
157089425 | pes2o/s2orc | v3-fos-license | SMEs; Localisation vs Internationalisation, a Critical Review of Theoretical Frameworks and Business Strategy
This literature review is aimed at providing deeper insights into how the state of SMEs' internationalisation proceeds over time, as well as what propels them to pursue internationalisation. A synthetic study format, which involves critical review of various arguments from different researchers, will be employed. This paper will begin by discussing how SMEs expand. The factors, motives, and strategies that drive the internationalisation of SMEs will be evaluated. Theoretical frameworks of SME internationalisation will be critically reviewed, and the linkages between scholars' assertions about theoretical frameworks and the business strategy of firms will be discussed.
I. Introduction
Localisation, according to Stefaniak, Parker and Rees (2010), can be described as the process of organising businesses or industries such that their main activities happen in local areas rather than in international contexts. Internationalisation, on the other hand, can be defined as the process of increasing involvement in international operations across borders, changing both perspectives and positions (Welch and Luostarinen, 1993). Chetty and Campbell-Hunt (2003) emphasised that internationalisation has become increasingly crucial to the competitiveness of enterprises of all sizes. However, Rutihinda (2011) argued that most scholars have attempted to review literature on the internationalisation of large multinational firms and have placed little emphasis on small and medium-sized enterprises (SMEs), despite the fact that SMEs are becoming increasingly internationalised. The European Commission (2004) refers to SMEs as independent firms that are non-subsidiary, employ fewer than 250 persons, and have an annual turnover not exceeding 50 million Euros. Moreover, Ruzzier and Konecnik (2006) noted that the role played by SMEs in the economic development of countries in the world today, through innovation, employment, and wealth creation, cannot be overemphasised. Furthermore, Senik (2010) argued that within the mainstream literature on the internationalisation of SMEs there is little understanding of the roles and processes of SME internationalisation, as well as of how and why SMEs internationalise. This literature review attempts to answer the research question highlighted below. How does the state of SMEs' internationalisation proceed over time in comparison with localisation?
II. Purposes and processes of SMEs internationalisation
According to Knapp and Kronenberg (2013), it is important to understand the different stimuli that lie behind the internationalisation process of SMEs. This is because these driving forces influence the decisions of SMEs to pursue international expansion. In agreement with Knapp and Kronenberg (2013), Fletcher and Prashantham (2011) emphasised that to comprehend the internationalisation process, it is imperative to explore the causative elements that initiate the decision of SMEs to internationalise, as well as how the process is implemented and maintained. Oviatt and McDougall (1994) argued that the proliferation of technology and recent changes in economic and social conditions are the main influential factors that encourage the worldwide trend of accelerated internationalisation of SMEs. Additionally, Chetty and Campbell-Hunt (2000) found that changes in mediating variables such as resources, environment, and organisational strategies influence SMEs' internationalisation process. Contrary to Oviatt and McDougall (1994) and Chetty and Campbell-Hunt (2000), Wilson (2007) posits that the main purposes of SMEs' international expansion can be categorised based on two perspectives: primary motivators and secondary motivators. According to Wilson (2007), the primary motivators for SMEs' pursuit of international expansion are closely linked to maximising returns and minimising costs in production, purchasing, and sales. The secondary motivators, on the other hand, include strategic developments of the company, which can be achieved through gaining access to international competencies, technology, labour, and capital.
Regardless of the motivating factors expressed by Wilson (2007), Pinho (2007), in agreement with Freeman and Cavusgil (2007), noted that the decision and purpose of internationalisation of SMEs is solely dependent on the owners or managers of the business. Pinho (2007) found that SMEs, unlike large-scale enterprises (LSEs), have limited management hierarchies, and internationalisation is dependent on the characteristic features or experiences exhibited by the managers of the business. These experiences include demographics (age, education), international exposure, knowledge of international business (familiarity with culture and international business practices), and international transaction experience (Chetty and Campbell-Hunt, 2003; Pinho, 2007). Moreover, Fletcher and Prashantham (2011) argued that most studies conclude that factors such as strategic developments, profit maximisation, and foreign awareness are the general drivers underpinning SME internationalisation. However, Rammer and Schmiele (2008) stated that there is no single static driver for the internationalisation of SMEs, because SMEs have different resources, operate in different industries, and have different niche markets.
Another imperative aspect of SMEs and internationalisation is the strategies employed by SMEs for internationalisation. Rammer and Schmiele (2008) assert that SMEs use strategies such as exporting and importing activity, strategic alliances, mergers and acquisitions, and inter-firm networking and collaboration for international expansion. Conversely, Fletcher and Prashantham (2011) stated that SME internationalisation is facilitated by knowledge, and for SMEs to internationalise rapidly, learning is a vital strategy that must precede the decision to internationalise. According to Chetty and Campbell-Hunt (2003), most SMEs do not have adequate knowledge of foreign markets, which inhibits their growth internationally. Autio et al. (2000) believe that knowledge accumulation, learning, and the formation of business networks play a vital role in the international growth of SMEs.
III. Critical review of the theoretical frameworks of SMEs internationalisation
Many of the early literatures on internationalisation assert that the process involves a series of incremental stages in which SMEs gradually become involved in exporting and other forms of international business (Cavusgil, 1980; Petersen and Pedersen, 1997). Levie and Lichtenstein (2010) in their study consider internationalisation to be a gradual, sequential process involving different stages, with SMEs increasing their commitment to foreign markets as they advance through each stage. Similarly, Bell, Carrick and Young (2004), in their notion of the extant stage theory, argued that the underlying assumption of the internationalisation of SMEs is that they must be fully established in the domestic markets within which they operate before considering developing internationally. Furthermore, Chetty and Campbell-Hunt (2003) in their study found that there is a lot of scholastic enthusiasm about stage models of internationalisation. They found that the Uppsala process model is the form most frequently used to explain the internationalisation process of SMEs. The Uppsala process model emphasises that for firms to minimise risk and overcome uncertainty, internationalisation should be a step-by-step process in which SMEs learn and gain adequate market knowledge before committing resources to the foreign market (Welch and Luostarinen, 1988). However, despite continuous enthusiasm for the notion of incremental internationalisation, there are varying studies with conceptual criticisms. Chadee and Mattson (1998) argued that there is a lack of congruence between theory and practice in the stage models. They further emphasised that most SMEs, instead of internationalising gradually through incremental stages, enter foreign markets rapidly. On the contrary, Andersen and Kheam (1998) noted that the ability of SMEs to internationalise is crucially linked to an organisation's resources. In the resource-based view, firms can sustain and improve their competitiveness, as well as grow internationally, if they acquire resources that are unique, inimitable, and difficult to substitute (Andersen and Kheam, 1998).
A contrasting argument by Efrat and Shoham (2012) posits that the advancement of technology, the increasing role of niche markets, and the growth of global markets have facilitated mutually beneficial relationships between domestic and foreign partners and have triggered the "born global" phenomenon. According to Evers (2010), "born globals" are firms that are international from inception, and their influence on SMEs' internationalisation cannot be overemphasised. Moreover, Aspelund, Madsen and Moen (2007) stated that the "born global" prodigy poses a substantive challenge to the internationalisation stage theories and the notion of incremental internationalisation. This is because many SMEs no longer develop in incremental stages with respect to their international activities; they start international activities right from birth, entering foreign markets without experience (Aspelund, Madsen and Moen, 2007). Cuervo-Cazurra (2011), conversely, described "born global" as an internationalisation model that is associated with high risk and proposed that SMEs strategically select a non-sequential internationalisation. Non-sequential internationalisation involves SMEs acquiring adequate knowledge in their domestic markets before growing internationally. The main argument from Cuervo-Cazurra's (2011) empirical study is that heterogeneity in the knowledge base enhances SMEs' internationalisation strategy. He further argued that through the acquisition of extensive knowledge in the domestic market, SMEs can choose countries that are completely different from their home operations for international expansion, build a reputation, and grow substantially.
The transaction cost model is another paradigm that has received a lot of attention in the internationalisation field, with substantial empirical support (Figueira-de-Lemos, Johanson and Vahlne, 2011). Rugman and Chang Hoon (2008) argued that there are two pragmatic questions in the transaction cost theory.
According to Rugman and Chang Hoon (2008), upon deciding to enter a foreign market, an SME needs to decide whether it should do so through internalisation within its own boundaries (a subsidiary) or through some form of collaboration with an external partner (externalisation). Hence, the transaction cost model proposes that SMEs should begin developing internationally in regions where transaction costs are minimal. Nonetheless, Fernhaber and Li (2012) articulated the network model, which emphasises social exchange and inter-organisational and interpersonal relationships. In justifying their perspective, they argued that network relationships are crucial to the acquisition of resources and knowledge needed for international development. Additionally, the relationships of SMEs in the domestic market can be used as bridges to other networks in the foreign market (Osarenkhoe, 2009).
In view of all the postulations and contrasting findings from various researchers in the preceding, it can be argued that SMEs do not necessarily follow any consistent model for internationalisation. However, it is worth noting that the most likely foundation for the accelerated internationalisation of SMEs is the "born global" phenomenon. Anderson et al. (2012) believe that the nature and pace of internationalisation of SMEs is conditioned by product, industry, and environmental factors, all of which are influenced by the recent significant breakthroughs in globalisation and the increased liberalisation of markets. In the last decade, the "born global" phenomenon has exerted a strong influence on the internationalisation of small firms, principally because of three factors (Oviatt and McDougall, 1999). Firstly, advancing technology is important to the social development of all countries. Secondly, small emerging SMEs play a vital role in the discovery of technological innovations that are used worldwide. Finally, the pattern and pace of internationalisation of "born global" SMEs is rapid, with a global strategy that allows them to quickly engage in cross-border activities (Oviatt and McDougall, 1999). This helps drive job creation and economic growth through the acceleration of innovation, as well as the promotion of the full use of human, financial, and other resources (Wilson, 2007).

Melin (1992) found that little attention is paid to the inter-relationship between internationalisation theories and the strategy issues of small firms at both conceptual and practical levels. However, a more recent study by Malhotra and Hinings (2009) highlights the fact that the absence of linkages is particularly relevant to SMEs and can be explained by a number of factors. Initially, Malhotra and Hinings (2009), in their evocative work on internationalisation, characterised the international behaviour of SMEs as essentially unplanned and reactive, whereby the motivation to think or act strategically may only be brought about by a critical incident or a combination of several incidents occurring simultaneously within the organisation. Furthermore, Bell, Carrick and Young (2004) emphasised that most firms perceive strategy as a behaviour that emerges over time rather than having a strategic intent for internationalisation in their stated formal plans. Subsequently, Knapp and Kronenberg (2013) elucidated that the local and international development of SMEs are often viewed as diverse strategic solitudes rather than complementary strategies for growth. They assert that many of the early literatures, such as Bilkey and Tesar (1977) and Mintzberg (1991), regard international involvement as being of secondary importance to small firms, which is only considered once they have developed a strong footing in the domestic market. On the other hand, Chetty and Stangl (2010) argued that strategy making is about changing perspectives and positions, while internationalisation involves increasing operations across borders, which comprises both changed perspectives and changed positions. Consequently, Chetty and Stangl (2010) explained that the role of internationalisation theories in the overall business strategy of firms cannot be overstated. They stressed that strategic foundations (knowledge, skills, experience, and networks) are imperative to small and medium enterprises, whether they have identified planned or unplanned frameworks for internationalisation.
Furthermore, Kalinic and Forza (2012) emphasised that the absence of an explicit and formal strategy within a firm does not equate to a lack of strategic vision, whether or not it involves international expansion. This is because strategic planning activity becomes more formal and refined over the life cycle of a business (Bell, Carrick and Young, 2004). This assertion can be attributed to the non-sequential internationalisation theory, which emphasises that the enthusiasm to think and act internationally can occur progressively after SMEs have gained adequate knowledge of the domestic market (Cuervo-Cazurra, 2011). Besides, Tuppura et al. (2008) used the resource-based framework to analyse the international growth strategies of SMEs. They proposed that to understand resource use and strategic management practices, greater understanding is needed by entrepreneurs of the interrelationships between domestic and export activities within the context of firms' overall business strategies. Furthermore, the "born global" phenomenon is another notion that has been receiving increasing attention from various scholars and is important in this context too. Bell, McNaughton and Young (2001) argued that the "born global" phenomenon is not an internationalisation form per se; rather, it should be regarded as a strategy that increases SMEs' value through accelerated and dedicated internationalisation. According to Freeman et al. (2010), the phenomenon suggests that many firms no longer regard international expansion as a simple appendage to the domestic market, but as a business strategy with an international focus from the outset. Nevertheless, Hashai (2011) argued that the "born global" phenomenon makes more sense in the context of a large and developed economy. "Born global" firms begin international activities within 3 years of their establishment, and building sizable domestic markets is not part of the firm's strategy; instead, all resources are devoted to the international marketplace (Bell, McNaughton, Young & Crick, 2003). Additionally, Dib, Da Rocha and Da Silva (2010) assert that new firm creation is a complex phenomenon; information may be available as quickly in emerging economies as it is in developed economies. However, the paths to exploiting the available information differ widely. Yet Kontinen and Ojala (2012) in their study accentuated that, at the rate at which globalisation is proliferating, the opportunities for "born global" firms will grow in the next 10 to 15 years for all countries, particularly for emerging markets.
IV. Business strategy and SMEs' internationalisation
Delving from the preceding, knowledge accumulation about domestic and international markets appears to exert a significant influence on the initial business strategies, international orientation, and overall growth objectives of firms. The extant stage theory suggests that an in-depth understanding of the domestic and international behaviour of firms is required in all circumstances (Etemad, 2004). Similarly, the "born global" phenomenon expresses the knowledge base as one key characteristic that enhances competitive advantage and enables SMEs to offer value-added products and services. Also, it is noteworthy that internationalisation theories are models that strongly influence firms' business strategies, internationalisation patterns, and international focus (Ramón-Rodriguez, Moreno-Izquierdo and Perles-Ribes, 2011). Hashai (2011) argued that, in terms of the patterns and pace of internationalisation, "innovation oriented" and "traditional" firms respond differently, with the latter being less aggressive in their growth strategies and more cautious in internationalising, while the introduction of new process technologies often forces the "innovation oriented" or "born globals" to revise their strategic direction.
V. Conclusion
To conclude, the paper identified and reviewed the purposes and processes of SME internationalisation. Furthermore, a number of key issues were empirically explored. These issues include an evaluation of the driving forces that influence the decision of SMEs to pursue internationalisation, and a critical review of theoretical frameworks in addition to scholars' underlying assumptions on the internationalisation process of SMEs. The motives that influence the choice of strategies and the role of internationalisation in the overall business strategy of SMEs were discussed. Based on the critical review of literature from different researchers, there are some important differences in the approaches to internationalisation, internationalisation pace, and overall business strategies of SMEs. The findings from the examination of different literatures provided further insight into the fact that the internationalisation of SMEs is a process influenced by varying factors, because SMEs have different resources, operate in different industries, and have different niche markets. In addition, it is important to mention that the "born global" phenomenon has also been seen as an amalgamation of preceding theories, suggesting that the internationalisation literature is moving towards a unified theoretical framework.
Notwithstanding, most of the scholarship on SME internationalisation and business strategy provides an understanding based on theoretical perspectives from an academic point of view. There is an opportunity for further research evaluating the critical interrelationships between firm- and industry-specific influences to address the lacuna concerning the role internationalisation plays in the early growth of small firms, as well as the ways in which other strategic decisions interact with and impact upon the internationalisation decisions of these firms in the context of their overall business strategies and internationalisation approaches. | 2019-05-19T13:03:20.435Z | 2016-07-22T00:00:00.000 | {
"year": 2016,
"sha1": "142f811f0f920d7a0de0338c982e0aaa5a93afc6",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.4172/2169-026x.1000191",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "5ff1aece5ef44637ab5d4c069a4da7140cd02164",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Economics"
]
} |
233254900 | pes2o/s2orc | v3-fos-license | Motivators of Job Satisfaction Among Financial Managers and The Role of Gender
Copyright © 2020, Mark Mattia. This article is published under a Creative Commons BY-NC license. Permission is granted to copy and distribute this article for non-commercial purposes, in both printed and electronic formats. Despite extensive efforts to recruit females to the financial management sector, women in the industry remain a distinct minority. For instance, only 23% of Certified Financial Planners (CFPs) are women (CFP Board, 2019). Additionally, more executive females and senior managers (23%) exit the financial services labor market compared to males at the same levels (17%) (Mercer Study, 2016). This paper utilizes the 2017 National Survey of College Graduates to examine the motivational factors related to job satisfaction among males and females in the financial management field. Research findings reveal that job satisfaction factors that can be considered more intrinsic are more highly correlated with overall job satisfaction for females. Correlations of the factors with job satisfaction tend to be higher for females compared to their male peers. The female model of job satisfaction produced odds ratios that were the highest for two intrinsic type motivator variables (intellectual challenge and degree of independence). Among male financial managers, the odds ratios were the highest for two extrinsic motivators (salary and job security). These findings are promising in terms of relating Self-Determination Theory (SDT) and motivation among financial managers. Such findings provide the impetus for original survey research incorporating the Motivation at Work Scale to explore more deeply the connection of SDT and work motivation in the financial services areas. Determinants of job satisfaction among full-time employed financial managers include several intrinsically oriented satisfiers. The factor intellectual challenge of the work is a major driver of job satisfaction.
Introduction
Financial services sectors actively recruit female college graduates with finance degrees and those with financial services certifications. The pipeline of female recruits into the financial services segment is augmented by the rapid growth of financial planning departments at major colleges around the country (Chandler, 2016). However, the retention of females in the financial management field (particularly in the financial planning field) has proven elusive. Moreover, the representation of women in top financial positions of financial services companies pales in comparison to their male counterparts (Mercer Study, 2016). Part of understanding retention in the field is to understand the motivators for job satisfaction, affective commitment, and turnover intention. In this study, I investigate the first of these three possible outcome variables with respect to possible motivators overall and by gender.
The purpose of this study is twofold. The first is to understand the drivers of job satisfaction among males and females in the financial management (or closely related) field. The study participants have at least undergraduate degrees in finance. This inclusion is significant to business practices since positive job satisfaction is strongly linked to affective work commitment, which is strongly and inversely linked with intention to leave the position (see, for instance, Williams & Hazer, 1986; Meyer et al., 1993; Lambert & Hogan, 2009; Pepe, 2010). Secondly, I examine the relationship between job satisfaction and the satisfaction drivers with consideration of gender as a possible determinant.
Ultimately, Self-Determination Theory (SDT) may help explain the lack of retention of women in the financial management field. This theory holds that people can be motivated by intrinsic or autonomous motivation factors (work is done for the benefit of doing it) and extrinsic motivation factors (work is done for some separable outcome, such as tangible rewards). In SDT, the intrinsic factors are segmented into three primary areas: autonomy, relatedness, and competence. Work motivation studies that have incorporated aspects of SDT to model work satisfaction and job retention have been positive. This analysis is a first step toward understanding the role that parts of the theory may play in understanding the motivational factors contributing to job satisfaction within the financial management sector. In this study, high levels of correlation and significant regression coefficients are shown to exist for several variables (intrinsic and extrinsically oriented) with respect to overall job satisfaction.
Job Satisfaction and Self-Determination Theory
The predominant literature that addresses the first research problem is dedicated to studying work motivation and job satisfaction. Theories that have been developed to address the linkage between motivation and job satisfaction include: Dispositional Theory, Hierarchy of Needs Theory, Expectancy Theory, Job Characteristics Theory, and Self-Determination Theory. As an example, Herzberg et al. (1959) linked satisfaction with motivation and created a two-factor model (Mattia, 2020). Self-Determination Theory has been in the literature for more than four decades. Such literature details the concepts of autonomy, competence, and relatedness as fundamental to a person's well-being. Specifically, need satisfaction areas should be nurtured and addressed in life and at work. The primary differentiator for the theory is "the relative strength of autonomous motivation versus controlled motivation" (Gagné & Deci, 2005). Autonomous motivation is typically a derived aggregate of intrinsic motivation elements. For this study, we consider and test several intrinsic type motivators. These include: "job is an intellectual challenge," "degree of independence," and "societal contributions of the job." The idea that intellectual challenge may be considered intrinsic originates with Gagné and Deci (2005), who view the challenge of the job as an environmental aspect that acts as an antecedent of autonomous motivation. The relationship of SDT with overall well-being is explained in the seminal work of Ryan and Deci (2000). Deci et al. (2001) found positive relations in two countries between levels of intrinsic motivation and work engagement and job well-being. Baard et al. (2004) found Self-Determination Theory related to motivation in the workplace. Since about 2005, aspects of positive psychology, particularly SDT, have been utilized to address employee needs and states
of satisfaction (Gagné & Koestner, 2002; Meyer & Gagné, 2008). Intrinsic motivation has been found to be positively related to job satisfaction (Gillet et al., 2013; Güntert, 2015). In a meta-analytic study, Van den Broeck et al. (2016) found that the satisfaction of the basic needs of autonomy, competence, and relatedness is related to higher job satisfaction.
Research and confirmations of the theory in practice are mainly in the areas of education, sports, health care, and more general work areas, such as the factory-worker labor force. To my knowledge, applications of SDT or SDT-type motivators to the world of financial management or financial services management have not been published. Since the retention of females in the business sector remains problematic, we expect SDT to eventually provide insights into this dilemma. For this study, we examine and test three hypotheses, stated in the Research Hypotheses section below.
Methodology
The data for the study were taken from the National Survey of College Graduates (NSCG), which is conducted in wave fashion; the particular wave used for this study was completed in early 2017. The U.S. Census Bureau is responsible for the NSCG data collection. The NSCG "uses a trimodal data collection approach: online survey, mail questionnaire, and telephone interview" (https://census.gov/programs-surveys/nscg/about.html). Respondents were drawn from the American Community Survey. The respondents have at least a bachelor's degree earned prior to 2016, are under the age of 76, and live in the United States. The 2017 NSCG includes 124,000 sample cases, with the response unit being those who have attained at least a bachelor's degree. The data were collected under the direction of the National Science Foundation. According to the National Science Foundation, the NSCG items were pretested in focus groups and cognitive interviews to reduce measurement errors. Sampling error estimates associated with this survey were calculated using replicate weights, and the weighted response rate was 71% (with adjustments made to reduce non-response bias; https://www.nsf.gov/statistics/srvygrads/#sd). The variables and scales in the study have been utilized for numerous years and in many other research documents. The survey allows for studying the relationships between respondents' degree field and their current field of work. Key for the present analysis, the database also includes factors that can be analyzed as possible determinants of respondent job satisfaction and other factors that can be viewed as reasons for not working in the field of study. The survey is dedicated primarily to those in the engineering and science fields; however, many other segments of the work force (including business and the financial management subsegment of business) are included. Following Hunt (2016), we only consider respondents under 66 years old.
Several types of analyses have been conducted, including descriptive analysis, hypothesis testing, correlation analysis, and logistic regression modelling. We include the intrinsic and extrinsic type motivators among the possible factors driving job satisfaction, along with other possible factors, for those respondents employed in the financial management or closely related field. We analyze whether gender plays a role in job satisfaction among the respondents via correlational and logistic regression analyses. All of the regression analysis was conducted using Statistical Analysis Software (SAS), version SAS® 9.4 TS1M3. The t-test procedure for comparison of job satisfaction by gender was conducted in Jeffreys's Amazing Statistics Program (JASP), version 0.14.
The survey instrument is limited with respect to the number of intrinsic and extrinsic motivators it contains. Competency is a major element of Self-Determination Theory, and we are able to examine perceptions of competency through the attribute "job is an intellectual challenge." We are greatly limited in terms of inclusion of the SDT element, autonomy. For this study, we can utilize the job's degree of independence as a possible intrinsic factor. However, I acknowledge that autonomy and independence are not the same. Independence can imply a rejection of dependence, which acts contrarily to another important SDT need, relatedness. However, independence is the closest attribute available presently, and I consider it an intrinsic attribute, if not an SDT element.
The data set affords a fair number of extrinsic motivators to be examined, including satisfaction with salary, benefits, job security, and opportunities for advancement. The primary analyses for all the considerations in the study are inferential statistics through the use of ordinal logistic regression. Variables such as job location and level of responsibility are considered "other" variables, as are other respondent characteristics, such as having an MBA or certifications such as the Certified Financial Planner (CFP) or Chartered Financial Analyst (CFA). The variables are defined in Table 1.
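The models reported in this paper were fit in SAS; purely as an illustrative sketch, an analogous proportional-odds (ordered logit) model can be fit in Python with statsmodels. The column names below (satisfaction_overall, sat_salary, and so on) are hypothetical placeholders rather than actual NSCG field names.

```python
# Illustrative sketch only: the paper's models were fit in SAS, not Python.
# Column names below are hypothetical placeholders, not actual NSCG fields.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("nscg_financial_managers.csv")  # assumed extract of respondents

predictors = ["sat_salary", "sat_intellectual_challenge", "sat_independence",
              "sat_job_security", "sat_contribution_society", "sat_advancement"]

# Proportional-odds (ordered logit) model of overall job satisfaction.
model = OrderedModel(df["satisfaction_overall"], df[predictors], distr="logit")
result = model.fit(method="bfgs", disp=False)

print(result.summary())
print(np.exp(result.params[predictors]))  # odds ratios for the predictors
```

Exponentiating the fitted coefficients, as in the last line, is what produces odds ratios of the kind reported in Tables 8 and 10.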
The primary outcome measures are the significance levels of the factors (Chi Square values), the odds ratios and their corresponding confidence intervals, the model concordance percentages, and the comparison of the female and male models.
Research Hypotheses
H1) The intrinsic motivation factor "job is an intellectual challenge" will be associated positively with overall job satisfaction among male and female financial managers employed in financial management.
H2) The intrinsic motivation factor "degree of independence" will be associated positively with overall job satisfaction among male and female financial managers employed in financial management.

H3) Motivation factors driving job satisfaction among financial managers fully employed in financial management will be the same for males and females.
Gender, Job Satisfaction, and The Financial Management Profession
Conclusions from studies involving the role of gender in work satisfaction have pointed toward women having higher levels. Studies by Sousa-Poza and Sousa-Poza (2000) and Sloane and Williams (2000) have found that women tend to have higher levels of job satisfaction. Bender et al. (2005) also report that women have higher job satisfaction than men. Sousa-Poza and Sousa-Poza (2000) also found that the determinants of job satisfaction (from a work-role input perspective) did not differ when comparing males and females, with one exception: females appear to value relationships with management more, while males value compensation more. Thompson and Prottas (2006) find that gender correlated with stress and job satisfaction, with women reporting higher levels of both. In a more recent study, Stefko et al. (2017) claim that women and men differ in terms of the need to have supervisor recognition (a form of extrinsic motivation). In the same study, it was found that gender differences occurred for demotivating job aspects in only two of twenty variables considered: having unjust conduct from a superior and having an unstable job; in both contexts, women were more demotivated (Stefko et al., 2017). Among other variables that could be considered extrinsic, no gender differences were discovered when considering determinants of job satisfaction in the workplace. Also, Vansteenkiste et al. (2007) found no differences in job satisfaction due to gender in a two-sample differences test. A review of the literature reveals little direct work concerning the combination of women, job satisfaction, retention rates in financial management, and reasons for well-being on the job. In a mixed-methods approach, Neck (2015) suggests that women leave their financial positions "due to a combination of frustration, change and choice" (pp. 533). While her conclusions are based on analyses from her open-ended questions and quantitative survey, she does not attempt to explain the results fully within any theoretical framework. Hunt's (2016) study presents evidence that women are more likely to leave fields that have a predominance of male employees. She also finds that women leave the engineering field at a higher rate than many other fields of employment.
Figure 1: Schematic Diagram of Independent and Dependent Variables
Conceptual Framework
The conceptual framework (Fig. 1) describes the variables and the process for the variables' possible interrelationship to job satisfaction. The definitions of the variables included in addressing the research questions are provided in Table 1.
Findings
Tables 2 and 3 detail the satisfaction scores for males and females for each of the possible determinants and for the overall satisfaction level. Respondents ranked satisfaction with independence the highest while satisfaction with opportunities for advancement ranked the lowest. Overall job satisfaction scores for males and females did not differ significantly (See Table 4).
Table 1 (fragment). Variable: Job Relation to Highest Degree. Definition: self-assessment by the respondent as to whether they are working in the field of their highest degree.

Table 5 provides the mean satisfaction scores (M), the correlations, and partial correlations, as well as the overall job satisfaction correlations for female financial managers. Among financial manager respondents working in the financial field or a field closely related to it, we found relatively high to moderately high significant degrees of correlation in the intrinsic motivation areas for females. The correlations for having positions that are intellectually challenging, provide job independence, and contribute to society are all .52 or higher, with intellectual challenge having the highest (.68) degree of correlation. An extrinsic motivator, opportunity for advancement, also has a moderately high correlation (.57). Satisfaction with level of responsibility, while not classified here as intrinsic or extrinsic, had the second highest correlation (.64). These levels contrast somewhat with the correlations for males, where the correlations are in the .46 to .51 range for the more intrinsic motivators (see Table 6). Moreover, the highest correlation among males is opportunity for advancement (.53), closely followed by the need for the job to provide an intellectual challenge (.51). As an added analysis, a comparison of the intrinsic motivator correlations across job functions found them to be more highly correlated with overall job satisfaction in the financial management fields than in the physical sciences, social sciences, or the aggregate of all other fields (see Appendix A, Table A1).
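For readers unfamiliar with the partial correlations reported in Table 5: they measure the association between one factor and overall satisfaction after linearly removing the influence of the other factors. A minimal sketch with synthetic data (not the NSCG data; all arrays below are made up for illustration):

```python
# Generic partial-correlation sketch: correlate the residuals of x and y
# after each has been regressed on the control variables Z.
import numpy as np

def partial_corr(x, y, Z):
    Z1 = np.column_stack([np.ones(len(x)), Z])           # add intercept column
    rx = x - Z1 @ np.linalg.lstsq(Z1, x, rcond=None)[0]  # residual of x given Z
    ry = y - Z1 @ np.linalg.lstsq(Z1, y, rcond=None)[0]  # residual of y given Z
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 3))        # stand-ins for the other satisfaction factors
x = Z @ np.array([0.5, 0.2, 0.1]) + rng.normal(size=200)           # one factor
y = 0.6 * x + Z @ np.array([0.3, 0.1, 0.0]) + rng.normal(size=200)  # overall satisfaction
print(partial_corr(x, y, Z))
```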
Key Findings for Ordinal Logistic Regression Analysis
For total respondents, the likelihood ratio chi-square of 497.8, with a p-value of < 0.0001, tells us that our model is statistically significant as a whole, compared to a model with no predictors. The slope estimates and significance levels are provided in Table 7 for the driving factors. Based on the estimates from the logistic regression procedure, we conclude that seven significant factors drive overall job satisfaction among financial managers: salary, job's intellectual challenge, job's degree of independence, job security, job's contribution to society, opportunity for advancement, and job location. Several factors (benefits, having advanced degrees or financial certifications) are not deemed to be significant determinants; therefore, I excluded them from the final analysis. We include the gender dummy in the final model to show that, when controlling for the other variables, gender did not play a significant role in determining overall job satisfaction. I later develop a separate female model to investigate
the primary determinants of satisfaction among that segment. Based on the model coefficients, for a one-unit increase in salary satisfaction (i.e., going from 1 to 2), we expect a 1.12 increase in the log odds of being in a higher level of satisfaction, given that all the other variables in the model are held constant. For a one-unit increase in the job being an intellectual challenge, we expect a .92 increase in the log odds of being in a higher level of job satisfaction, given that all the other variables in the model are held constant. Likewise, we find the degree of independence the job accords is also important and has the third highest coefficient (.81).

Examining the odds ratio estimates for the total respondent model (Table 8), we conclude that for a one-unit increase in salary (i.e., going from 1 to 2), the odds of a high level of satisfaction versus the combined middle and low categories are 3.07 times greater, given that all the other variables in the model are held constant. Likewise, the odds of the combined middle and high categories versus low are 3.07 times greater, given that all the other variables in the model are held constant. For a one-unit increase in the job providing an intellectual challenge, the odds of a high level of job satisfaction versus the combined middle and low job satisfaction categories are 2.50 times greater, assuming all other variables in the model are constant. For the total respondent sample, the largest odds ratios are for the variables salary, job's intellectual challenge, and the job's level of independence afforded. The percent concordant for a model considering only significant factors and gender is 91.4%.

Addressing female financial managers separately, the final model yields a likelihood ratio chi-square of 158.4 with a p-value of 0.0001. The slope estimates and significance levels are provided in Table 9 for the driving factors among female financial managers. Using an ordered logistic regression procedure, we conclude that five significant factors drive overall job satisfaction among female financial managers: job's intellectual challenge, job's degree of independence, salary, job security, and job's contribution to society. Four factors (job location, job's opportunity for advancement, job's level of responsibility, and benefits) are not deemed to be significant determinants, so we exclude them from the final analysis. We also fail to find significance for the variables relating to having advanced degrees or financial certifications (see Table 9 for details). Examining the odds ratio estimates for the female model (Table 10), we conclude that for a one-unit increase in salary (i.e., going from 1 to 2), the odds of a high level of satisfaction versus the combined middle and low categories are 2.49 times greater, given that all the other variables in the model are held constant. Likewise, the odds of the combined middle and high categories versus low are 2.49 times greater, given that all the other variables in the model are held constant. For a one-unit increase in the job providing an intellectual challenge, the odds of a high level of job satisfaction versus the combined middle and low job satisfaction categories are 6.41 times greater, assuming all other variables in the model are constant. For females, the largest odds ratios are for the variables job's intellectual challenge, level of independence, and salary.
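To make the link between the slope estimates and the odds ratios explicit: in an ordered logit model each odds ratio is the exponential of the corresponding coefficient, so the figures quoted above can be checked directly (small discrepancies reflect rounding of the reported coefficients):

\[
\mathrm{OR} = e^{\beta}: \qquad e^{1.12} \approx 3.06 \ (\text{reported } 3.07), \qquad e^{0.92} \approx 2.51 \ (\text{reported } 2.50), \qquad e^{0.81} \approx 2.25 .
\]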
Note: the standard interpretation of ordered logit coefficients is that, for a one-unit increase in the predictor, the dependent variable level is expected to change by its respective regression coefficient on the ordered logit scale while the other variables in the model are held constant.

The female final model includes two variables
(salary and job security) that are considered extrinsic motivators and three (job's intellectual challenge, job's level of independence, and job's contribution to society) that may be considered intrinsic motivators. Developing a model for male financial managers and comparing that model to the female model reveals that two additional variables are significant among males: opportunity for advancement and job location; both were deemed significant in the overall model (Model 1). Table 11 summarizes the comparison of the female and male models via their respective odds ratio estimates. The models share five similar drivers, and neither model includes having advanced degrees or certifications as determinants of job satisfaction. Among females, the percent concordant for a model considering only significant factors is 90.9%. Similarly, the concordance percentage for the male model is 91.3%.
Discussion
My primary objective was to delineate the drivers of job satisfaction among male and female finance-degreed respondents working in financial management or a related position. Factors associated with overall job satisfaction tend to have higher correlations with intrinsic type motivators as opposed to the extrinsic motivators (although opportunity for advancement was a relatively high correlate). Despite an anticipated difference between genders, the finding that the highest correlate for males was the job's opportunity for advancement was surprising. A finding worthy of further exploration was that correlations tend to be higher among females in the financial sector for factors deemed intrinsic motivators. The finding that there was no difference in overall job satisfaction between female and male financial managers is consistent with Vansteenkiste et al. (2007), but at odds with the studies by Bender et al. (2005), Sousa-Poza and Sousa-Poza (2000), and Sloane and Williams (2000). The results from the multiple ordinal logistic regression analyses somewhat mirror those from the correlation analyses. However, the extrinsic reward variable salary yielded the highest point estimate coefficient. This result was driven by the male segment of the sample. The next two highest coefficients, intellectual challenge and independence, are considered more intrinsic motivators. Similar results regarding the intrinsic type motivators were found by Gillet et al. (2013). The results regarding the effects of the extrinsic motivators salary and job security on job satisfaction are consistent with Cho and Perry (2012), Kuvaas (2006), Linz and Semykina (2012), and O'Driscoll and Randall (1999). Contrarily, Terera and Ngirande (2014) conclude that rewards correlated positively with job retention but not with work satisfaction. Among female financial managers, and controlling for the other factors, we found the variable job's intellectual challenge (a possible proxy for the construct, competence) to be the most significant determinant of higher overall job satisfaction. This result is important, as many women may not be fully engaged in the field because they are not in jobs providing this challenge. They may be pushed to less challenging jobs that are only somewhat related to finance. We also conclude that the level of job independence is the next highest driver of satisfaction. These variables are followed in importance by two extrinsic variables: salary and job security. The variable level of responsibility had a high correlation with job satisfaction for females and a moderate correlation for males, but this factor was not statistically significant in the regression analysis once the other variables were controlled. Lastly, the variable contribution to society, a variable that can also be viewed as intrinsic, was also deemed significant. Therefore, the first hypothesis (H1) is confirmed: the intrinsic motivation factor intellectual challenge is associated positively with overall job satisfaction among male and female financial managers employed in financial management. Likewise, the second hypothesis (H2) is confirmed.
The intrinsic motivation factor degree of independence is associated positively with overall job satisfaction among male and female financial managers employed in financial management. The third hypothesis (H3) stated that the determinants of job satisfaction level would be the same for males and females. Here, we find that correlates among females differ in magnitude from those of their male counterparts. However, our gender dummy variable in Model 1 was not a significant predictor of job satisfaction. Overall, we found minor differences in the drivers of female satisfaction factors versus the male satisfaction drivers. Males and females share the aforementioned five variables as determinants. The male model included one additional extrinsic motivator, job's opportunity for advancement, and another variable, job location. We therefore reject the third hypothesis and conclude that male and female financial managers have differing drivers of job satisfaction. (Note: NS = not a significant factor; n = 170 for females and n = 593 for males.) I am surprised by the negative results (in both models) for the variables involving more advanced education. I believed that having a financial certification or advanced degree would serve as another proxy for competence and influence the overall job satisfaction of the managers. I intend to test this aspect among financial advisors in the next study.
Limitations and Future Research
The research is subject to several limitations. Due to the database question/answer limitations, we are unable to completely incorporate a comprehensive list of intrinsic and extrinsic motivators that may serve to drive female financial managers' work satisfaction. That is, the factors that contribute to autonomy, relatedness, and competence are not addressed in the government's educational database to the extent we would prefer and to the extent the topic demands. This limitation also prohibits us from aggregating the intrinsic and extrinsic motivators as factors and conducting the appropriate EFA and Cronbach's alpha analyses. The areas for future research are apparent and important if we are to create the interventions necessary in the financial management field that can create more positive female retention rates. Those areas are:

1. Refine the analysis to the financial services sector, since the women retention issue has been reported to be quite pronounced among female financial planners.

2. Develop the independent variables and advance knowledge in this field by utilizing the full Motivation at Work Scales (Gagné et al., 2015) to help explain the female exodus from financial services. We will ascertain the association between SDT-related variables and outcome variables, such as job satisfaction, affective work commitment, and turnover intention in the financial services domain.

3. Utilize more complex forms of extrinsic motivators (such as integrated motivation) that were not available from the present survey data. These forms can be addressed in a custom survey of the financial planner population.

4. Consider the use of a structural equations model approach.

We also consider the development of a separate study that will be longitudinal in nature. This study would help determine whether the current female job participation rate among those educated in finance has been increasing or remaining stagnant over the recent past. The government dataset we use in this study is expected to apply to this future study.
Conclusions
When considering the possible drivers of job satisfaction among female financial managers in a job closely or somewhat related to their degree, we find relatively high correlations with intrinsic type motivators (e.g., the need for independence, the need to be intellectually challenged, and the need to contribute to society), as opposed to more reward-oriented motivators, such as benefits, salary, and job security. Likewise, when conducting an ordinal logistic regression analysis, we find the intrinsic motivator (being intellectually challenged) to be the most significant determinant of job satisfaction among females. The models of job satisfaction for males and females separately share five variables, three of which can be considered intrinsic type motivators. The regression model for male financial managers includes two additional factors, which help to explain the levels of overall job satisfaction. This study advances knowledge through the development of factors that drive job satisfaction among financial management professionals. It also serves to illustrate the role SDT might play in determining the factors driving overall work satisfaction in the financial management sector.
Research has shown that elements of SDT apply to driving job retention and well-being in other professional fields. Therefore, I take a step toward addressing a gap in the literature through application of those elements of SDT that were available in the public U.S. database utilized in this study. The implications for practice relate to the methods by which financial managers are managed and the types of incentives utilized to foster success for the firm. Without the proper incentives and management techniques being applied in the financial sectors, large sums of money dedicated to attracting females (for instance) to financial positions may be misapplied. Major financial firms have stated publicly their interest in promoting females in their organizations; however, despite these goals, females leave the sector at disproportionately higher rates. It may be that the job satisfaction programs offered in the financial services sector are not consistent with female retention objectives. Our ultimate goal is to create an intervention in the job sector that will lead to improvements in the retention rate of financially trained female executives. This research is the first step of three that we have planned to that end. | 2021-04-16T08:21:46.784Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "13f0cd0a40ca6fb74e19685a72658976ec2808c8",
"oa_license": null,
"oa_url": "http://pubs.mumabusinessreview.org/2020/MBR-04-22-221-234-Mattia-GenderJobSatisfaction.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "13f0cd0a40ca6fb74e19685a72658976ec2808c8",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Psychology"
]
} |
232087479 | pes2o/s2orc | v3-fos-license | Migrated Intravesical Intrauterine Contraceptive Devices: A Case Series and a Suggested Algorithm for Management
Introduction Intrauterine contraceptive devices (IUCD) are a commonly used, reversible, contraceptive method. Complications from insertion rarely include migration into the bladder. We report on two cases of intravesically migrated IUCD and present an algorithm for management based on recently published data. Materials and Methods The case records of two patients who underwent surgical procedures for an IUCD migrated into the bladder were reviewed. A PubMed search was performed to identify similar studies. A total of 25 papers met the criteria for inclusion. Results Both cases were managed with laparotomy and partial cystectomy. A review of the literature suggests that the number of reported cases of IUCD migration is rising, with most cases having been reported in the last decade. Bladder calculus developing over the migrated IUCD is the most common presentation. Most cases have been managed using endourological techniques. A small number of cases have required open vesicolithotomy or laparoscopic surgery. Rarely, laparotomy has been required. Discussion IUCD migration into the bladder remains rare; however, the number of reported cases has recently risen. A thorough physical examination and radiological evaluation are warranted. Management is surgical in all cases. Most cases can be managed with endourological techniques. A treatment algorithm has been suggested in this paper based on recent data. Conclusion With the rising use of contraception worldwide, the incidence of IUCD migration is possibly going to increase. Treating doctors need to be aware of the possible complications that may arise from a migrated IUCD, including bladder calculi.
Introduction
Intrauterine contraceptive devices (IUCD) are a popular method of contraception used by approximately 14.3% of women worldwide [1]. Insertion of an IUCD carries a risk of perforation in 1/1000 cases [1]. Migration of an IUCD usually occurs after a uterine perforation, which may occur at the time of implanting the device (primary perforation) or many years later, due to infection or device-related inflammation (secondary perforation) [2]. Migration into the peritoneal space is the most commonly reported [3,4]. Migration of the IUCD into the urinary tract is rare and has been reported in only a few dozen cases in the published literature as of 2020. When an IUCD migrates into the bladder, it may cause a local reaction and deposition of calcium once it enters the lumen. This may progress and form a calculus over many years [5].
In this paper, we examine two cases of spontaneous IUCD migration into the bladder and then review the literature on the management of this rare complication.
The case records of two patients who had bladder involvement of a migrated IUCD were studied. For the literature review, searches for the terms 'Intrauterine contraceptive device migration,' 'Urinary tract complications of intrauterine contraceptive devices,' 'Intrauterine contraceptive device migration in urinary bladder,' 'Intravesical intrauterine device migration,' and 'Urinary bladder intrauterine contraceptive device' were performed in PubMed. Relevant studies in the English language were reviewed and data were collected.
Case 1
This is a case of a 40-year-old female patient who was referred with a lost IUCD. On presentation, there was no fever, no urinary symptoms, and no menstrual irregularities. A physical examination was normal. An X-ray revealed the IUCD (Copper-T) to be displaced outside the area of the uterus but within the pelvis. Ultrasonography (USG) confirmed that the uterine cavity was empty and showed an echogenic shadow suggestive of an IUCD on the right side of the uterus. The kidneys, ureters, and bladder were normal. The patient was taken up for laparoscopic removal of the IUCD. During surgery, the IUCD was seen in the right parametrium, densely covered with adhesions. On mobilization, the IUCD was grasped but could not be freed; a limb was found embedded in the right lateral bladder wall. An urgent urological consult was called for. The procedure was converted into a laparotomy and the bladder mobilized on the right side. The IUCD limb was then isolated and the bladder wall marked with electrocautery; the incision was deepened using sharp dissection until the mucosa. The bladder wall segment and the IUCD were removed in toto, and the bladder wall was repaired in three layers with absorbable sutures. A Foley catheter along with an intraperitoneal drain was left in situ. The patient made an uneventful recovery.
Case 2
A 35-year-old, para 3 patient presented with dysuria, frequency, and urgency. On history, she revealed that she had had an IUCD inserted around a year prior to presentation but had forgotten about it. At the time of presentation, she was amenorrhoeic for three months, and a pregnancy test was positive. USG showed a bladder calculus of about 1 cm in size. She was taken up for cystoscopic removal of the calculus. However, the calculus was found to be adherent to the bladder wall and, on gentle traction, revealed a limb suggestive of an IUCD (Figure 1). The patient was advised surgery, but as there was a risk to the fetus, she elected to postpone the surgery until after the delivery of the child. After completion of term, she underwent a cesarean section delivery but again elected to postpone the IUCD surgery until the baby was older. She came for follow-up six months after the cesarean section and was reinvestigated at that time. A contrast-enhanced CT scan was performed to rule out any other adjacent organ involvement (Figure 2). An elective exploratory laparotomy was performed, in which intra-operative findings revealed an anteriorly displaced IUCD, densely adherent to the anterior bladder wall (Figure 3). A partial cystectomy was performed with excision of the adjacent bladder wall, and the IUCD was removed in toto (Figure 4). A bladder repair was performed. An intraperitoneal drain, a supra-pubic catheter, and a Foley catheter were left in situ. The patient made an uneventful recovery and was symptom-free one year after surgery.
Discussion
IUCD is a widely accepted method of contraception. It is easily inserted, is reversible by removal, and causes few side effects [1]. The common side effects are abdominal pain and heavy menstrual bleeding, especially in the first few months after insertion. Rarely, expulsion, menorrhagia, dysmenorrhoea, pregnancy, and abortion may occur.
IUCDs can perforate the uterus and then migrate into the pelvic or abdominal spaces. IUCD perforations have been divided into four types according to the anatomical spaces affected. The first compartment is the uterine cavity (type 1), the second is when the IUCD is confined to the myometrium (type 2), and the third compartment is when the peritoneal cavity is breached (type 3). When an IUCD penetrates the surrounding viscera, the perforation is type 4 [6].
A uterine perforation may be primary or secondary. A primary perforation occurs at the time of insertion, whereas a secondary perforation occurs after a delay, probably due to pressure necrosis and inflammation of the uterine wall [2,28,29]. IUCD migration may follow uterine perforation. It is a rare complication, occurring in 1.2 to 1.6 per 1,000 insertions [8]. Mechanisms that explain the migration of an IUCD include iatrogenic perforation, spontaneous uterine contractions, involuntary bladder contraction, gut peristalsis, and peritoneal fluid movement, which together contribute to the migration and implantation of the IUCD in other adjacent organs. IUCDs have most commonly been found in the pouch of Douglas. They have also been found in the caecum, the bladder, and adjacent to the ureter. Kassab reported 165 perforations with the IUCD located in various organs [3].
IUCD migration into the bladder is a rare complication and most commonly occurs between two and 10 years after implantation. In the first case of this series, the migration was detected three years after insertion, and in the second case, migration was detected after 12 months.
After being in the bladder for a long time, encrustations form over the limbs of the IUCD which can then form a vesical calculus [5]. Rarely, the IUCD can embed in the wall of the bladder and be difficult to remove, necessitating a cystotomy or a partial cystectomy [9].
The initial approach to surgery in the first case was laparoscopic. However, due to dense adhesions between the IUCD and the surrounding tissue, including the bladder, conversion to laparotomy was required. A partial cystectomy was needed in this patient. Shin et al. demonstrated the use of a laparoscopic approach alone to manage an embedded IUCD [10]. Sharma et al. performed a cystoscopic retrieval of an intravesical IUCD [11]. Sano et al. have described a case in which laser lithotripsy was used to remove a bladder calculus under general anaesthesia [12]. In the second case of our series, this was attempted, but the limbs of the IUCD were embedded in the wall of the bladder and covered with a calculus, and the procedure could not be safely performed. This necessitated a thorough evaluation and subsequent laparotomy.
Of the twenty-six papers cited in Table 2, 18 (69.2%) have been published in the last decade alone. A growing world population, along with an increase in the use of contraception worldwide (as evidenced by falling birth rates), translates to a potential increase in the incidence of IUCD migration in the coming years. Doctors treating women with potential complications of IUCD insertion need to be aware of this fact.
Based on the published data, an algorithm is suggested for the management of patients with migrated IUCD's that may involve the urinary tract ( Figure 5).
Conclusions
Most IUCD migrations occur at the time of insertion, and proper training of healthcare workers is imperative to prevent complications. Although rare, IUCD migration is a complication with high morbidity. IUCD migration into the bladder is a debilitating condition for the patient and warrants a multi-disciplinary approach with the use of imaging techniques and cystoscopy to locate the IUCD. Proper patient preparation is vital to a successful outcome. With an increasing number of women worldwide adopting some form of contraception, including the IUCD, the incidence of migrated IUCDs is going to rise in the future, and gynecologists, surgeons, and urologists need to be aware of this complication.
Additional Information Disclosures
Human subjects: Consent was obtained by all participants in this study. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work.
Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 2021-03-03T05:23:47.275Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "9bdd7b88f7ec24e66723a61902cd36f50b76ccbc",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/49932-migrated-intravesical-intrauterine-contraceptive-devices-a-case-series-and-a-suggested-algorithm-for-management.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9bdd7b88f7ec24e66723a61902cd36f50b76ccbc",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
72775828 | pes2o/s2orc | v3-fos-license | On the Dispersive Ordering and Applications
It is to be noticed that the dispersion function in (1) is also known as the absolute deviation function of X at a point u ∈ (−∞, +∞) (see [1] for more details). Up to the present, some results related to dispersion functions in terms of (1) have been investigated by Muñoz-Perez and Sanchez-Gomez [2, 3], Pham-Gia and Hung [1], and Hung and Son [4]. Also note that dispersion functions and stochastic orderings have been considered in various papers, and they are effective tools in many areas of probability and statistics. Such areas include reliability theory, queuing theory, survival analysis, biology, economics, insurance, actuarial science, operations research, and management science (we refer to [5, 6] for a complete treatment of the problem). It is worth pointing out that the dispersion function
Introduction
Let $X$ be a random variable defined on a probability space $(\Omega, \mathcal{A}, P)$, with distribution $F_X$ and mean $E(X)$. A random variable $X$ is said to belong to the class $L^1$ if its mean is finite. From now on, $W_X(u)$ denotes the dispersion function of an $L^1$-random variable $X$ at a point $u \in (-\infty, +\infty)$, defined as follows:
$$W_X(u) := E|X - u|. \qquad (1)$$
It is to be noticed that the dispersion function in (1) is also known as the absolute deviation function of $X$ at a point $u \in (-\infty, +\infty)$ (see [1] for more details). Up to the present, some results related to dispersion functions in terms of (1) have been investigated by Muñoz-Perez and Sanchez-Gomez [2, 3], Pham-Gia and Hung [1], and Hung and Son [4]. Also note that dispersion functions and stochastic orderings have been considered in various papers, and they are effective tools in many areas of probability and statistics. Such areas include reliability theory, queuing theory, survival analysis, biology, economics, insurance, actuarial science, operations research, and management science (we refer to [5, 6] for a complete treatment of the problem).
It is worth pointing out that the dispersion function $W_X(u)$ of $X$ at a point $u \in (-\infty, +\infty)$ has attracted much attention as a dispersion measure of $X$ in $L^1$-norm, and it can be considered as a generalization of the mean absolute deviation $\delta_1(X) := E|X - E(X)|$ and the median absolute deviation $\delta_2(X) := E|X - \mathrm{Me}(X)|$ of a random variable $X$ when $E(X)$ and $\mathrm{Me}(X)$ exist and are unique; here and throughout this paper $E(X)$ and $\mathrm{Me}(X)$ denote the mean and median of the random variable $X$, respectively. The mean absolute deviation and median absolute deviation, which play particular roles in applied statistics and economics, have been investigated by Pham-Gia et al. (we refer the reader to [1, 7-9] for deeper discussions).
The dispersion function, as stated above, is convex and almost everywhere differentiable, and its derivative has at most a countable number of discontinuity points (see for instance [2, 3]). Lately, some interesting results concerning the connections between the weak convergence of a sequence of $L^1$-random variables and the convergence of their corresponding dispersion functions have been investigated by Hung and Son (see [4] for more details). Thus, the dispersion function $W_X(u)$ of a random variable $X$ at a point $u \in (-\infty, +\infty)$ has attracted much attention as a dispersion measure of a random variable in various problems related to limit theorems of probability theory, applied statistics, and economics.
The main aim of this paper is to present some results related to the dispersive ordering of probability distributions via dispersion functions of $L^1$-random variables. The results obtained are extensions of the authors' studies in [4], and they show a new approach to the laws of large numbers in $L^1$-norm.
The organization of this paper is as follows. In Section 2 we recall the main properties of the dispersion functions that will play fundamental roles in the study of the next section. For more details about the proofs of the results in this section, we refer the reader to Muñoz-Perez and Sanchez-Gomez [2, 3] and Hung and Son [4]. The last section gives some main results on the dispersive ordering of probability distributions via dispersion functions, and applications.
Preliminary Results
Some properties of the dispersion functions have been investigated so far; they can be listed as follows. For more details we refer the reader to [2-4]. Throughout this paper the symbols $\xrightarrow{d}$, $\xrightarrow{P}$, and $\xrightarrow{L^1}$ stand for convergence in distribution, convergence in probability, and convergence in $L^1$-norm, respectively. For the convenience of the reader we repeat the relevant material from [2, 3] without proofs. Specifically, we have, for every $X \in L^1$, the following.
(1) The distribution of $X$ and the derivative $W'_X(u)$ of the dispersion function are related as follows:
$$W'_X(u) = 2F_X(u) - 1, \quad u \in C(F_X), \qquad (2)$$
where $C(F_X)$ is the set of continuity points of $F_X$.

(2) The dispersion function $W_X(u)$ of $X$ at a point $u \in (-\infty, +\infty)$ is the $L^1$-distance between $F_X$ and $\delta_u$:
$$W_X(u) = \int_{-\infty}^{+\infty} \left| F_X(t) - \delta_u(t) \right| dt, \qquad (3)$$
where $\delta_u$ is the distribution function of the degenerate variable at the point $u$.

(3) Equivalent formulae for (1) are
$$W_X(u) = \int_{-\infty}^{u} F_X(t)\, dt + \int_{u}^{+\infty} \left(1 - F_X(t)\right) dt \qquad (4)$$
or
$$W_X(u) = E(X) - u + 2\int_{-\infty}^{u} F_X(t)\, dt. \qquad (5)$$

On the other hand, Hung and Son in [4] have established the following connections between the weak convergence of random variables and the convergence of the dispersion functions.
(3) The equivalent formulae of (1) are or On the other hand, Hung and Son in [4] have established the connections of the weak convergence of the random variables with the convergence of the dispersion functions as follows.(4) Let { , ≥ 1} be a sequence of L 1 -random variables.If there exists > 1, such that sup and if then () → (), for all ∈ (−∞, +∞).
(c) Let $H_0$ be the set of all points $u$ such that $W'_X(u)$ exists. Then, for $u \in H_0$,
$$\lim_{n \to \infty} W'_{X_n}(u) = W'_X(u)$$
(see [4] for more details).
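As an illustrative numerical check of definition (1) and the derivative formula (2) (not part of the original paper), one can estimate the empirical dispersion function of a simulated sample and compare its slope with $2F_X(u) - 1$:

```python
# Monte Carlo illustration: W_X(u) = E|X - u| is convex, and its slope
# at a continuity point u is approximately 2*F_X(u) - 1.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.normal(loc=1.0, scale=2.0, size=100_000)   # sample from X ~ N(1, 2^2)

def W(u):
    return np.mean(np.abs(x - u))                  # empirical dispersion function

u, h = 0.5, 1e-3
slope = (W(u + h) - W(u - h)) / (2 * h)            # central-difference derivative
print(slope, 2 * stats.norm.cdf(u, loc=1.0, scale=2.0) - 1)  # should be close
```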
Main Results
In this section, all the random variables and probability distribution functions mentioned are assumed to belong to the space $L^1$.
According to the results of Muñoz-Perez and Sanchez-Gomez [2, 3], for $X, Y \in L^1$, we say that the random variable $X$ is at least as dispersed as $Y$, denoted by
$$Y \leq_W X, \quad \text{if } W_Y(u) \leq W_X(u) \text{ for all } u \in (-\infty, +\infty).$$
It can be easily seen that a degenerate variable is the lower bound of the family of finite-mean random variables. Before stating the main results of this paper, we first study some properties of the dispersive ordering.

Lemma 1. Suppose that $X$ and $Y$ are two independent random variables. Then,
$$W_{X+Y}(u) = \int_{-\infty}^{+\infty} W_X(u - t)\, dF_Y(t) = \int_{-\infty}^{+\infty} W_Y(u - t)\, dF_X(t),$$
where $F_X$ and $F_Y$ are the distribution functions of $X$ and $Y$, respectively.
Proof. Let $F_{X+Y}$ be the distribution function of $X + Y$; we have
$$F_{X+Y}(t) = \int_{-\infty}^{+\infty} F_X(t - s)\, dF_Y(s).$$
Besides, by the representation (4),
$$W_{X+Y}(u) = \int_{-\infty}^{u} F_{X+Y}(t)\, dt + \int_{u}^{+\infty} \left(1 - F_{X+Y}(t)\right) dt.$$
Using the results just obtained and Fubini's theorem, we have
$$W_{X+Y}(u) = \int_{-\infty}^{+\infty} \left[ \int_{-\infty}^{u-s} F_X(v)\, dv + \int_{u-s}^{+\infty} \left(1 - F_X(v)\right) dv \right] dF_Y(s) = \int_{-\infty}^{+\infty} W_X(u - s)\, dF_Y(s).$$
This completes the proof of the lemma.
Theorem 2. Suppose that the random variables $X, Y, Z \in L^1$. Then, we have the following.
Proof. According to the above, the dispersion function is a convex function whose derivative exists almost everywhere and is bounded by $-1$ and $1$. Combining (27) and (28) or (29), we get the complete proof.
The last property (4) can be obtained from the previous lemma.
The following theorem gives us an important property of the dispersive ordering.

Theorem 3. Let $(F_n)_{n \in \mathbb{N}}$ be a sequence of distribution functions. If they are monotone and bounded in the sense of the dispersive ordering, then they converge weakly.
Proof. According to the properties of the dispersion functions shown in Section 2, we get the complete proof.
From Theorem 3, we have the following interesting results.
Corollary 4.
If $\{X_n, \mathcal{F}_n\}$ is a martingale and is $L^1$-bounded, then the corresponding sequence of distribution functions converges weakly.
Proof. It is known that if $\{X_n, \mathcal{F}_n\}$ is a martingale, then $(E|X_n - u|)$ is a nondecreasing sequence. From the boundedness condition and Theorem 3, we obtain the conclusion.

Theorem 5. Let $\{X_n,\ n \geq 1\}$ be a sequence of independent $L^1$-random variables. Moreover, suppose that $X$ is an $L^1$-random variable satisfying
$$X_n \leq_W X, \quad \forall n \in \mathbb{N}. \qquad (30)$$
Then
$$\bar{S}_n \leq_W X, \qquad (31)$$
where
$$\bar{S}_n = \frac{1}{n} \sum_{k=1}^{n} X_k. \qquad (32)$$

Proof. Without loss of generality, we assume that $E(X) = E(X_n) = 0$ for all $n \geq 1$.
We have
$$F_{S_n} = F_{X_1} * F_{X_2} * \cdots * F_{X_n},$$
where $*$ is the notation of convolution between distribution functions and $S_n = \sum_{k=1}^{n} X_k$. Using the results just obtained, together with Lemma 1 and (30), we obtain (31), and this completes the proof.
Note that it makes sense to consider the application of the dispersive ordering to limit theorems in probability (for the $L^1$-weak law of large numbers) as a new approach, and it deserves further investigation.
Moreover, as is well known, a famous class of estimators is the minimum-variance unbiased estimators. This notion is based on the existence of the variance, which is considered as a measure of dispersion. It is natural to link it to an estimator based on the dispersive ordering.

Definition 6. Suppose that $(X_1, X_2, \ldots, X_n)$ is a random sample from the family of distributions $F(x, \theta)$. The estimator $\hat{\theta}$ is called a minimum-dispersive unbiased estimator if $\hat{\theta}$ is unbiased and $\hat{\theta} \leq_W \theta^*$ holds for every unbiased estimator $\theta^*$ of $\theta$.
It can be shown that the sample mean is a good estimator in the meaning of dispersive ordering.
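A short justification of this claim (a sketch under the i.i.d. sampling assumption): by the triangle inequality, for every $u$,
\[
W_{\bar{X}_n}(u)
= E\Bigl|\tfrac{1}{n}\sum_{k=1}^{n}(X_k - u)\Bigr|
\le \frac{1}{n}\sum_{k=1}^{n} E|X_k - u|
= W_{X_1}(u),
\]
so the sample mean $\bar{X}_n$ satisfies $\bar{X}_n \leq_W X_1$; that is, it is no more dispersed than a single observation.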
The proof is completed. | 2019-03-10T13:05:20.869Z | 2013-09-11T00:00:00.000 | {
"year": 2013,
"sha1": "a8aa2037f8e6f059a0b71a4f9c60fd85463fe198",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/archive/2013/780587.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a8aa2037f8e6f059a0b71a4f9c60fd85463fe198",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
84853411 | pes2o/s2orc | v3-fos-license | BIODISTRIBUTION STUDIES OF BEE VENOM AND SPIDER TOXIN USING RADIOTRACERS
The use of radiotracers allows the understanding of the bioavailability process, biodistribution, and kinetics of any molecule labelled with an isotope, as long as labelling does not alter the molecule's biological properties. In this work, technetium-99m and iodine-125 were chosen as radiotracers for biodistribution studies in mice using bee (Apis mellifera) venom and a toxin (PnTX2-6) from the Brazilian "armed" spider (Phoneutria nigriventer) venom. Incorporated radioactivity was measured in the blood, brain, heart, lung, liver, kidney, adrenal gland, spleen, stomach, testicle, intestine, muscle, and thyroid gland. The results provided blood kinetic parameters and distribution rates for the different organs.
INTRODUCTION
The advantages of using radiotracers are widely acknowledged in medicine, both for diagnosis and therapy. Though radiotracers allow an important approach to understanding the bioavailability process, biodistribution, and kinetics of any toxin, their use in animal venom research has been very scanty.
Animal venoms, and mainly the isolated toxins, have been important tools in biochemical, physiological, and pathological studies, as well as in the development of new drugs. Their biological activities are selective and specific, and they very often present synergistic action. Animal venoms share basic features, such as being complex mixtures of proteins and peptides with great structural diversity.
In this work, two assays with different radiotracers were carried out: technetium-99m and iodine-125. They were chosen as radiotracers for biodistribution studies of a complex mixture (bee venom) and an isolated peptide (PnTX2-6 toxin) in mice. These materials have very different chemical characteristics, although both present low molecular weight components with high toxicity. They are also of clinical interest due to the high frequency of severe accidents involving human beings (1, 3).
The aim of this study is to show the feasibility of assays with radiotracers and to learn more about the transport mechanisms, the compartments involved, and the residence times at sites of action for the chosen toxin and venom.
MATERIAL AND METHODS
Apis mellifera (Hymenoptera, Apidae) venom of high purity was supplied by the Center for the Studies of Social Insects (CEIS - UNESP, Rio Claro, São Paulo State, Brazil), and Phoneutria nigriventer venom was obtained from Butantan Institute (São Paulo, Brazil). Both were collected by the electric shock method, then frozen and lyophilized.
Na 125 I was purchased from MDS Nordion, and the 99 Mo/ 99m Tc generator from the Radiopharmaceutical Center, IPEN (São Paulo, Brazil). The toxin was labelled with iodine-125, and the Apis mellifera venom with technetium-99m.
PnTX2-6 toxin preparation
Phoneutria nigriventer venom was dissolved in cold ammonium formate buffer, 100 mM, pH 6.0, and centrifuged at 10000 g for 5 minutes. The soluble phase was fractionated in two steps: in Sephadex G-50 gel with the same dissolution buffer, and in reverse-phase HPLC (µRPC C2/C18 column, Pharmacia, using extended linear gradients of acetonitrile in 0.1% (v/v) aqueous trifluoroacetic acid).
Apis mellifera venom labelling with technetium-99m
Sodium pertechnetate (Na 99m TcO 4 ) in saline solution was obtained from the 99 Mo/ 99m Tc generator. The labelling reaction was performed by incubating for 30 minutes, at room temperature, a solution with 99m Tc (55.5 MBq, radiochemical purity ≥ 98%) and bee venom solution: 20 µL of 99m Tc MDP.SnCl 2 .H 2 O (5.1 mg/mL, pH 6.4) and 125 µL of venom solution (2 mg/mL in saline). After the reaction, the volume was increased with saline solution in order to carry out the biodistribution studies in animal models.
PnTX2-6 toxin labelling with Na 125 I
For the isolated peptide labelling, a method that uses chloramine T under mild conditions and low temperature was chosen (4). The reaction mixture was fractionated by gel filtration to separate the radioiodinated toxin from free iodine-125 and other contaminants.
Tracers control
In order to evaluate the radiochemical purity of the 99m Tc-labelled compounds, analyses were conducted by HPLC, with a TOSO HAAS TSK-G2000 column, using a linear gradient of water-acetonitrile-phosphoric acid 85% (500:1:1). For the 125 I-toxin, an additional control with measurement of iodine-125 in the thyroid gland of all injected mice was carried out.
Kinetic assay and bioavailability
For the kinetic assay and bioavailability, adult Balb/C male mice were used. The animals received food and water ad libitum, and were kept in a room with natural light and controlled temperature (20 °C). 99m Tc-venom was injected intraperitoneally. In the biodistribution analysis, the radiotracer concentration in each organ was expressed as the percentage of injected tracer per mass unit of tissue (%D/g). Kinetic behaviour was estimated using a graph of the mean %D/g versus time (log × log).
Blood analysis was performed by plotting the radioactivity measured in the collected samples versus time (cpm/50 µL for the 125 I-PnTX2-6 toxin and cpm/mL for the 99m Tc-bee venom). Data were fitted to exponential decay models (one or two compartments). This adjustment uses the conventional compartment model and provides access to information about distribution, metabolism, and elimination.
Statistical analysis and calculations were carried out with Microsoft Excel and GraphPad Prism software.
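As an illustration of the compartment fitting described above, here is a minimal Python sketch of the open two-compartment model C(t) = A·e^(-αt) + B·e^(-βt); it is a stand-in only (the study used Excel and GraphPad Prism), and the data below are simulated from the constants reported in Figure 1, not the study's raw measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_compartment(t, A, alpha, B, beta):
    # Open two-compartment model: C(t) = A*exp(-alpha*t) + B*exp(-beta*t)
    return A * np.exp(-alpha * t) + B * np.exp(-beta * t)

# Simulated blood radioactivity (cpm/50 uL) over 24 h, generated from the
# constants reported in Figure 1 (A=27890, alpha=9.242, B=12780, beta=0.1873).
t = np.array([0.083, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 24.0])  # hours
c = two_compartment(t, 27890, 9.242, 12780, 0.1873)

params, _ = curve_fit(two_compartment, t, c, p0=[20000, 5.0, 10000, 0.1])
A, alpha, B, beta = params
print(f"A={A:.0f}, alpha={alpha:.3f}, B={B:.0f}, beta={beta:.4f}")
```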
RESULTS
Technetium-99m incorporation and radioiodination procedure yields were adequate.
The chromatographic analysis demonstrated that unbound radioisotope decreased to less than 3% in both cases after purification (removal of unreacted radioisotope). This percentage was insignificant and is reflected by the low initial thyroid gland uptake in mice injected with radioiodinated PnTX2-6 toxin. This gland actively takes up free iodine, making it a good indicator of the radiochemical purity of this preparation, particularly at the initial time points. The thyroid gland graph shows that free iodine-125 uptake was only observed after 30 minutes, suggesting that it was a degradation product of toxin metabolism.
In the Tables and Figures, the mean values obtained show a kinetic profile for each organ and for the blood.
Figure 1 shows the kinetics of the 125 I-PnTX2-6 toxin in different organs; the data are summarized in Table 1. Analysis of the blood data and the calculated kinetic parameters suggests a two-compartment model, with a central and a peripheral compartment.
There are many organs in the peripheral compartment that interact with the toxin, so it is not possible to indicate only one target. No significant difference was found in the final phase of the curves for organs with similar biologic half-lives. Figure 2 and Table 2 show the 99m Tc-bee venom results. The experimental data could not be fitted to one- or two-compartment models, and only a plotted curve was proposed. The technetium-99m radioisotope label was chosen for this pharmacokinetic study with bee venom due to its favourable radiation characteristics, the possibility of labelling all bee venom components, easy availability, and low cost.
Iodine-125 has been indicated for numerous applications due to its easy, simple, and safe radioiodination procedure, the stability of the labelled product, and its ease of detection (2).
However, radioiodination has some limitations for labelling reactions with mixtures and small molecules. Iodine-125 binds to molecules at tyrosine residues (and, under some special conditions, at histidine or tryptophan). These amino acids are present in the primary structure of PnTX2-6, but they do not appear in all the bee venom components.
In some cases, the size of this radioisotope causes important structural alterations or active-site blockade, with loss of biological activity. Therefore, control tests must verify that the radiotracer maintains the native toxin properties.
The yields and radiochemical purity analyses of the obtained radiotracers were all adequate, warranting the biodistribution assays.
The first study was done with purified PnTX2-6 toxin obtained from Phoneutria nigriventer venom. This spider is responsible for a great number of arachnid accidents in Brazil (3), which can be very severe in elderly men and children. Its venom caused intense pain, salivation, dyspnea, vomiting, and priapism when sublethal doses were administered to male dogs. Various toxins have been identified in this venom (7), and PnTX2-6 has been completely sequenced. It has a molecular mass of 5,290 Daltons and specific activity at sodium channels (8).
125 I-PnTX2-6 biodistribution analysis suggested that the largest percentage of labelled toxin uptake occurred in the kidneys, indicating that it is practically eliminated by renal excretion. Organs with larger vascularization, in increasing order of uptake, are the lung, spleen, heart, and liver. The lung is a critical organ because respiratory distress and possible pulmonary oedema are serious symptoms observed in patients. The heart may be a target organ for this toxin, since cardiac alterations were observed in experimental assays and in patients (5).
In heart muscle cells, high intracellular sodium concentration promotes calcium release from the sarcoplasmic reticulum. An abrupt rise in cytosolic calcium can trigger rapid contraction mechanisms and produce characteristic ultrastructural damage in the cardiac and skeletal muscle fibres of amphibians and mammals (9).
The presence of toxin in the liver suggests hepatic metabolism. A rapid brain uptake within 0.083 hours after the injection indicated that the tracer efficiently crossed the intact blood-brain barrier. One part of the radioactivity (non-specific binding and free ligands) was washed out from the brain about 1 hour after the injection, while the remaining fraction (specific binding) was retained for a long time (about 20-25% of the brain-accumulated radioactivity). This binding proved to be efficient in the brain and remained metabolically stable at the binding sites for a sufficiently long period. The tracer also remained stable for a longer period in the testicles and muscles, but at low concentrations.
In the muscle, this fact is probably related to myonecrosis caused by a direct action on the sarcolemma, which first changes its permeability to sodium ions and promotes osmotic-like alterations (muscle cell vacuolation), and by a secondary indirect action, in which membrane disruption occurs in response to the uncontrolled increase in intracellular pressure and cell volume due to sodium and calcium ion influxes (9).
The second sample analysed was Apis mellifera venom. It is a complex mixture of biogenic amines, peptides, and enzymes. All its components act synergistically, leading to local, humoral, and systemic effects (1). Humans have been victims of attacks by bees, which use their venom to protect their colonies from intruders. Depending on the victim's immunological sensitivity and the number of stings, anaphylaxis or other pathological effects can occur. Therefore, it is interesting to know more details about the duration of venom activity, its distribution, and its retention in animal models, particularly regarding the biological effects observed in victims.
Analysis of the data obtained from Apis mellifera venom demonstrated that the kidneys and liver showed the highest radiotracer concentrations at all time points, while the spleen, heart, and brain presented the lowest quantities. The quantities found in the liver suggest that hepatic metabolism may have occurred. The quantity detected in the skeletal muscle confirms this tissue's selectivity for bee venom, and the induced myonecrosis can be related to PLA 2 and melittin action (1). The venom radioactivity profile detected in the kidneys over 24 hours suggests that they may excrete the venom or some of its metabolites. It was expected that the kidneys would be the main elimination route, but the amount excreted in faeces proved to be more important.
This fact may be caused by the venom's nephrotoxicity. Therefore, any venom compound could undergo both hepatic and renal metabolism.
Reabsorption was observed in both assays: with the 125 I-PnTX2-6 toxin and with the 99m Tc-bee venom. The increase in radioactivity concentration in several organs occurred just after the increase in the bloodstream and the decrease in the stomach and intestine, which suggests that these organs could act as transcellular reservoirs. According to Mattiello-Sverzuta et al. (9), there is a mechanism of slow gastric release caused by intravenous injection of spider venom, so the stomach may be a target organ for the PnTX2-6 toxin. There are no similar data for bee venom.
After clearance of the non-specific and free ligands from all the organs analysed, the internalised radioactive molecules showed similar slopes in the fitted curves, with very long residence times for both the 125 I-PnTX2-6 toxin and the 99m Tc-bee venom.
CONCLUSION
Biodistribution studies conducted with radiotracers can be successfully used in animal venom and toxin research. The method is sensitive, accurate, and simple to apply.
It allowed in vivo observation of the behaviour of Apis mellifera venom and of the PnTX2-6 toxin (isolated from Phoneutria nigriventer venom).
Figure 1. 125 I-PnTX2-6 toxin kinetic behaviour in different organs over 24 hours after intravenous administration. Data were expressed as the percentage of total dose (%D) injected per tissue weight (g) or per 50 µL of blood sample versus time. Blood sample data were adjusted in two phases: exponential decay, or the open two-compartmental model C(t) = A·e^(-αt) + B·e^(-βt). B is the intercept of the back-extrapolated monoexponential elimination slope β with the ordinate; A is the intercept of the distribution slope α with the ordinate (27,890 and 12,780 cpm/50 µL, respectively); α and β are the absorption and elimination hybrid constants, respectively (9.242 and 0.1873).
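From the hybrid constants in this caption, the corresponding distribution and elimination half-lives follow directly from t½ = ln 2/k (a back-of-the-envelope check; units of h⁻¹ are assumed for α and β):

```python
import math

alpha, beta = 9.242, 0.1873   # hybrid constants from Figure 1 (assumed h^-1)
print(math.log(2) / alpha)    # distribution half-life ~0.075 h (~4.5 min)
print(math.log(2) / beta)     # elimination half-life ~3.70 h
```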
Figure 2. 99m Tc-Apis mellifera venom kinetic behaviour in different organs over 24 hours after intraperitoneal administration. Data were expressed as the percentage of total dose (%D) injected per tissue weight (g) or per 1 mL of blood sample versus time. Blood curves were plotted from the experimental points.
Table 1. 125 I-PnTX2-6 toxin distribution in different organs of mice sacrificed over 24 hours after intraperitoneal administration. Data were expressed as the percentage of total dose injected per tissue weight (%D/g ± S.D.).
Table 2. 99m Tc-Apis mellifera venom distribution in different organs of mice sacrificed over 24 hours after intraperitoneal administration. Data were expressed as the percentage of total dose injected per tissue weight (%D/g ± S.D.).
Radioisotopes were chosen as radiotracers based on the molecule's chemical structure, the convenience of the labelling method, and the detectability of the label in different biological samples.
Tissue uptake depends on blood perfusion and on the ease of passing through vessels and penetrating the cell (6). Diffusion times for different tissues, identification of target tissues, and residence times are important information revealed by biodistribution studies. | 2018-12-10T22:09:23.360Z | 2005-03-01T00:00:00.000 | {
"year": 2005,
"sha1": "1b985f43e0cfefc05d8114da9106a83372af7c1f",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/jvatitd/a/TBCCLkCmNCCNfWgC4qPWg8j/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "1b985f43e0cfefc05d8114da9106a83372af7c1f",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology"
]
} |
603529 | pes2o/s2orc | v3-fos-license | Distinct seasonal migration patterns of Japanese native and non‐native genotypes of common carp estimated by environmental DNA
Abstract Understanding behavioral differences between intraspecific genotypes of aquatic animals is challenging because we cannot directly observe the animals underwater or visually distinguish morphologically similar counterparts. Here, we tested a new monitoring tool that uses environmental DNA (eDNA), an assemblage of DNA in environmental water, to specifically detect Japanese native and introduced non-native genotypes of common carp (Cyprinus carpio) in Lake Biwa, Japan, and estimated differences between the two genotypes in the use of inland habitats. We monitored the ratios of native and non-native single nucleotide polymorphism alleles of a mitochondrial locus of common carp in a lagoon connected to Lake Biwa for 3 years using eDNA. We observed seasonal dynamics in the allele frequency showing that the native genotype frequency peaked every spring, suggesting that native individuals migrated to the lagoon for spawning and then returned to the main lake, whereas non-native individuals tended to stay in the lagoon. The estimated migration patterns corresponded with the estimates of a previous study, which were based on commercial fish catch data. Our findings suggest that eDNA-based monitoring can be a useful tool for addressing intraspecific behavioral differences underwater.
| INTRODUCTION
Intraspecific behavioral differences driven by genetic variation are an important factor that affects ecological and evolutionary processes (Wolf & Weissing, 2012). However, in situ observation of such differences is generally very difficult in aquatic animals because of the inability to observe underwater animals and the difficulty in visually discriminating morphologically similar counterparts. A new development in monitoring animal distribution during the last decade can fill this methodological gap, as this method uses environmental DNA (eDNA) as a trace of animal presence. eDNA is defined as DNA assemblages existing in environmental media, such as water and soil. Since genetic assessment was first applied to eDNA recovered from water to estimate the distribution of bullfrogs (Ficetola, Miaud, Pompanon, & Taberlet, 2008), eDNA-based monitoring of animal distribution has rapidly developed and gained general acceptance for use in detecting specific species (see reviews by Lodge et al., 2012; Rees, Maddison, Middleditch, Patmore, & Gough, 2014; Thomsen & Willerslev, 2015) and taxa (Miya et al., 2015; Thomsen et al., 2012; Valentini et al., 2016) in aquatic systems.
The eDNA method can precisely identify species from genetic information, which is especially useful when dealing with similar-looking organisms, such as intraspecific genotypes. We previously developed an eDNA method to quantify the relative biomass ratio of intraspecific genotypes of common carp (Cyprinus carpio) based on a single nucleotide polymorphism (SNP) (Uchii, Doi, & Minamoto, 2016). Here, we tested the potential use of this method to estimate the difference in the seasonal migration patterns of two genotypes of common carp.
As the Japanese archipelago broke apart from the Eurasian continent, a unique strain of common carp has evolved in Japan. Based on mitochondrial sequence data, the Japanese strain is estimated to have diverged from Eurasian ones ca. two million years ago (Mabuchi, Senou, Suzuki, & Nishida, 2005). However, domesticated Eurasian strains were introduced to Japan in 1905 at the latest (Maruyama, Fujii, Kijima, & Maeda, 1987), resulting in today's extensive invasion of non-native common carp throughout Japanese freshwater systems (Mabuchi, Senou, & Nishida, 2008). Lake Biwa, the largest and oldest lake in Japan, is an important habitat for the Japanese native strain and other endemic freshwater fishes of Japan. An early report suggested that the native strain inhabited the main lake and migrated to littoral and inland habitats in the spring for spawning, whereas the nonnative strains inhabited the littoral and inland habitats all year round (Hurukawa, 1966). This estimate was based on only 1 year of commercial catch data in which the two strains were visually discriminated; thus, we still do not have concrete evidence about the migration patterns of native and non-native strains.
In this study, we applied eDNA monitoring to estimate the seasonal use of inland habitats by the native and non-native genotypes.
The ratio of the two genotypes in a lagoon connected to Lake Biwa was monitored for 3 years using a previously developed eDNA assay that quantifies the ratio of native and non-native DNA alleles based on a SNP (Uchii et al., 2016). We observed reproducible seasonal dynamics in the ratio of the two genotypes, which suggested differences in migration patterns between them.
| Water sampling and filtration for eDNA collection
Water sampling was performed in Ibanaiko Lagoon (Fig. 1), which is connected to Lake Biwa, Japan, 19 times between April 2013 and April 2016. To cover the wide range of the lagoon system, we set four monitoring sites on the shore, three of which were located from upstream to downstream in the lagoon (sites a-c; Fig. 1), and one of which was located in a channel flowing from the lagoon to the main lake (site d; Fig. 1). We collected 2 L of surface water at the four shore sites on every sampling occasion. In total, 76 samples (4 sites × 19 sampling times; Table 1) were collected on the shore. We also collected 2 L of surface water from three offshore sites (sites e-g; Fig. 1) on some sampling occasions. Sampling bottles, which had been washed with detergent and distilled water in the laboratory, were rinsed with environmental water collected on site at least three times before sampling. Water samples were immediately placed in a cold box and transported to the laboratory within 5 hr. Each water sample was filtered onto a Whatman ® GF/F filter (47-mm diameter, 0.7-μm particle retention size; cat no. 1825-04, GE Healthcare Japan, Tokyo, Japan) to capture eDNA. The filtered volume per filter ranged from 330 ml to 1 L but was 500 ml in most samples (see Table S1). The filtration apparatus was decontaminated with 5% bleach for at least 5 min and thoroughly washed with tap water and distilled water between samples. At the end of each sampling day, 1 L of distilled water was subjected to filtration in the same manner as the samples to control for cross-contamination during filtration and subsequent DNA extraction procedures. The filters were stored at −20°C until DNA extraction.
| eDNA extraction from GF/F filters
DNA was extracted from each GF/F filter using the DNeasy ® Blood & Tissue Kit (Qiagen, Hilden, Germany). Each filter was soaked with 400 μl of Buffer AL and 40 μl of Proteinase K in an internal container of a Salivette ® tube (Cat no. SAR-511534; Sarstedt, Nümbrecht, Germany) and incubated at 56°C for 30 min. After centrifugation at 5,000 g for 5 min, 220 μl of TE buffer (pH 8.0) was added to each filter in the tube, and the tubes were centrifuged again under the same conditions. Buffer AL (200 μl) and 100% EtOH (600 μl) were then added to each filtrate and mixed by pipetting. The mixture (~600 μl) was applied to a DNeasy Mini Spin Column and centrifuged at 6,000 g for 1 min. This step was repeated until the mixture was completely processed. We followed the manufacturer's instructions for further steps. Finally, DNA was eluted from the column with 100 μl of Buffer AE, with some exceptions (150 μl; Table S1).
FIGURE 1 Sampling locations in Ibanaiko Lagoon, which is connected to Lake Biwa. Shore sites are labeled a-d and offshore sites are labeled e-g.
| Quantification of ratios of native and nonnative alleles in eDNA
Cycling probe technology is a highly sensitive method that uses DNA-RNA-DNA chimeric probes to detect SNPs; it involves using ribonuclease H to cut the RNA portion of the chimeric probe that hybridizes to the target DNA, thereby enabling sequence-specific detection (Bekkaoui, Poisson, Crosby, Cloney, & Duck, 1996; Yatabe et al., 2006). We quantified the relative abundances of the native and non-native DNA alleles with this approach. The last base (A) at the 3′ end of the original reverse primer (Uchii et al., 2016) was removed to increase mismatches to related species within the last five bases at the 3′ end (Appendix 1). Standard mixtures of native and non-native DNA (50-100 copies/reaction) in the ratios of 15:1, 7:1, 3:1, 1:1, 1:3, 1:7, and 1:15 (15, 7, 3, 1, 0.33, 0.14, and 0.07 in terms of native/non-native DNA concentration ratios, respectively) were prepared, and at least five of them were included in triplicate or duplicate in every run (see details in Table S2). Using standard curves created between ΔC T (calculated as C T(non-native DNA) − C T(native DNA) ) and the native/non-native DNA concentration ratios of the standard mixtures, the native/non-native DNA concentration ratios of eDNA samples were quantified when the amplification signals were detected at C T < 40 for both native and non-native probes in at least two PCR replicates of a sample. A replicate in which either probe showed no signal was omitted from the calculation. At least two PCR blanks were included in every run to control for DNA contamination in the PCR. See Appendices 2 and 3 for the PCR inhibition test.
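A minimal sketch of how such a ΔC T standard curve could be fitted and inverted (the ΔC T values below are made-up placeholders, not the study's data; the curve is assumed to be approximately linear in log2 of the ratio):

```python
import numpy as np

# Standard mixtures: native/non-native concentration ratios and measured
# delta-Ct values (delta_Ct = Ct_non-native - Ct_native). Placeholder values.
ratios = np.array([15, 7, 3, 1, 1/3, 1/7, 1/15])
delta_ct = np.array([3.8, 2.7, 1.5, 0.0, -1.6, -2.8, -3.9])

# Fit the standard curve: delta_Ct approximately linear in log2(ratio).
slope, intercept = np.polyfit(np.log2(ratios), delta_ct, 1)

def native_nonnative_ratio(ct_nonnative, ct_native):
    """Invert the standard curve to estimate the concentration ratio."""
    d = ct_nonnative - ct_native
    return 2 ** ((d - intercept) / slope)

# A positive delta-Ct means the native probe crossed threshold earlier,
# i.e., more native than non-native DNA in the sample.
print(native_nonnative_ratio(33.1, 31.9))  # ~2.3
```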
| Quantification of common carp cytochrome b DNA in eDNA
When estimating the native and non-native allele frequencies in the lagoon, we quantified the amount of common carp cytochrome b DNA (CytB), which was previously demonstrated to be linearly correlated with carp biomass in aquarium experiments (Takahara, Minamoto, Yamanaka, Doi, & Kawabata, 2012), to adjust for variations among sites in the amount of DNA. The copy numbers of CytB were quantified according to Takahara et al. (2012). Briefly, real-time PCR was performed in triplicate in a reaction volume of 20 μl containing 1× TaqMan master mix with the carp-specific primers and probe (Table S3). The CytB concentrations of eDNA samples were quantified when the amplification signals were detected in at least two PCR replicates of a sample. The concentration of CytB at each site (copies/L) was calculated from the volumes of filtered water (330-1,000 ml) and eluted DNA (100-150 μl).
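The copies-per-litre conversion stated in the last sentence is simple scaling; a sketch follows (the 2-μl template volume in the example call is an assumption consistent with the inhibition test in Appendix 2, not a value stated in this protocol):

```python
def cytb_copies_per_litre(copies_per_reaction, template_ul, elution_ul, filtered_ml):
    """Scale a per-reaction qPCR quantity up to copies per litre of lake water."""
    copies_per_ul_extract = copies_per_reaction / template_ul
    total_copies = copies_per_ul_extract * elution_ul     # whole DNA extract
    return total_copies / (filtered_ml / 1000.0)          # per litre of water filtered

# e.g. 50 copies/reaction, 2 ul template, 100 ul elution, 500 ml filtered:
print(cytb_copies_per_litre(50, 2, 100, 500))  # -> 5000.0 copies/L
```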
| Estimation of native and non-native genotype frequencies using eDNA
The ratio of native DNA was calculated for each eDNA sample as the proportion of native DNA in the total carp DNA detected, i.e., r/(1 + r), where r is the quantified native/non-native DNA concentration ratio. The time-series data of the genotype frequencies over 3 years at each of the four shore sites were checked using a unit root test to discard the possibility of randomness. We performed the Phillips-Perron unit root test for the null hypothesis that the time series has a unit root against a stationary alternative. To check the possibility that the genotype frequencies linearly decreased or increased over time, the genotype frequencies of the four shore sites were also analyzed by a binomial generalized linear model (GLM) with logit link function, with sampling year and site as explanatory factors. All statistical analyses were performed using R ver. 3.3.1 (R Core Team 2016).
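For readers who prefer a concrete sketch of these two checks, the following Python stand-in (the study itself used R 3.3.1) applies a Phillips-Perron test per site via the third-party `arch` package and a binomial GLM via statsmodels; the data are simulated placeholders:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from arch.unitroot import PhillipsPerron  # third-party 'arch' package

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "native_freq": rng.uniform(0.2, 0.8, 76),          # placeholder frequencies
    "year": np.repeat([2013, 2014, 2015, 2016], 19),   # placeholder design
    "site": np.tile(list("abcd"), 19),
})

# Phillips-Perron unit root test per site (H0: the series has a unit root).
for site, grp in df.groupby("site"):
    print(site, PhillipsPerron(grp["native_freq"].to_numpy()).pvalue)

# Binomial GLM with logit link: year and site as explanatory factors.
fit = smf.glm("native_freq ~ C(year) + C(site)", data=df,
              family=sm.families.Binomial()).fit()
print(fit.summary())
```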
| Contamination during filtration, DNA extraction, and real-time PCR
No amplification signals were detected in the eDNA extracts from the control GF/F filters used to filter 1 L of distilled water in the real-time PCRs quantifying the native/non-native DNA ratios and CytB. No amplification signals were detected in the PCR blanks in any real-time PCR assays.
| Dynamics of the frequencies of native and non-native genotypes
The native/non-native DNA concentration ratios, which were quantified based on the SNP in the D-loop region, were successfully estimated for 85 of 97 samples (see Table S2 for C T values and standard curves). The 12 eDNA samples for which no estimate could be obtained had low copy numbers of common carp mtDNA (i.e., <1 copy/μl of CytB). The binomial GLM detected no significant effects of sampling year or site, or interactions between these two factors (Chi-square = 1.12, df = 9, p > .9), discarding the possibility that the genotype frequencies linearly decreased or increased over time. The nonweighted frequencies of the native genotype, estimated by simple averaging of the ratios of native DNA at the shore sites, showed repetitive seasonal dynamics in which the frequencies were highest in the spring, when spawning occurred, and lower in other seasons (Fig. 2a, plotted as open squares). The nonweighted frequencies of the native genotype estimated from the shore and offshore sites combined (Fig. 2a, open circles) showed similar values to those estimated from the shore sites alone (open squares). The CytB-weighted frequencies of the native genotype (Fig. 2b) showed slightly different dynamics compared with the nonweighted frequencies. However, the overall trend was similar, as the CytB-weighted frequencies of the native genotype were also highest in the spring every year. The CytB-weighted frequencies estimated from the shore sites alone (Fig. 2b, solid squares) showed similar values to those estimated from the shore and offshore sites combined (solid circles).
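The nonweighted versus CytB-weighted contrast can be written out explicitly; the reading of "CytB-weighted" as a weighted mean of per-site native ratios, with weights given by each site's CytB concentration, is an assumption, and the numbers are illustrative:

```python
import numpy as np

native_ratio = np.array([0.62, 0.55, 0.48, 0.70])  # per-site native DNA ratios (made up)
cytb = np.array([1200.0, 300.0, 800.0, 150.0])     # per-site CytB copies/L (made up)

nonweighted = native_ratio.mean()
# Assumed reading of "CytB-weighted": sites with more total carp eDNA count for more.
cytb_weighted = np.average(native_ratio, weights=cytb)

print(nonweighted, cytb_weighted)
```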
| DISCUSSION
We estimated the differences in the seasonal use of inland habitats between Japanese native and introduced non-native genotypes of common carp in Lake Biwa by monitoring SNP allele frequencies in eDNA. Samples collected over 3 years, including four springs, showed repetitive seasonal dynamics in the frequency of the native genotype for both the nonweighted (Fig. 2a, open squares) and CytB-weighted (Fig. 2b, solid squares) estimates, suggesting that many individuals with the native genotype left the lagoon and moved, presumably to the main lake, whereas non-native individuals tended to remain in the lagoon. Such a difference in migration patterns between the native and non-native genotypes corresponds well to the previous estimate based on commercial fish catch data (Hurukawa, 1966). The latter showed that the catch amount of the native strain increased in the spring and decreased after the summer in littoral and inland habitats, and increased in the winter in pelagic habitats. However, the catch amount of the non-native strains did not fluctuate substantially in the littoral and inland habitats and was very low in the pelagic habitats throughout the year (Hurukawa, 1966). Furthermore, the seasonal changes in the native genotype frequency observed in the present study were roughly consistent with the changes detected in common carp individuals that were caught in Ibanaiko Lagoon in a previous study (Uchii, Okuda, Minamoto, & Kawabata, 2013; see Appendix 4). These observations suggest the potential use of eDNA for detecting behavioral differences between closely related species.
FIGURE 2 (a) Nonweighted frequencies of the native genotype in the lagoon estimated from the shore sites (open squares) and the shore and offshore sites combined (open circles). These frequencies were calculated when the ratios of native DNA were quantified at more than two sites on each sampling day. Vertical bars represent standard errors. The ratios of native DNA at each site (a-g) are also plotted. (b) CytB-weighted frequencies of the native genotype in the lagoon estimated from the shore sites (solid squares) and the shore and offshore sites combined (solid circles). These frequencies were calculated when both the ratios of native DNA and the concentrations of CytB were quantified at more than two sites on each sampling day.
When we deal with morphologically similar organisms, such as intraspecific genotypes and subspecies, visual discrimination is often difficult. The eDNA method can readily discriminate genotypes because it uses genetic information. Furthermore, we can obtain long-term data on genotype frequencies with substantially less effort than is needed for traditional catch sampling or biotelemetry (i.e., genetic analyses of large numbers of individual fish after catch, or long-term tracking of large numbers of individuals after tagging them, respectively). As research generally requires long-term observations to detect seasonal patterns of animal migration, the time- and cost-effectiveness of the eDNA method is a great advantage.
On small spatial or short temporal scales, there is difficulty in linking eDNA information with animal presence and biomass due to eDNA dispersal, eDNA degradation, and animal movement (Barnes & Turner, 2016;Barnes et al., 2014). Thus, eDNA monitoring would be more suitable for tracking animal movement at larger spatial and temporal scales, such as seasonal migration, because the dispersal range of eDNA and the stability of DNA molecules are limited in the environment (e.g., Barnes et al., 2014;Jane et al., 2015;Maruyama, Nakamura, Yamanaka, Kondoh, & Minamoto, 2014;Strickler et al., 2015). A recent study demonstrated that the existence of target DNA was correlated with the migration range of anadromous fish (Yamanaka & Minamoto, 2016). Another study demonstrated a significant positive correlation between DNA concentration and the number of bighead carp detected by telemetry in a spawning habitat in which the increase in DNA was presumably attributable to the individuals that migrated for spawning (Erickson et al., 2016). These studies, together with the present study, suggest the great potential of the eDNA method for monitoring seasonal animal movement.
Most eDNA studies conducted thus far have used mtDNA markers to increase the detection probability because the copy number of mtDNA is much greater than that of nuclear DNA. We also targeted mtDNA for the same reason. However, mtDNA markers have a disadvantage in that they do not distinguish hybrids, due to the maternal inheritance of mitochondria. As intraspecific genotypes can hybridize, we cannot exclude the possibility that the native genotype frequency based on mtDNA haplotypes would be over- or underestimated if cytonuclear disequilibrium were to exist. The use of nuclear DNA markers would solve this problem. Although several nuclear DNA markers that distinguish the Japanese native and non-native genotypes of common carp have been reported (Mabuchi, Song, Takeshima, & Nishida, 2012), these markers are single-copy and thus very difficult to detect in eDNA. However, because these markers can detect intraspecific genetic structure, the development of an effective eDNA concentration method to detect single-copy genes would greatly expand the scope of eDNA studies. Alternatively, the development of markers in multiple-copy nuclear genes would offer great potential for evaluating intraspecific genetic structure. Nuclear ribosomal DNA (rDNA) would be a strong candidate, as a recent study demonstrated that a marker in the internal transcribed spacer region of rDNA has greater sensitivity than a mtDNA marker in eDNA detection. As many eDNA studies have repeatedly noted, the greatest advantage of eDNA monitoring is the simplicity of its sampling and analysis procedures.
APPENDIX 2 Shift of threshold cycle (C T ) values in field eDNA samples spiked with the standard DNA mixture of native and non-native DNA at a ratio of 1:1 (100 copies each/reaction). The copy number of the spiked DNA was ca. >10 times larger than that of the target DNA originally contained in the field samples. Real-time PCR was performed in duplicate. Delayed C T values were observed in some samples when 5 μl of eDNA was used as template, whereas little or no inhibition effect was detected with 2 μl of template.
APPENDIX 4 Frequency of the native genotype by month for common carp (≥350 mm in standard length and appearing to be adult) captured in Ibanaiko Lagoon in 2006 and 2009 (Uchii et al., 2013). Because common carp are no longer commercially fished in this area, all carp were caught as bycatch. Note that the native strain of common carp is more susceptible to the lethal pathogen Cyprinid herpesvirus 3, which was introduced to Lake Biwa in 2003, than are the non-native strains (Ito, Kurita, & Yuasa, 2014); thus, the proportion of the native strain in Lake Biwa is likely to gradually decline with time (Uchii et al., 2013) | 2018-04-03T05:38:48.727Z | 2017-09-12T00:00:00.000 | {
"year": 2017,
"sha1": "3e3af217f94e2de8341ca938ccfa132ad3f1b4c1",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1002/ece3.3346",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3e3af217f94e2de8341ca938ccfa132ad3f1b4c1",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
19145460 | pes2o/s2orc | v3-fos-license | Proposed changes to the reimbursement of pharmaceuticals and medical devices in Poland and their impact on market access and the pharmaceutical industry
ABSTRACT In Poland, two proposed amendments to the reimbursement act are currently in preparation; these are likely to substantially change the pricing and reimbursement landscape for both drugs and medical devices. Proposed changes include: alignment of medical device reimbursement with that of pharmaceuticals; relaxing the strict reimbursement criteria for ultra-orphan drugs; establishment of an additional funding category for vaccines; introduction of compassionate use, and a simplified reimbursement pathway for well-established off-label indications; appreciation of manufacturers’ innovation and research and development efforts by creating a dedicated innovation budget; introduction of a mechanism preventing excessive parallel import; prolonged duration of reimbursement decisions and reimbursement lists; and increased flexibility in defining drug programmes. Both amendments are still at a draft stage and many aspects of the new regulations remain unclear. Nonetheless, the overall direction of some of the changes is already evident and warrants discussion due to their high expected impact on pharmaceutical and device manufacturers. Here we evaluate the main changes proposed to the reimbursement of drugs, vaccines, and medical devices, and examine the impact they are likely to have on market access and pharmaceutical industry in Poland.
The reimbursement system in Poland is soon likely to change significantly, with two amendments to the reimbursement act currently under discussion. The first amendment relates to the reimbursement of medical devices, aligning it with that of pharmaceuticals [1], while the other proposes major changes to the overall reimbursement system for drugs and vaccines [2]. Both amendments are still in the relatively early stages of the legislative process, with public consultations completed only in the second half of 2016. The amendment on medical devices was expected to come into force in mid-2017 [3]. Following public consultations, in late April 2017 the revised amendment was subject to cross-departmental discussions and approved by the Permanent Committee of the Government [4], allowing it to proceed further. However, as the deadlines for the next legislative steps are not fixed, the actual timeline for implementation is hard to predict. The timeline for the general amendment is also unclear. Major changes to the pricing and reimbursement regulations included in these amendments are listed in Table 1. This article aims to review the key changes proposed to the reimbursement of drugs, vaccines, and medical devices, and to assess their potential impact on market access and the pharmaceutical industry in Poland.
Definition of medical devices used in Poland
For the purpose of the 2011 Reimbursement Act, medical devices cover medical and in vitro diagnostic devices, and supporting equipment [5], which, while not a medical or diagnostic device itself, is necessary for using the device as intended by the manufacturer [6] (e.g., a PC sold together and configured for use with an ultrasound machine or a CT scanner). The proposed amendment leaves this medical device definition unchanged, but adds the concept of a disposable medical device, that is, 'a device intended for single use in only one patient' [1]. It is worth noting here that Poland uses a broad definition of medical devices. According to the 2010 Medical Devices Act [6], a medical device is 'any device, tool, software, material or article that, used independently or in combination, including with any accompanying software intended by the device manufacturer to be used for diagnostic or therapeutic purposes and necessary for proper use of the device, is designed by the manufacturer for use in humans in order to: a. diagnose, prevent, monitor, treat, or alleviate the symptoms of a disease, b. diagnose, monitor, treat, alleviate the symptoms, or compensate the consequences of an injury or handicap, c. examine, replace, or modify an anatomical structure or a physiological process, d. prevent conception, that does not achieve its intended effect through pharmacological, immunological, or metabolic means, but which action may be supported through such means.' From this definition, it is clear that the proposed changes to medical device reimbursement will affect a wide range of products, including, but not limited to, therapeutic, diagnostic, and implantable devices.
Overview of the current regulations on reimbursement of medical devices
Similar to other European countries, Poland operates a public health insurance system. As of the end of Q1 2017, the national third-party payer (the National Health Fund [NHF], Polish: Narodowy Fundusz Zdrowia) provided healthcare to over 35.23 million people, including the insured, that is, those paying health insurance contributions (25.8 million), and their family members (8.04 million), as well as people eligible for publicly funded healthcare for other reasons (1.39 million) [7]. Care is free at the point of delivery for those covered by the NHF, although some out-of-pocket payments exist for drugs and medical devices issued in the outpatient sector.
At present, the NHF reimburses three broad categories of medical devices [1]. The first category comprises all types of devices used in the inpatient setting, which are supplied to the patient free of charge (in line with the 2004 Publicly Funded Healthcare Services Act), as are drugs and special nutritional products¹ administered to hospitalised patients [1,8]. This universal full reimbursement includes both devices used within a procedure (e.g., stent placement) and included in its reimbursement, and those used outside medical procedures (e.g., continence pads). Public hospitals are required to conduct tenders for their supplies in line with the regulations on public procurement; price is generally the main selection criterion, although other criteria may also apply, such as quality, delivery timelines, or payment schedule [9]. Tenders usually take place at individual hospital level. Group tenders are not as common in Poland as they are in many other countries, although public hospital managers have been growing more in favour of them lately [10]. Local governments may also support and facilitate group tenders among public hospitals in their area [10]. However, in the private sector, group tenders are commonplace; a single tender may cover supplies for a whole network of private hospitals and outpatient clinics [10]. The remaining two categories encompass devices used in the outpatient setting. Some devices used in the outpatient setting are included on the reimbursement list for drugs [1]; these are available in pharmacies [11] and are generally simple devices that do not require personalisation for each patient, such as dressings and test strips for glucose monitoring. The pricing and reimbursement of this device group is regulated in the same manner as that of drugs [1]. Briefly, that means prices of reimbursed products are fixed and a reimbursement limit is set, defining the maximum amount that can be reimbursed [5]. Varying rates of co-pay apply (0%, a small fixed co-pay, 30%, and 50%) [5]. Thus, for products priced below the reimbursement limit, patients only cover the co-pay (if applicable), while for products priced above the reimbursement limit, patients pay the difference between that limit and the actual price, in addition to the co-pay [5,12]. Further details of the reimbursement regulations are available in the literature [13]. Finally, devices such as prostheses, infusion sets for insulin pumps, glasses, and hearing aids, among others, require a special prescription from an appropriate specialist, which has to be approved by the NHF regional office² before the device can be reimbursed [1,14,15]. For this device category, the legislation defines the indications in which the product may be reimbursed, the reimbursement limit (a cap on the reimbursed amount; if the patient chooses a more expensive product, they will need to cover the difference between the limit and the actual price out of pocket, in addition to any co-pay), the applicable patient co-pay, how often a replacement device can be reimbursed, and a separate reimbursement limit for any necessary repairs of the device [15]. The fixed prices and margins that affect drugs and simple devices sold in pharmacies do not apply to this device group, which is freely priced, often leading to excess pricing of reimbursed devices [1].
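A minimal sketch of this price-limit arithmetic (numbers are hypothetical; capping the percentage co-pay at the product price is an assumption for edge cases the summary above does not spell out):

```python
def out_of_pocket(price, reimbursement_limit, copay_rate):
    """Patient cost for a reimbursed product under a reimbursement-limit scheme.

    copay_rate: e.g. 0.0, 0.3 or 0.5 (a small fixed co-pay would be handled separately).
    """
    copay = min(price, copay_rate * reimbursement_limit)
    above_limit = max(0.0, price - reimbursement_limit)
    return copay + above_limit

print(out_of_pocket(price=80.0, reimbursement_limit=100.0, copay_rate=0.3))   # 30.0
print(out_of_pocket(price=130.0, reimbursement_limit=100.0, copay_rate=0.3))  # 60.0
```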
It is worth noting that, in terms of medical device supply, the boundary between inpatient and outpatient care is not clear-cut. The first category of medical devices comprises all types of devices that are supplied to patients admitted to hospitals and other providers of inpatient care (e.g., psychiatric hospitals, hospices, rehab clinics, etc.) [8,16]. In addition, some devices used in ambulatory/outpatient care (routine or emergency) will also fall into this category if they are necessary for care provision, including devices used for treatment, care, diagnostic and rehabilitation purposes [8,16]. For instance, a patient having a blood test funded by the NHF will not have to pay for the syringes, swabs and storage vials used, even if the test is conducted at their GP clinic or another outpatient clinic (e.g., a diabetes care centre). Similarly, a patient attending an accident and emergency department with a broken leg will not be required to pay for the cast (as having the leg put in a cast would fall under emergency care), but may subsequently be required to purchase an orthosis, which would fall into the third category of devices, i.e., those issued based on a special prescription.
Proposed changes to medical device reimbursement
In order to improve access and optimise NHF spending on medical devices, the Ministry of Health (MoH) proposed a number of changes to medical device reimbursement [1]. The project proposes to combine the three different reimbursement categories outlined above using a single approach, whereby the reimbursement of all medical devices would be aligned with that of drugs [1]. Manufacturers will need to submit a pricing and reimbursement application, accompanied by a device quality analysis performed by the Office for Registration of Medicinal Products, Medical Devices and Biocidal Products [1]. Novel devices, for which no equivalent device is available, will be subject to health technology assessment (HTA) and risk-sharing agreements, analogous to those applying to pharmaceuticals [1]. Under this new regulation, following a reimbursement application from the manufacturer, the MoH will approve reimbursement of the device and assign a maximum manufacturer price,³ taking into account the outcome of pricing negotiations with the Economic Commission; devices will also be subject to fixed wholesale and retail margins, and clustered into reference price groups [1]. The MoH will define reference price groups for simple devices sold in pharmacies (the second group of devices described above when discussing current regulations) based on reimbursement in the same indication and similar effectiveness [1,5]. For more complex devices issued based on special prescription or used within procedures (groups 1 and 3 above), similar mechanism of action and technical features, adherence to quality standards, and cost-effectiveness will also be taken into account when defining reference price groups [1]. A reimbursement limit will be set for each group and, in the case of devices used in the outpatient setting, the amount of co-pay (0, 10, 30 or 50% of the reimbursement limit for devices issued based on special prescription, with no planned changes to the current co-pay levels for simple devices available in pharmacies) will be calculated, depending on patient, disease, and device characteristics [1]. In addition to the co-pay, patients will cover any difference between the product price and the reimbursement limit out of pocket [1].
¹ According to the current regulations (the 2011 Reimbursement Act and the 2006 Act on Food and Nutritional Safety [available at: http://isap.sejm.gov.pl/Download?id=WDU20061711225&type=3]), these include infant formulas and dietary products used for special medical purposes (e.g., protein substitutes used in phenylketonuria). The proposed major amendment to the Reimbursement Act aligns the definition of these special nutritional products (termed 'foodstuffs with special nutritional use') with the EU definition of foods for special medical purposes, as per article 2g of Regulation No. 609/2013 of the European Parliament and of the Council of June 2013 [available at: http://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32013R0609&from=EN]. This change is unlikely to have a substantial effect on the types of special nutritional products reimbursed.
Importantly, devices used as part of a procedure, which are currently included in a bundled payment with the procedure itself, will go through their own, separate pricing and reimbursement process [1]. Patients undergoing a procedure will be able to choose from a range of appropriate devices, opting for a product priced within the limit (which will be the default option, guaranteed to all those insured) or a more expensive device, in which case patients will cover the difference between its price and the reimbursement limit [1]. Patients may also choose an appropriate product that is not reimbursed, covering its full cost [1].
The MoH is hoping that the proposed changes will make public spending on medical devices both more transparent and more efficient [1]. The changes will be introduced gradually to ensure continuity of supply. To enable manufacturers to smoothly transfer their currently reimbursed devices to the new system, the MoH will allow them to do so on preferential terms. When submitting a reimbursement application under the new rules for a device that is already reimbursed under current regulations, manufacturers will not have to pay the application fee or supply the analyses of clinical and pharmacoeconomic data that would otherwise have to be provided under the new regulations [1]. The transition to the new system will be regulated through decrees [1]. The MoH will specify groups of devices that may be issued a reimbursement decision, based on a recommendation from the Polish HTA agency, AOTMiT, which will assess the necessity of reimbursing such a group, its precise scope, and the indications/procedures that the devices are used in [1]. Reimbursement decisions will be issued on request of the manufacturer, but such requests may only be filed for device types covered in the decrees [1]. Continence pads may become one of the first medical device groups to fall under the new regulations [17].
Possible implications of the proposed changes
The attempt to apply a uniform set of rules to the diverse market of medical devices has sparked considerable criticism [17,18]. Manufacturers have raised concerns about the fact that many types of devices, for instance leg prostheses or wheelchairs, vary considerably in their properties and therefore in price [19]. Although devices will be assigned a reference price group based not only on their target indication and mechanism of action, but also on their effectiveness, technology used, adherence to applicable quality standards, and cost-effectiveness, for some devices it may prove difficult to cluster the myriad different types into reference price groups. The reason for this is that devices are usually selected to suit individual patients and their needs (some may even be personalised, e.g., prostheses) and are therefore not as easily comparable as pharmaceuticals [19,20], where generics containing the same active substance may be used interchangeably.
³ Article 12 of the Reimbursement Act specifies 13 factors that affect the decision on reimbursement and the maximum manufacturer price. Among others, these factors include the opinion of the Economic Commission (which negotiates prices on behalf of the MoH), the Polish HTA Agency (AOTMiT) recommendation (if applicable), the importance of the condition, clinical efficacy and effectiveness, safety, budget impact, and cost-effectiveness. The proposed amendment on reimbursement of medical devices leaves these criteria largely unchanged, but adds the opinion of the Office for Registration of Medicinal Products, Medical Devices and Biocidal Products on the quality of the product as a factor influencing the maximum manufacturer price.
Furthermore, the proposed application fees might be a concern for some, especially smaller, companies [18,21]. Although the amendment itself does not specify the fee amount, a draft decree accompanying the latest amendment version suggests a basic application fee of PLN3,885 (approximately €900) [22,23]. When applying for reimbursement of different device variants (models, versions, sizes, etc.), the fees for consecutive reimbursement applications may be substantially lower, at 10-70% of the basic fee, depending on the nature of the differences between variants (e.g., a different model will incur a 70% fee, compared with only 10% for a different size) [23]. Nonetheless, the fees may add up to a substantial amount when a manufacturer applies, for example, for reimbursement of a family of related devices.
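To make the scale of these fees concrete, here is an illustrative calculation for a hypothetical device family (one base device, one additional model at 70% of the basic fee, and three additional sizes at 10% each); the family composition is invented, and only the fee levels come from the draft decree:

```python
BASIC_FEE_PLN = 3885.0  # basic application fee from the draft decree (~EUR 900)

# Fractions of the basic fee per application: base device, extra model, three sizes.
applications = [1.0, 0.7, 0.1, 0.1, 0.1]
total_pln = sum(f * BASIC_FEE_PLN for f in applications)
print(total_pln)  # 7770.0 PLN, i.e. roughly EUR 1,800 for this small family
```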
Finally, introducing out-of-pocket payments for devices used within a procedure gives patients the opportunity to opt for a higher-quality product without having to pay for the entire procedure to be conducted in the private sector [21]. However, the proposed amendment requires healthcare providers to give patients access to only one appropriate device priced within the reimbursement limit, although they can offer more options [1]. As such, device choice for patients who cannot afford out-of-pocket payments may be limited.
More broadly, aligning reimbursement of all groups of medical devices with that of drugs means relying on out-of-pocket payments for products priced above the reimbursement limit. As some patients may be unable to afford out-of-pocket payments for more expensive products [18,20,21], care should be taken to provide patients with a reasonable range of high-quality, affordable devices priced within, or only slightly above, the reimbursement limit.
For manufacturers wanting to access the Polish medical devices market, the proposed changes are indeed revolutionary. Following their introduction, the reimbursement landscape for most devices, with the exception of those already included on the drug reimbursement list, will change completely. While the exact shape of the new regulations remains to be seen, their overall direction is clear, likely bringing substantial challenges, especially for smaller device manufacturers, who will now have to face additional bureaucracy associated with reimbursement applications and, in the case of novel devices, HTA.
Changes to reimbursement policy in general
Orphan and ultra-orphan drugs
Current regulations apply uniform reimbursement rules to all drugs, regardless of the size of their target patient population. Thus, orphan and ultra-orphan products (targeting diseases that affect <5 in 10,000 people [24], and ≤1 in 50,000 people [2], respectively) are required to demonstrate cost-effectiveness at the current willingness-to-pay (WTP) threshold of three times the gross domestic product (GDP) per capita per quality-adjusted life year (QALY) gained [5] (in 2015, GDP per capita was PLN46,764, or approximately €11,000 [22,25], which translates into a WTP threshold of approximately PLN140,000 or €33,000 per QALY). This threshold appears rather stringent, given that reimbursement decisions based on clinical value and measures of innovation, rather than a formal cost-effectiveness analysis, may lead to broader coverage for orphan drugs [26], thus improving patient access. The proposed changes exempt ultra-orphan drugs from the aforementioned cost-effectiveness requirements and focus reimbursement decisions in ultra-orphan indications on factors that may justify the drug's price, such as clinical effectiveness [27]. However, despite earlier suggestions from MoH officials that the reimbursement changes would apply to both orphan and ultra-orphan products [28], the proposed amendment focuses solely on ultra-rare diseases [29]. A commentary from one of the orphan disease patient organisations interpreted this narrowing of the patient population in focus, from orphan to ultra-orphan indications only, as an attempt to control spending, and criticised the lack of proposed means for increasing the reimbursement level of orphan drugs, given that few orphan products are reimbursed in Poland [29]. At the same time, however, the patient organisation representative noted that the proposed amendment is the first to introduce special provisions for orphan indications, and saw this as a positive change from orphan diseases not previously being considered as a separate entity [29].
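The threshold arithmetic quoted in parentheses above can be reproduced directly (the PLN/EUR rate is an assumption chosen to match the cited approximations):

```python
gdp_per_capita_pln = 46_764               # 2015 GDP per capita
wtp_threshold_pln = 3 * gdp_per_capita_pln
print(wtp_threshold_pln)                  # 140,292 PLN per QALY (~PLN 140,000)

pln_per_eur = 4.25                        # assumed exchange rate
print(wtp_threshold_pln / pln_per_eur)    # ~33,000 EUR per QALY
```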
Reimbursement of vaccines
Poland has operated a national vaccination programme since the 1950s [30]. The programme is revised annually and an updated version is published every year by the Chief Sanitary Inspectorate [30]. The National Immunization Technical Advisory Group (NITAG), including the Sanitary-Epidemiology Advisory Board and the Paediatric Group of Experts on Immunisation Program, advises the MoH and recommends vaccines for inclusion in the programme [30,31]. NITAG members include clinicians, epidemiologists, immunologists, paediatricians, public health experts, vaccinology experts and virologists/microbiologists, who may be additionally supported by external experts from the National Institute of Public Health-National Institute of Hygiene (NIPH-NIH), Polish Medical Societies, as well as institutes and universities [31]. The decision to recommend a vaccine for inclusion in the national vaccination programme is evidence-based, factoring in disease burden, efficacy, effectiveness and safety of the vaccine, pharmacoeconomic analysis [31], epidemiological situation in Poland and neighbouring countries, and relevant vaccination policies in other countries [30].
At present, all mandatory vaccines included in the national immunisation programme are fully reimbursed [31]. Other vaccines may be recommended (e.g., flu and human papillomavirus [HPV] vaccines), but are not reimbursed by the NHF [32], which means that, unless they are paid for from other public funds (e.g., the healthcare budgets of local governments), patients need to cover the entire cost of the vaccination out of pocket [31]. The amendment proposes to introduce partial reimbursement for these vaccines, with patient co-pay likely to be set at 30 or 50% [33]. The proposed reimbursement mechanism is similar to that for drugs, requiring an application from the manufacturer, followed by an HTA and pricing negotiations, before the MoH issues a decision [27]. The MoH is hoping that, by reducing patient costs, the changes will improve access to vaccines that are currently not mandatory according to the national programme [27].
Compassionate use of medicines
At present, the Polish regulations do not include any guidance on the compassionate use of drugs, severely limiting patient access to investigational treatments that show promise in clinical trials but have not yet been granted a marketing authorisation [27]. The amendment introduces the concept of compassionate use in Poland, a change that has been well received by the industry [34]. Eligible patients will be those suffering from a chronic, serious or life-threatening illness, for whom there are no effective approved treatments [2].
The product manufacturer will have to apply for MoH approval of a compassionate use programme; among other items, the application form should include a description of the target patient group together with an estimate of its size, a description of the disease state, with information on the lack of approved products that could be used in this setting, and criteria for patient inclusion in and exclusion from the programme [2]. The manufacturer will have a number of obligations linked to the programme; among others, they will be required to monitor patient safety, sign contracts with healthcare providers that would administer the product (if applicable), and assess patients' medical records to verify eligibility for the programme [2]. Thus, with the introduction of compassionate use programmes, patients who need them most will gain access to novel, investigational therapies, providing additional treatment options where no other therapies exist or are effective. The reimbursement mechanism for compassionate use programmes is, however, unclear at present.
Off-label use of drugs
Currently, only licensed indications of pharmaceuticals are reimbursed in the outpatient setting (i.e., for prescription-only drugs, devices and special nutritional products sold in pharmacies), although for some drugs additional off-label indications specified by the MoH may also be reimbursed [5,35]. These off-label indications are established through discussions with the relevant Chief Medical Officers⁴ and AOTMiT [35], allowing the reimbursed indications to be extended beyond those specified in the summary of product characteristics (SPC). For each drug dosage and pharmaceutical form, the reimbursed indications (on- and off-label) and prices are available on the reimbursement lists published by the MoH [36], as are the co-pay and reimbursement limit that apply to each indication (see, for instance, [11] for an example reimbursement list). The amendment proposes extending the 'default' reimbursement scenario to off-label indications that are well grounded in clinical practice, and for which the drug or device is known to be effective and safe [27], which will simplify the reimbursement of established off-label indications. The option to limit reimbursement to a specific condition will, however, remain unchanged, thus separating reimbursement from the licensed indications specified in the SPC [27]. [Footnote 4: In Poland, a Chief Medical Officer is the most senior advisor to the government on health-related matters. However, rather than primarily focusing on public health, a Chief Medical Officer is appointed in each medical discipline (e.g., neurology, epidemiology, intensive care, public health, etc.).]
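To illustrate the structure of such a reimbursement-list entry (indication-level reimbursement with a per-indication co-pay and limit), a hedged Python sketch follows; all field names and the cost formula are illustrative simplifications of the mechanism described above, not the MoH's actual schema:

from dataclasses import dataclass, field

@dataclass
class ReimbursedIndication:
    # One (on- or off-label) indication of a given drug dosage and form.
    description: str
    on_label: bool               # True if specified in the SPC
    copay_share: float           # patient co-payment share, e.g. 0.3 or 0.5
    reimbursement_limit: float   # maximum amount refunded, in PLN

@dataclass
class ListedProduct:
    name: str
    dosage: str
    pharmaceutical_form: str
    price_pln: float
    indications: list[ReimbursedIndication] = field(default_factory=list)

    def patient_cost(self, ind: ReimbursedIndication) -> float:
        # Simplification: the payer refunds (1 - copay_share) of the price,
        # capped by the reimbursement limit; the patient pays the rest.
        refunded = min(self.price_pln, ind.reimbursement_limit) * (1 - ind.copay_share)
        return self.price_pln - refunded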
Focus on innovative drugs, research and development
In line with the 2011 Reimbursement Act [5], the MoH can at present, when making a reimbursement decision, take into account the manufacturers' investments into research and development (R&D) and public health, both in Poland and elsewhere in the European Union (EU) or the European Free Trade Association [27]. However, the current regulations do little to facilitate evaluating R&D activities and their impact on public health in practice [27]. The proposed amendment addresses this issue by linking innovation to pricing and reimbursement decisions. An additional, dedicated reimbursement budget will provide funding for the reimbursement of innovative products (i.e., products for which no equivalent product is reimbursed in the same indication, and those targeting ultra-orphan diseases [2]), especially those developed by manufacturers whose R&D activities have a considerable impact on the Polish economy [27]. This impact will be assessed by the Ministry for Economic Development; their opinion will affect the decision on spending of the investment-based funds and will be taken into account during pricing negotiations [27]. In order for the Ministry for Economic Development to issue an opinion, the manufacturer will have to provide information on revenue, investments made, production volume, net profit or loss, income tax paid, volume of goods and services imported into and exported out of Poland, spending on R&D activities in Poland, and expenses related to employees and the amount of social and health insurance contributions paid [2]. Details of the innovation-based funding mechanisms will be defined in MoH decrees [2], drafts of which are currently not publicly available. At a conference with the Polish pharmaceutical industry, an MoH representative mentioned that the aforementioned opinion from the Ministry for Economic Development will also be taken into account when calculating the incremental cost-effectiveness ratio for innovative products [37]. Furthermore, novel risk-sharing instruments will be introduced, with provision of innovation-based funds contingent on the manufacturer performing R&D activities agreed with the MoH [27]. To control innovation-based spending, the risk-sharing instruments will also state that the manufacturer has to pay back any reimbursement exceeding a specified limit [27].
The MoH is hoping that the aforementioned solutions will promote industry investments into public health through preferential treatment of those manufacturers whose R&D activities are located in Poland [27]. The industry considered these changes to be positive; however, they recognised that little detail on innovation-based reimbursement has been provided thus far [34]. Indeed, no specific criteria for assessing R&D activity have been officially published as of March 2017, and the industry suggestions are also vague, recommending assessment criteria that will promote science- and technology-based economic development [34].
Risk-sharing instruments and payback applied to health products
The proposed amendment changes the way in which the reimbursement budget is formed to include income from risk-sharing instruments and payback [27]. This can be seen as an attempt to 'recycle' public funds spent on reimbursement by re-investing them into the coverage of drugs and medical devices, and has been, unsurprisingly, considered a step in the right direction by the industry [34].
The amendment includes legislation that substantially simplifies payback [27]. At present, the amount of payback is determined through a complex mechanism, triggered by the NHF exceeding its total reimbursement budget [5]. Payback only applies to those products for which the spending on reimbursement has increased since the preceding year, and the amount paid back depends on the reference price group that the product is in [5]. The amendment proposes that payback should be triggered only by the NHF exceeding its budget on outpatient drug reimbursement, without taking into account the total reimbursement budget, or referring to the product's reference price group [27].
Current exceptions from statutory payback will continue to apply [27]. Thus, products for which risk-sharing instruments are in place (that is, mostly novel, expensive treatments) will be exempt from payback [27]. Changes to the payback mechanism are criticised by domestic manufacturers, who mostly produce relatively cheap generics and therefore do not see themselves as really contributing to the NHF exceeding its reimbursement budget [38].
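A toy Python illustration of the simplified trigger may help; note that the pro-rata apportionment rule used here is our assumption, since the amendment does not fix how the overspend is split among manufacturers:

def payback(outpatient_budget: float, spend_by_product: dict[str, float],
            exempt: set[str]) -> dict[str, float]:
    # Proposed simplified payback: triggered only when outpatient drug
    # spending exceeds its budget; products covered by risk-sharing
    # instruments are exempt. Pro-rata apportionment is an assumption.
    overspend = max(0.0, sum(spend_by_product.values()) - outpatient_budget)
    liable = {p: s for p, s in spend_by_product.items() if p not in exempt}
    total_liable = sum(liable.values())
    if overspend == 0.0 or total_liable == 0.0:
        return {p: 0.0 for p in spend_by_product}
    return {p: overspend * liable.get(p, 0.0) / total_liable
            for p in spend_by_product}

# Example: budget of 100, total spending of 110, product C exempt.
print(payback(100.0, {"A": 60.0, "B": 30.0, "C": 20.0}, exempt={"C"}))
# -> {'A': 6.67, 'B': 3.33, 'C': 0.0} (approximately)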
Parallel import of pharmaceuticals
Drug prices in Poland are among the lowest in Europe. In 2015, a report that compared the prices of innovative oncology drugs in Poland and 12 other European countries (the Czech Republic, France, Germany, the Netherlands, Slovakia, Spain, the UK, Italy, Austria, Hungary, Romania and Switzerland) showed that in Poland the prices of 15 out of 22 drugs were lower than the average price in all the European countries analysed; furthermore, only two drugs exceeded the average price in the 13 countries by more than 10% [39]. Export margins are, at present, not specified in the Reimbursement Act [5], and the fixed wholesale margins are not applied to drugs intended for export, leaving wholesalers free to set their own margins [40]. The combination of free wholesale margins [41] and manufacturer prices often being substantially lower than in Western Europe [42] means that pharmaceuticals (especially cardiovascular drugs, anticoagulants and drugs used to treat asthma) are commonly exported, often leading to shortages of those drugs in Polish pharmacies [42,43]. The amendment proposes that the 5% wholesale margin should apply to all products that are reimbursed, including those intended for export [2]. Thus, the amendment establishes a fixed margin for exporters in an attempt to make pharmaceutical export less profitable and, consequently, limit it [3,34].
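In rough numbers (the manufacturer price and the free margin below are invented for illustration; only the 5% statutory figure comes from the text):

manufacturer_price_pln = 100.0     # hypothetical net price
free_export_margin = 0.25          # invented figure for illustration

profit_today = manufacturer_price_pln * free_export_margin   # 25.0 per unit
profit_amended = manufacturer_price_pln * 0.05                # 5.0 per unit
print(profit_today, profit_amended)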
Other aspects of the drug pricing and reimbursement process
The proposed amendment also includes new regulations on a number of other aspects of pricing and reimbursement in Poland. Among other changes, the amendment proposes that the descriptions of drug programmes (specifying the conditions under which costly, novel and otherwise out-of-pocket therapies are fully reimbursed for patients meeting the inclusion criteria) are separated from the reimbursement decisions for products included in those programmes [27]. At the moment, the drug programmes and the relevant reimbursement decisions are linked, so that implementing any changes to the programme itself requires the permission of all manufacturers whose products are included in it [27]. The MoH intends to separate the two [27], which will increase the flexibility and efficiency of defining drug programmes, a proposal well received by the industry [34].
Another proposed change relates to reimbursement decisions themselves, which are currently valid for two, three or five years, depending on how established the drug is in the given indication [5]. This time period is proposed to be extended [27]; at an industry conference, an MoH representative mentioned that the decisions will be valid for up to five years, with the duration determined individually for each product during pricing negotiations [37], rather than being fixed depending on the time that has passed since the product's first positive reimbursement decision. However, the amendment also introduces another substantial change, as the MoH will be able to initiate a review of reimbursement conditions before the expiry of an existing reimbursement decision (e.g., within two years, despite a decision being valid for five years) [27]. Such a review at the discretion of the MoH is not possible under the current regulations [27], and this change is likely to raise concerns amongst the pharmaceutical industry [3,34].
Further, the time period between updates to the list of reimbursed products will also be extended, so that the list is published quarterly [27], rather than every other month, as is currently the case. Finally, the MoH will publish a list of all reimbursement applications received, together with their progress status, in a bid to increase the transparency of the reimbursement process [27].
Conclusion
Similarly to other healthcare systems in Europe and around the world, the Polish healthcare system faces substantial cost constraints. Patient access to novel medications is often severely restricted and the reimbursement process itself is relatively non-transparent. The MoH is hoping that introducing the proposed changes will resolve some of these issues, optimising the reimbursement process, improving patient access and ensuring public funds are spent more rationally. Indeed, many of the new regulations (such as the introduction of compassionate use schemes, relaxing cost-effectiveness requirements for ultra-orphan drugs, improving the flexibility of defining drug programmes for innovative therapies, reducing the export of reimbursed products, and creating a dedicated innovation-based budget) can be considered important steps along the way towards a better healthcare system attuned to patients' needs. However, the effective practical implementation of these changes is just as important as the ideas they cover, and that remains largely uncertain, as the MoH has not yet presented many of the draft decrees that would accompany the Reimbursement Act and specify the details of its implementation. Furthermore, issues such as the reimbursement of drugs for orphan diseases are not addressed in the proposed amendments, and some of the new regulations (e.g., the assessment of manufacturers' R&D activities) appear to require further refinement and consultations with interested stakeholders.
Indeed, while many of the proposed changes have been well received by the pharmaceutical industry, some, including those related to medical devices, have raised considerable criticism. With the reimbursement of medical devices aligned with that of pharmaceuticals, the changes to the reimbursement system are likely to result in closely similar regulations being applied across the market, for pharmaceuticals, medical devices and special nutritional products alike. In the long run, this may simplify access to the Polish market by making the pricing and reimbursement regulations more straightforward to interpret. However, simplifying the regulations alone seems insufficient, despite the apparent high hopes of the MoH regarding the proposed amendment. Some of the proposed regulations may need to be specified in more detail or somewhat revised, and there is still room for dialogue between the MoH, manufacturers and patients, given that both amendments are at a relatively early draft stage. It will be interesting to see which of the proposed changes are, in fact, implemented, and what exact shape they take.
Key highlights
• The proposed changes introduce uniform reimbursement mechanisms for all medical devices, replacing the three distinct reimbursement groups currently in place. The manufacturers applying for reimbursement of their devices will have to face a new, complex procedure, which will also involve HTA of novel devices.
• Rather than being bundled in a single payment, procedures utilising devices (e.g., stent placement) will be reimbursed separately from the device itself, which will allow patients to choose between appropriate devices, opting to make a co-payment if applicable. Thus, patients will no longer have to pay for the procedure to be conducted in the private sector if they wish to use a non-reimbursed device; they will only cover the costs of the device itself.
• The proposed changes relax reimbursement criteria for ultra-orphan drugs, focusing them on price justification rather than economic analysis, which may improve access to these medications.
• Hoping to improve patient access, the amendment introduces partial reimbursement for vaccines that are recommended, but not mandatory. This may lead to wider uptake of these vaccines in Poland.
• The amendment introduces the concept of compassionate use in Poland, allowing patients who have no other treatment options to access innovative products.
• A dedicated innovation-based budget will be formed to fund products for which no equivalent is available or those used in ultra-orphan indications, which may encourage R&D activities within Poland.
• The combination of low prices and unregulated export margins often leads to drug shortages in Polish pharmacies; in response, the amendment proposes a fixed wholesale margin for drugs intended for export, which aims to limit exporters' profits and consequently reduce pharmaceutical export volume.
"year": 2017,
"sha1": "8900d41111cf0773d4353ddaa720170d61d9ed4b",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/20016689.2017.1381544?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0c8e6490e37dd095a30d71d4b8cb5409a03a6289",
"s2fieldsofstudy": [
"Business",
"Economics",
"Medicine"
],
"extfieldsofstudy": [
"Business",
"Medicine"
]
} |
Life Cycle Assessment of a Small Hydropower Plant in the Brazilian Amazon
Brazil, like the rest of the world, faces a major challenge in the electricity sector: meeting growing demand with energy production from renewable sources. Many hydroelectric plants are being implemented, especially in the northern region of Brazil, but their environmental impacts are still largely unknown. Energy produced by hydropower plants has been considered totally renewable and clean, but more recent studies point to the existence of emissions from hydroelectric plants, especially when a life cycle approach is considered. Thus, the objective of this study is to investigate the environmental impacts of the construction, operation and decommissioning of a hydroelectric power station based on life cycle assessment. The main focus is the Curuá-Una hydropower plant, located in the Amazon forest in northern Brazil, in the Santarém municipality (Pará state).
INTRODUCTION
A major challenge for the energy sector in the coming years is to produce clean energy at low cost and with minimal emissions, with sufficient capacity to meet the growth in demand. Studies by the Energy Research Office (Empresa de Pesquisa Energética, EPE) forecast world economic growth of 3.7% per year for the next 10 years, which consequently leads to an increase in electricity consumption across sectors such as industry, services, households, etc. [1].
Brazil's electricity demand in 2014 was 463.1 TWh, and Brazilian energy planning has considered scenarios with electricity consumption increasing by 3.9% per year until 2024 [2]. In addition, research from the International Energy Agency (IEA) identifies the electricity sector as responsible for 40% of Greenhouse Gas (GHG) emissions, making it an important factor in environmental impact and, consequently, climate change [3].
Discussions on the subject as early as the 1970s (World Climate Conference) pointed to the need for action on climate change. In 1997, the Kyoto Protocol established the commitment of many countries to reduce GHG emissions. Thus, the search for ways to replace products and processes with significant GHG emissions by more sustainable ones has intensified, including energy production from renewable sources.
Hydropower Plants (HPPs) generate most of the electricity produced in Brazil (65%), and more projects are planned in order to increase production. The hydroelectric potential of the northern region alone is estimated at 100,370 MW (40.6% of all of Brazil); examples are the Belo Monte HPP on the Xingu River (to enter into operation in 2016, with 4,500 MW) and other planned plants [4]. Thus, the northern region is positioned as the major power generator in the country, with potential for development given its strategic location. However, this potential should be further investigated because, although several studies describe hydropower as a renewable source with low environmental impact [3], some research shows that environmental impacts are linked to these plants, especially to dams located in rainforest areas [5,6].
Knowing the environmental impacts of different product systems is an essential requirement for decision-making, and this can be achieved with Life Cycle Assessment (LCA). LCA is an environmental management tool defined as "a process to evaluate the environmental burdens associated with a product, process, or activity by identifying and quantifying energy and materials used and wastes released to the environment, and to identify opportunities to effect environmental improvements" [7]. LCA enables identification of the most significant impacts and of the stages to be observed for improvement, avoiding damage being shifted from one stage to another, from one environmental problem to another, or from one region to another, within a systematic and holistic approach [8]. According to ISO 14040, an LCA shall include four steps: definition of goal and scope, inventory analysis, impact assessment and interpretation of results [9].
Currently, LCA is being used for decision-making in choosing the best option in many contexts, such as chemical engineering [10], the use of disposable packaging [11], transporting products [12], and, frequently, energy production. For example, Yue et al. [13] used LCA to study the sustainable design of a potential hydrocarbon biofuel supply chain network. Guinée et al. [14] reviewed the history of LCA and discussed current and future developments, and Finkbeiner [15] used LCA as a basis for environmental declarations and product carbon footprints. Queiroz et al. [16] used LCA to analyze the energy balance of biodiesel production from palm oil in the Amazon. Dones et al. [17] implemented LCA to cover all the main energy chains associated with installed electricity and heating technologies, with a focus on Switzerland and Western Europe. Matuszewska [18] used LCA to identify the optimal configuration for geothermal systems. Garcia-Valverde et al. [19], Desideri et al. [20] and Laleman et al. [21] applied LCA to estimate the environmental impact of photovoltaic systems. Brizmohun et al. [22] implemented LCA to analyze electricity generation in Mauritius. Many authors have analyzed HPP life cycles because of the importance of hydropower in the world scenario [23][24][25].
LCA is used in this work to survey the environmental impacts of energy production at a small HPP in the Brazilian Amazon. The plant analyzed is the Curuá-Una HPP (Santarém, Pará state, northern Brazil). The data are analyzed in the openLCA 1.4.2 software (www.openlca.org) with the Ecoinvent 3.1 dataset (www.ecoinvent.org), producing values linked to a functional unit and thus allowing comparison with other plants. A hydroelectric LCA should be performed including all phases. The results obtained in the present study point out the main contributors to the environmental impacts of the Curuá-Una HPP and show that the construction phase is mainly responsible for the burdens. Some studies cover only the operation phase because, generally, it is very difficult to find data regarding the construction of the plants. The inventory of the Curuá-Una HPP was compiled through interviews with experts who participated in the construction work, as well as from technical reports.
The methodology used in this study follows the ISO standards, but there are also other references for this work: LCA studies conducted by other researchers, such as Turconi et al. [26], who developed a framework for environmental analysis and modeled scenarios for low carbon emissions; Ribeiro [24], who worked to create Brazilian LCA databases using data from the Itaipu HPP; and Pang et al. [23], who developed an LCA of a small HPP in China.
The main contributions of the present work are: a Curuá-Una HPP inventory including equipment data and infrastructure materials; an LCA of the Curuá-Una HPP covering the construction, operation and decommissioning phases; an evaluation of the main impact categories for these phases, including Global Warming (GWP), Abiotic Depletion (ADP), Acidification (AP), Freshwater Aquatic Ecotoxicity (FAETP) and Human Toxicity (HTP); identification of the inputs that are the major contributors to environmental impact; and a sensitivity analysis of a different scenario, with a lower reservoir level. The remainder of this article is organized as follows. First, the energy production scenario considering renewable energy resources in Brazil is presented; then the materials and methods are described. In the last section, results are discussed and conclusions are presented.
ENERGY PRODUCTION SCENARIO CONSIDERING RENEWABLE ENERGY RESOURCES IN BRAZIL
According to REN21 [27], 77.9% of the electricity produced in the world comes from non-renewable resources (fossil fuels and nuclear) and only 22.1% from renewable sources. The countries with the greatest capacity to produce energy from renewable sources are China, the USA, Brazil, Canada and Germany [28]. The Brazilian energy grid is considered an example for the world in the use of clean and renewable sources. According to the statistics, the installed capacity of Brazil's electric power matrix reached 133,914 MW in 2015, with 65.2% from hydropower, 13.8% from natural gas, 7.3% from biomass and 6.9% from oil products, complemented by other sources. With a size and characteristics that make it unique worldwide, the Brazilian electricity production and transmission system is strongly dominated by HPPs [4].
The HPP expansion takes advantage of the existing potential in the country, especially in the northern region, in the Legal Amazon. The International Amazon refers to the northern part of South America; the Brazilian part is called the Legal Amazon, covering an area of approximately 5,215,423 km². It represents 59% of the Brazilian territory, with a population of approximately 24 million people [29]. The region's main energy resource is hydropower. Tucuruí (8,370 MW) provides most of the electricity to the region, complemented by other plants such as Samuel (210 MW), Coracy Nunes (78 MW), Balbina (250 MW) and Curuá-Una (30.3 MW). Thermoelectric power is used in some situations to complement demand. The plants of Belo Monte (4,500 MW), São Luiz do Tapajós (6,133 MW), Santa Isabel (1,080 MW) and Jamanxim (802 MW), among others, are expected to come on stream in the coming years and will also be integrated into the National Integrated System (SIN) [30].
The Curuá-Una HPP is located in the region of the Santarém municipality. Santarém is a city in northern Brazil, in the western region of Pará state, at 2° 47' 22" south latitude and 54° 17' 30" west longitude, 807 km in a straight line from the city of Belém, the Pará state capital. Located on the right bank of the Tapajós River, at the confluence with the Amazon River, Santarém has a tropical climate, hot and wet. Its average annual temperature varies between 25 and 28 °C, with a relative humidity of 86%. Average annual rainfall is about 1,920 millimetres. The power supply to the city of Santarém is provided by a 138 kV extension of the SIN interconnection from the Tucuruí HPP, with a capacity of about 80 MW for this interconnection. The Curuá-Una HPP (30.3 MW) and the Santarém thermoelectric plant (10 MW) complement the current demand of 110 MW.
Although HPPs represent the largest potential for energy generation in Brazil, there are many restrictions on their deployment. Among these concerns are: the large area of flooded forest and the consequent elimination of flora and fauna [28], changes in the navigability of rivers [31], destruction of potential agricultural areas in order to form the reservoir [5], carbon dioxide and methane emissions [5], etc. In addition, the energy generated will not reach many remote locations (riverine and indigenous communities) because the plants will be linked directly to the SIN. The challenge is to find ways to harness this potential without causing environmental impacts, and to invest in studies on the use of other energy sources such as solar and wind.
Characterization of the Curuá-Una HPP
The Curuá-Una HPP is located 70 km SW of Santarém (2° 50' S, 54° 18' W) on the Curuá-Una River in Brazil's Amazonian state of Pará, and has been in operation since 1977. Most of the reservoir is in the Curuá-Una River valley (57.4%), but parts of it occupy the tributary valleys of the Moju (11.7%), Mojuí (4.4%) and Poraquê (3.2%) Rivers, plus several small streams (2.9%) [5]. The construction of the plant was an engineering challenge. It was the first work of its kind in Brazil on sandy terrain, requiring different technology from other works of this type [32]. The reservoir was filled from January to May 1977 and occupies 102 km² at its normal operating level of 68 m above mean sea level (see Table 1). Initially the plant was designed to meet the demand of the cities of Santarém and Aveiro. However, with population growth greater than expected, it became necessary to supply the extra demand through the Tucuruí system, installed on the Tocantins River. Until 1985, Curuá-Una had two turbines with a generating capacity of 20 MW. Nowadays, Curuá-Una has three turbines generating a total of 30.3 MW. A fourth turbine of 12.5 MW will be deployed by 2017 and will increase Curuá-Una's capacity to 42.8 MW. The calculation of production potential over 100 years considers the real capacity of the plant with 92.89% efficiency, according to the current management of the plant. Thus, the generating production is 18 MW for 8 years, 28 MW for 32 years and 39 MW for 60 years, for a total of 29,976,720 MWh in 100 years. All inputs of materials and supplies were considered for the production of this total energy. The Curuá-Una HPP has an energy density of approximately 0.29 MW/km².
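The 100-year production figure can be approximately reproduced from the quoted capacities; the Python sketch below is an approximation, and the small residual against the paper's 29,976,720 MWh presumably reflects the unrounded effective capacities and hours-per-year convention used by the authors:

HOURS_PER_YEAR = 8_760   # 365-day year; the authors' convention is not stated

# (effective capacity in MW, duration in years), as quoted in the text
phases = [(18, 8), (28, 32), (39, 60)]
total_mwh = sum(mw * years * HOURS_PER_YEAR for mw, years in phases)
print(f"{total_mwh:,} MWh over 100 years")   # -> 29,608,800 MWh (paper: 29,976,720)

# Energy density: installed capacity over reservoir area
print(f"{30.3 / 102:.3f} MW/km2")            # -> 0.297, close to the ~0.29 quoted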
Goal and scope definition
The goal of the study is to survey the life cycle environmental impacts of generation at the Curuá-Una HPP by using LCA. In the first stage of an LCA it is important to define the functional unit and the system boundary. Here, an amount of energy in MWh is used as the functional unit: all data in the study are related to this amount of energy produced, and the environmental impacts are expressed per 1 MWh. According to the technical planning for the Curuá-Una HPP, the lifetime of the plant is 100 years, for both the equipment and the dam infrastructure. The system boundary is often a subjective decision, because it depends on the scope and purpose of the study, and should be described in detail so that comparisons between studies can be made [27]. For this study, the LCA includes the stages of construction (infrastructure and equipment), operation and decommissioning (Figure 1). The transport of all material and equipment for the implementation of the plant is also included, considering that most of it was transported over great distances. Transmission and distribution through the electrical network are beyond the scope of this study.
Life Cycle Inventory (LCI)
In this phase, data collection and modelling of the system are done, and every input (raw material, fuel, water, energy, etc.) and every output (product, emissions, waste, etc.) is recorded. Data for this research include primary data, obtained through interviews with the technical manager of the plant, and secondary data, obtained from the literature, technical reports and the Ecoinvent 3.1 database in the openLCA 1.4.2 platform. The inputs were quantified with respect to the functional unit of 1 MWh and are listed in Table 2. Construction. Construction is the phase with a large consumption of materials and equipment for the infrastructure of the plant. Given the period when the plant was built, many data were calculated and weighted based on surveys conducted at other plants [24,28]. Cement, rock, sand, iron and steel make up the reinforced concrete used for the dam infrastructure. Although Curuá-Una is an earth-rock fill dam, concrete was used for the powerhouse, adduction channel, tailrace, etc. Steel and copper for the turbines, structural steel, timber, explosives, asphalt and diesel for electricity are included as inputs of the construction phase. Regarding land use, the flooded area and the infrastructure around the reservoir are considered as transformation area. The total plant area (4,000 km²) was included as occupation area.
Operation and maintenance. This phase of the HPP requires fewer resources, because it uses the flow of water to generate energy. However, to keep the plant in operation some inputs are needed, such as lubricating oil, fuel for support vehicles, electricity, etc. Curuá-Una's last maintenance was carried out in 2011, with repairs to ensure perfect operation of the equipment. As no heavy structure was replaced, inputs concerning the maintenance phase were not included, but the planned 2017 capacity increase of the Curuá-Una HPP, with the addition of one more turbine, has been included in the operation/maintenance phase.
Transportation. The Curuá-Una plant is located at a great distance from the places where the equipment and materials used were produced. Thus, all transport of equipment and materials for building the dam infrastructure was considered, including the weight of the equipment carried and the distance traveled. According to the technician responsible for the plant, some equipment such as turbines and generators was purchased from companies in São Paulo state; road transport was used to the city of Belém (2,830 km), with the goods then arriving in Santarém by river (876 km). The cement came from Venezuela to Manaus by road (2,237 km) and by ferry to Santarém (739 km). Equipment such as tractors, cranes, bucket trucks, etc., came from São Paulo by air transport. All material that arrived in Santarém then followed the road to the Curuá-Una plant (70 km).
Decommissioning. It is not known how an HPP ends its lifetime, but studies presume that the dams are not removed, but abandoned or replaced. Furthermore, it seems probable that the other parts of storage power stations are replaced by new plants at the end of their lifetime [28]. Thus, it was considered that the material used in the construction would remain on-site, with part of it used for recycling.
Some inputs presented in Table 2 were calculated from information on the plant itself, and others were calculated based on other work, as follows. Although no direct data on the consumption of explosives are available, an indirect calculation was performed: according to Ribeiro [24], it takes 0.4 kg of explosive per 1 m³ of excavated rock; therefore, for 127,600 m³ of rock, 51,040 kg of explosives are needed. The excavation volume was calculated according to Itaipu data [24], scaled in proportion to Curuá-Una. The timber was used only for concrete formwork, considering that for each m³ of concrete, 12 m² of timber was used. The quantities of steel in turbines, generators and transformers were obtained directly from the manufacturer. Since the turbine models were those commonly used at the time of construction (more than 40 years ago), these data, as well as the amount of copper, were approximated. The operation phase data were obtained directly from spreadsheets provided by the management of the Curuá-Una plant. The values in Table 2 refer to the total quantities (total input column) and to the ratio of the total amount of each input to the total energy produced in 100 years, in MWh (unit input column).
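The inventory normalisation and the explosives estimate reduce to simple proportions; as a Python sketch (only the explosives figures and the total energy come from the text):

TOTAL_ENERGY_MWH = 29_976_720   # 100-year production used by the authors

def unit_input(total_input: float) -> float:
    # 'Unit input' column of Table 2: total quantity per MWh produced.
    return total_input / TOTAL_ENERGY_MWH

# Indirect explosives estimate (Ribeiro [24]): 0.4 kg per m3 of excavated rock.
excavated_rock_m3 = 127_600
explosives_kg = 0.4 * excavated_rock_m3
print(explosives_kg)                 # -> 51040.0 kg, as in the text
print(unit_input(explosives_kg))     # -> ~0.0017 kg of explosives per MWh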
Life Cycle Impact Assessment (LCIA)
The LCI results shown in Table 2 are the input to the LCIA phase and are converted into the related environmental impacts based on characterization and classification models [9]. There are different methodologies that can be applied, e.g., CML 2001, ReCiPe 2008, Eco-Indicator 99, IPCC 2013, TRACI and ILCD 2009 [34]. In this study, the CML 2001 method was used. The flows used for the Curuá-Una LCA were selected from the Ecoinvent database, and the categories evaluated are described below.
Acidification Potential (AP). Acidification results from the reaction of sulphur dioxide (SO2), ammonia (NH3) or nitrogen oxides (NOx) with water, causing "acid rain". It is expressed using the reference unit, kg SO2 equivalent [35].
Global Warming Potential (GWP, 100 years). Global warming potential expresses the contribution to climate change, i.e., changes in global temperature, caused by "greenhouse gases" released by human activity. GWP is expressed over time horizons of different lengths, the most common being 100 years (GWP100), and is measured in the reference unit, kg CO2 equivalent [35].
Abiotic Depletion Potential (ADP). This impact category refers to the consumption of non-biological resources such as fossil fuels, minerals, metals, water, etc. The scarcity of a substance is what determines its depletion factor, and it is measured in kg antimony equivalent [35].
Freshwater Aquatic Ecotoxicity Potential (FAETP). Environmental toxicity covers the toxic effects of chemicals on an ecosystem, in this case freshwater, causing biodiversity loss and/or species extinction. Characterisation factors are expressed using the reference unit, kg 1,4-dichlorobenzene equivalent (1,4-DCB) [35].
Human Toxicity Potential (HTP). HTP considers the toxic effects of chemicals on humans. It reflects the potential harm of a unit of chemical released into the environment; such releases are caused, for the most part, by electricity production from fossil sources. HTP is measured in kg 1,4-dichlorobenzene equivalent, as is FAETP.
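At its core, the characterisation step of CML 2001 (like most LCIA methods) is a weighted sum of inventory flows; a minimal Python sketch follows, in which the factor values are placeholders rather than actual CML 2001 factors:

# Characterisation: impact[c] = sum over flows f of amount[f] * CF(f, c).
characterisation_factors = {
    ("CO2 to air", "GWP100"): 1.0,    # kg CO2-eq per kg (placeholder)
    ("CH4 to air", "GWP100"): 25.0,   # placeholder
    ("SO2 to air", "AP"): 1.0,        # kg SO2-eq per kg (placeholder)
    ("NOx to air", "AP"): 0.5,        # placeholder
}

def characterise(inventory: dict[str, float]) -> dict[str, float]:
    impacts: dict[str, float] = {}
    for (flow, category), cf in characterisation_factors.items():
        impacts[category] = impacts.get(category, 0.0) + cf * inventory.get(flow, 0.0)
    return impacts

print(characterise({"CO2 to air": 10.0, "CH4 to air": 0.2, "SO2 to air": 0.05}))
# -> {'GWP100': 15.0, 'AP': 0.05}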
RESULTS AND DISCUSSION
In this section, the main results of the present research are presented and discussed. Graphical representations and tables are used to express these results. Table 3 shows the contribution of the life cycle phases to each impact category; the contributions are also represented as percentages in Figure 2. The complete cycle represents the total inputs of all phases. Note that the quantities are expressed per 1 MWh over 100 years of production. HTP, GWP and FAETP are the most affected categories, with most emissions occurring during the construction phase. This is because fossil fuel was used for electricity production in this phase, which is a major contributing factor in these impact categories. In the case of the Curuá-Una LCA, transport is the most important emitter in the operating phase, since most of the equipment and materials for plant construction were brought from distant locations, with various types of transport (airplanes, trucks, ferries, etc.) being necessary. It is important to note that the low results for the operation phase are due to the lack of data on CH4 and CO2 emissions from the flooded area. These emissions should be measured directly in the reservoir; the methodology used in this work (LCA) does not include such measurements. The negative results of the decommissioning phase are due to the waste stream and recycling processes used in the Ecoinvent database. More investigation should be made in the case of this plant, because it is not known whether there is a possibility of recycling the material after its decommissioning. Figure 2 shows which phase is the most important in the different impact categories. The construction phase contributes heavily to all impact categories. The operation phase contributes emissions to ADP and HTP, and the transport phase contributes emissions to AP and ADP.
The LCA methodology also allows assessing which inputs have the most influence on environmental impacts. Among the inputs that are part of this work (see Table 2), the seven most significant are included in this graphical analysis, in percentage values. The results in Figure 3 show that the largest contributor to these impacts is the steel used both for infrastructure and for equipment such as turbines and generators, as shown in Table 2. The concrete used in the construction phase contributed mainly to GWP, and also to AP and ADP. Transport contributed to the AP, GWP and ADP impacts, and petroleum refinery operation to ADP. There are different methodologies for assessing the environmental impacts of hydropower. A direct comparison between HPPs is difficult and should be used with care, because HPPs are highly site-specific [36] and their environmental impacts are associated with their characteristics. Dones et al. [17] conducted a study based on data from more than 50 Swiss reservoir plants and highlighted that "these results should not be considered as representative for single power plants in any of these regions". Brizmohun et al. [21] conducted a study on a set of plants located near the southeast coast of the African continent. They considered a functional unit of 1 kWh of electricity delivered to the consumer and a lifetime of 150 years. The study conducted on the Guanyinyan hydropower plant in China [23] considered a 30-year HPP lifetime. Castelazo et al. [37] assessed a set of HPPs in Mexico with a total capacity of 10,566 MW. Suwanit and Gheewala [38] analyzed a set of five small HPPs (run-of-river type), with capacities between 200 and 6,000 kW. The results of these studies are shown in Table 4. The analysis suggests the need for specific studies for each plant, as it is important to consider the influence of geographical location, climate, age of the reservoir, the properties of the water, etc.
Sensitivity analysis
Variations in the region's rainfall cause fluctuations in the reservoir, reducing or increasing the water flow and resulting in unstable operation of the plant. In 2015, the dry season (May to September) in northern Brazil was more intense than usual, causing a reduction in the level of the Curuá-Una reservoir. To analyse the consequences of this condition for the environmental impacts, it was considered that the reservoir level was 4 m below its normal level, reducing the plant's capacity to 70% (data provided by the HPP staff). The amount of energy produced at 70% capacity over 100 years is approximately 22,110,240 MWh. With this production factor, a simulation was performed for comparison with the normal level of production. Table 5 shows the results of the simulation of this scenario, considering the impact categories from Table 3. The environmental performance in this condition was strongly affected: all impact categories increased when the production of the plant was reduced.
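Because Table 3 expresses impacts per MWh, the low-reservoir scenario largely rescales them by the ratio of normal to reduced production; a sketch (the example impact value is a placeholder, not a number from Table 3):

NORMAL_MWH = 29_976_720     # normal 100-year production
REDUCED_MWH = 22_110_240    # low-reservoir scenario from the text

scale = NORMAL_MWH / REDUCED_MWH
print(f"per-MWh impacts grow by a factor of ~{scale:.2f}")   # -> ~1.36

gwp_normal = 5.0                    # kg CO2-eq/MWh, illustrative only
gwp_reduced = gwp_normal * scale    # -> ~6.78 kg CO2-eq/MWh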
CONCLUSIONS
The aim of this study was to perform an LCA of the Curuá-Una dam, advancing research on the environmental impacts of hydroelectric dams in tropical regions. Through the efforts made in the inventory stage of this research, data were obtained from direct sources, covering many real characteristics of the Curuá-Una plant and contributing to reliable results. However, some data were obtained and adapted from other research [6,24] and, owing to the lack of flows specific to this study, some inputs to the LCA used the Ecoinvent database, whose flows were built based on regions different from Curuá-Una. It is important to carry out more studies on energy-producing plants in the tropics using LCA methodologies, so that these data are consolidated into a better database for these regions.
In the LCA, the construction phase showed the highest contribution to environmental impacts. At this stage, the inputs with the greatest influence were the steel used in the turbines and the concrete in structures such as the spillway, penstock, powerhouse, etc. The operation phase caused no major impact when analyzed by this method. The transport of all equipment over long distances led to an increase in emissions contributing to acidification potential, global warming and the depletion of abiotic resources. Hydropower plants have a long life, so there are no data from the deactivation of a plant; the results for this phase must therefore be evaluated with more supporting research.
Many studies on hydroelectric emissions have been conducted using other methodologies, highlighting the plant operation phase and emissions from the flooded area while overlooking the construction phase, as in [5,36,39]. LCA accounts for inputs in all phases, enabling a more complete study of the environmental impacts. The variation in the literature and the many open questions about the origin of the environmental impacts and their development over time lead to some uncertainties. However, as more studies are conducted, more realistic data and information will become available.
This project is expected to provide an LCA of the Curuá-Una HPP (Santarém/Pará/Brazil) and to disseminate the results so that they can be compared with other studies using this methodology. This research is part of a larger project that aims to perform LCAs of energy production units based on alternative sources and to obtain solutions to support decision-making in the operation and planning of generating units that can meet the demand of the region [26]. In addition, as a result of this study, an innovative methodological framework will be available for assessing environmental impacts in decision-making within the perspective of sustainable development.
Figure 3. Percentage of inputs for impact categories
Table 2. LCA inputs for the production of 1 MWh of electricity, considering 92.89% of total Curuá-Una HPP capacity (km²·y corresponds to square kilometres times year, according to the openLCA software)
Table 3. Contribution of Curuá-Una life cycle phases in each impact category
Table 4. Main features and results of GWP in CO2 eq/MWh of several HPPs
Table 5. Scenario of low plant production and its influence on the impact categories
"year": 2016,
"sha1": "09b5436247cbf9f26aedd957db3b1d8bd5cc9565",
"oa_license": "CCBY",
"oa_url": "http://www.sdewes.org/jsdewes/dp811f67c1d37070633255f1b58d9d621bcd06e6e9",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "09b5436247cbf9f26aedd957db3b1d8bd5cc9565",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
} |
The role of short chain fatty acids in appetite regulation and energy homeostasis
Over the last 20 years there has been an increasing interest in the influence of the gastrointestinal tract on appetite regulation. Much of the focus has been on the neuronal and hormonal relationship between the gastrointestinal tract and the brain. There is now mounting evidence that the colonic microbiota and their metabolic activity have a significant role in energy homeostasis. The supply of substrate to the colonic microbiota has a major impact on the microbial population and the metabolites they produce, particularly short chain fatty acids (SCFAs). SCFAs are produced when non-digestible carbohydrates, namely dietary fibres and resistant starch, undergo fermentation by the colonic microbiota. Both the consumption of fermentable carbohydrates and the administration of SCFAs have been reported to result in a wide range of health benefits including improvements in body composition, glucose homeostasis, blood lipid profiles and reduced body weight and colon cancer risk. However, published studies tend to report the effects that fermentable carbohydrates and SCFAs have on specific tissues and metabolic processes, and fail to explain how these local effects translate into systemic effects and the mitigation of disease risk. Moreover, studies tend to investigate SCFAs collectively and neglect to report the effects associated with individual SCFAs. Here, we bring together the recent evidence and suggest an overarching model for the effects of SCFAs on one of their beneficial aspects: appetite regulation and energy homeostasis.
INTRODUCTION
Obesity has become a global epidemic, with incidence rates of over 20% in the majority of western countries. 1 It has been proposed that the current obesity epidemic may have been caused by a mismatch between the physiological mechanisms for maintaining energy balance, which evolved in response to ancestral diets, and the composition of the current western diet. 2 Over the past several decades, the western diet has changed significantly, with the popularity of 'fast' and 'convenience' foods rapidly increasing. 3 Such foods are energy dense, have a low dietary fibre content and produce lower satiety and satiation signals than low-energy dense foods. 4 This diet is markedly different from the historical low-energy dense, nutrient-poor diet that the human gut was adapted to over several millennia. Evidence suggests that for most of history the human lineage consumed more indigestible plant material, such as grasses, sedges and tubers, than is present in a typical western-style diet (>100 g per day dietary fibre compared with <15 g per day in the average modern-day diet); the ancestral diet is therefore likely to have contained a larger non-digestible component. 5,6 Some carbohydrates resist digestion in the upper gastrointestinal tract and reach the large bowel mainly intact, where they are subject to fermentation by the resident bacteria. The human gut microbiota is composed of 10¹³-10¹⁴ microorganisms, accounting for >1 kg of body weight. 7,8 The gut microbiota continues to emerge as a major determinant of obesity and its associated health complications. 9,10 However, as this topic is beyond the scope of the present review, readers are referred to the excellent review by Holmes et al., 11 which focuses on the composition and functional activities of the gut microbiota.
The principal products of the bacterial fermentation of non-digestible carbohydrates in the gut are short chain fatty acids (SCFAs), heat and gases. [12][13][14] The process of bacterial fermentation serves as an energy harvest system for undigested material, rescuing energy that cannot be absorbed in the small bowel, and is used as a major energy source by some species. For example, lowland gorillas derive ~57% of their metabolisable energy from SCFAs, compared with 1.2-10% in humans from the average western diet. [15][16][17] The main SCFAs produced by bacterial fermentation are acetate, propionate and butyrate, and are present in the approximate molar ratio of 60:20:20. 18 It has been demonstrated that the consumption of soluble fermentable carbohydrates (FCs) increases the caecal content of SCFAs in animal models. 19,20 The rate, ratio and extent of SCFA production, however, reflect a complex interplay between FC type, microbiome diversity and activity, and gut transit time. [21][22][23][24][25] Supplementing the high-fat diet of rodents with soluble FCs has been shown to protect against body weight and fat mass gain. [26][27][28] Furthermore, research suggests that adult rodents who consume a weaning diet high in prebiotic fibre are protected against body weight gain when challenged with a western-style diet high in fat and sucrose. 29 However, research carried out by Track et al. 30 suggests that the beneficial effects of FC consumption are specific to adolescent rodents. In addition to improvements in body composition, a number of research studies in humans have reported associations between the consumption of FCs and improvements in glucose homeostasis, insulin sensitivity and blood lipid profiles; however, these beneficial effects were not present in young healthy adults. [31][32][33][34][35][36] Although it is known that greater FC consumption increases colonic SCFA production, resulting in a wide range of health benefits, further research is needed to fully elucidate the molecular mechanisms by which SCFAs mediate these effects. Published research often focuses on single mechanisms to explain the positive physiological effects associated with gut-derived SCFAs. However, we hypothesise that the beneficial effects reported are not the result of the activity of a single metabolic process on a specific tissue, but are more likely to be the result of the stimulation of a number of mechanisms activated in parallel.
Here we review recent findings in this field and propose an interconnected picture of how SCFAs may affect appetite regulation and energy homeostasis.
MATERIALS AND METHODS
A review of the literature was conducted in 2014 using the PubMed database, with the following search terms: 'short chain fatty acids' AND dietary fiber(MeSH terms), 'short chain fatty acids' AND obesity(MeSH terms), 'short chain fatty acids' AND appetite(MeSH terms), 'short chain fatty acids' AND energy intake(MeSH terms), 'short chain fatty acids' AND energy expenditure(MeSH terms), 'short chain fatty acids' AND microbiota(MeSH terms), 'short chain fatty acids' AND ('free fatty acid receptor 3' OR 'GPR41' OR 'GPCR41') and 'short chain fatty acids' AND ('free fatty acid receptor 2' OR 'GPR43' OR 'GPCR43'). Reviews and research studies in which immune function or cancer progression/prevention were the primary focus were excluded, as these topics were deemed beyond the scope of this review. Papers identified from the search were analysed by two of the authors, and papers that were not relevant were rejected, as shown in Figure 1. In total, 104 papers were identified as containing relevant primary evidence.
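For reproducibility, the search strategy amounts to a fixed list of boolean query strings; the Python sketch below shows how the hit counts could be retrieved through PubMed's public E-utilities esearch endpoint (the endpoint, parameters and response fields follow the public esearch API; the queries are taken from the text):

import json, urllib.parse, urllib.request

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
QUERIES = [
    '"short chain fatty acids" AND dietary fiber[MeSH Terms]',
    '"short chain fatty acids" AND obesity[MeSH Terms]',
    '"short chain fatty acids" AND appetite[MeSH Terms]',
    '"short chain fatty acids" AND energy intake[MeSH Terms]',
    '"short chain fatty acids" AND energy expenditure[MeSH Terms]',
    '"short chain fatty acids" AND microbiota[MeSH Terms]',
    '"short chain fatty acids" AND ("free fatty acid receptor 3" OR "GPR41" OR "GPCR41")',
    '"short chain fatty acids" AND ("free fatty acid receptor 2" OR "GPR43" OR "GPCR43")',
]

def pubmed_count(query: str) -> int:
    # esearch returns the total hit count in esearchresult.count (JSON mode).
    params = urllib.parse.urlencode({"db": "pubmed", "term": query, "retmode": "json"})
    with urllib.request.urlopen(f"{BASE}?{params}") as resp:
        return int(json.load(resp)["esearchresult"]["count"])

for q in QUERIES:
    print(pubmed_count(q), q)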
SCFAs and free fatty acid receptor signalling
In 2003, three independent research groups discovered that the orphan G-protein-coupled receptors, GPR43 and GPR41, were activated by SCFAs. [37][38][39] These receptors have since been renamed free fatty acid receptor 2 and 3 (FFA2 and FFA3; formerly GPR43 and GPR41, respectively). Acetate and propionate are the most potent activators of FFA2, whereas FFA3 is activated in the affinity order propionate > butyrate ⩾ acetate. [37][38][39] However, results of studies using animal models must be interpreted with caution owing to interspecies variability. Hudson et al. 40 reported that in mice FFA2 and FFA3 have equal affinity for acetate and butyrate, whereas FFA3 has higher affinity for propionate than FFA2.
FFA2 and FFA3 are both widely expressed throughout the small intestine and colon. 37,41,42 FFA2 and FFA3 mRNA have also been discovered in areas other than the gut, which has led to the assumption that SCFAs are likely to have beneficial effects on tissues and organs beyond the gut. FFA2 mRNA has been detected in immune cells, skeletal muscle, heart, spleen and adipose tissue, 37,[42][43][44] whereas the expression of FFA3 appears to be more widespread and has been detected in adipose tissue, peripheral blood mononuclear cells, pancreas, spleen, bone marrow and lymph nodes. 37,38,42 However, reports investigating the expression of both SCFA receptors in adipose tissue have proven to be inconsistent. 37,43,[45][46][47]
SCFAs and energy intake in animal models
The addition of FCs to the diets of animals has been shown to reduce energy intake. 26,27,48,49 Several studies have investigated the effect of FCs on feeding motivation; however, the results have been equivocal. 50,51 Results from studies published by our research group have shown that FCs increase manganese-enhanced magnetic resonance imaging signals in the appetite centres of the hypothalamus, further suggesting a satiating effect. 52,53 In addition, feeding FCs, as well as SCFAs themselves, has been associated with an increase in the circulating concentrations of the anorectic gut hormones, glucagon-like peptide-1 (GLP-1) and peptide YY (PYY). 26,27,29,48,[54][55][56][57][58][59] GLP-1 and PYY are produced by L cells, which are present throughout the gastrointestinal tract, with the highest concentrations observed in the distal ileum and colon, and are released in response to food intake. 60,61 Peripheral infusions of these gut hormones have been shown to cause a reduction in energy intake and have thus become the target of many anti-obesity therapies. [62][63][64][65] The discovery of the co-expression of FFA2 and FFA3 in GLP-1- and PYY-releasing enteroendocrine L cells has prompted suggestions that the detection of SCFAs in the colon may be responsible for triggering the release of these gut hormones. 54,66,67 This theory is supported by reports that FFA3 knock-out (KO) mice demonstrate impaired PYY expression 68 and that FFA2 KO mice exhibit reduced GLP-1 concentrations in vivo and in response to SCFAs in vitro. 54 Furthermore, FC supplementation has been shown to increase the densities of FFA2-positive enteroendocrine cells in parallel with GLP-1-containing cells. 69 However, it is currently unclear whether this is because of a SCFA-stimulated increase in cell proliferation or an increase in the expression of SCFA receptors in the gut epithelium.
It has recently been reported that propionate stimulates the secretion of both GLP-1 and PYY from wild type (WT) primary murine colonic crypt cultures, an effect that was significantly reduced in FFA2 KO mice cultures. 70 In addition, intra-colonic infusions of propionate reportedly increased both GLP-1 and PYY levels in jugular vein plasma in vivo, an effect that was not present in FFA2 KO mice. These data further support the mounting evidence that FCs stimulate the secretion of GLP-1 and PYY. Additional evidence suggests a role for propionate in the modulated expression of FFA2, PPAR-γ (peroxisome proliferator-activated receptor gamma), Fiaf and histone deacetylases. 71 Our research group recently investigated the role of the most abundant SCFA, acetate, in central appetite regulation in mice. 72 In line with recent observations, we noted that dietary supplementation with the FC inulin causes a significant reduction in energy intake and weight gain. 26,73 In addition, we investigated the effect of intravenous and colonic infusions of 11C-acetate in vivo using positron-emission tomography-computed tomography scanning and found that although the majority of the 11C-acetate tracer was absorbed by the heart and liver, a small amount (~3%) crossed the blood-brain barrier and was taken up by the brain. We subsequently confirmed that acetate induces hypothalamic neuronal activation in the arcuate nucleus following intraperitoneal administration, suggesting that acetate itself is an anorectic signal.
That FCs reportedly have beneficial effects on energy homeostasis but also increase energy harvest appears counterintuitive. It has been suggested that the metabolisable energy gained from SCFAs via the colonic fermentation of non-digestible carbohydrates may outweigh the beneficial effects associated with their consumption. Indeed, Isken et al. 74 demonstrated that long-term consumption (45 weeks) of soluble guar fibre significantly increased both body weight and markers of insulin resistance in mice, when compared with controls, despite a comparable dietary energy intake in both groups. Their data provide evidence that increased SCFA production in rodents, which contributes significantly to digested energy, may outweigh the short-term beneficial effects of soluble fibre consumption. These results may be explained by research carried out by Track et al., 30 who reported that feeding adolescent rats guar gum results in a reduction in food intake and weight gain and improved glucose tolerance. However, these beneficial effects were only observed in adolescent rats, when compared with controls, and were absent in adults.
FCs and energy intake in humans

As discussed earlier, evidence suggests that hominins' diets consisted mainly of vegetable matter and would have had a large fermentable component. 5 It is highly likely that this large FC consumption would therefore have stimulated gut hormone release, slowing gastric emptying and small intestinal transit. It is possible that this was advantageous as it would have increased the energy harvest from nutritionally poor food during periods when it was a major struggle for hominins to meet their energy demands. However, it is likely that this physiological adaptation is underutilised by humans in the developed world owing to wider food availability and the lower FC content of the average westerner's diet. 15 Behall et al. 75 suggest that the fermentation process significantly contributes to digestible energy when amounts of ≥20 g non-digestible carbohydrates are consumed. It has been demonstrated that overweight and obese individuals have higher faecal SCFA concentrations than their lean counterparts. 24,76 These results suggest that such individuals produce more colonic SCFA, indicating an increased microbial energy harvest in obesity. 76 However, in vitro fermentations using faecal samples from obese and lean individuals displayed no difference in total SCFA production. 77 Although there is clear evidence from a number of small animal studies that the addition of FCs to the feed of high-fat-fed animals results in improvements in body weight and composition, [26][27][28] translation to humans has proven inconsistent. This may be owing to the relatively small amount of FC used in human experimental diets compared with animal studies (1.5% and ≥5% of total energy intake, respectively). Large amounts of FCs are generally not well tolerated as they are associated with undesirable gastrointestinal effects, resulting in the use of lower doses in human research trials.
However, a number of notable acute and long-term supplementation studies have been successfully carried out in humans. Archer et al. 78 reported that replacing fat with an acute dose of inulin (24 g) at breakfast results in lower energy and fat intake throughout the day, although gut hormone concentrations were not reported. Nilsson et al. 79 demonstrated that consuming an evening meal consisting of FCs significantly increases circulating PYY concentrations and decreases ghrelin concentrations at breakfast. Our research group recently carried out a dose-finding study and demonstrated that increased circulating PYY concentrations and appetite suppression occur only with an acute dose of ≥35 g per day of inulin, suggesting the need for large doses in order to induce appetite suppression. 80 Evidence from long-term studies suggests that supplementation (2-12 weeks) with oligofructose (16-30 g per day) significantly increases feelings of satiety and reduces feelings of hunger, reduces energy intake, increases the total area under the curve for PYY and reduces the total area under the curve for ghrelin, an orexigenic hormone. 81-83 A 1-year study investigating supplementation with high-wheat fibre also resulted in both an increase in SCFA production and GLP-1 secretion. 84 The authors note that these changes took 9-12 months to develop, suggesting that it may take up to a year for the gut microbiota to adapt to the extra fermentable content of the diet. However, data from a recent study showed that short-term dietary change alters both the microbial community structure and gene expression of the human gut microbiome, rapidly and reproducibly. 85 Thus, the optimum time period for adaptation to a high-FC diet is, at present, unclear.
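Total area under the curve (AUC) is the summary measure these supplementation studies report for PYY and ghrelin. As a minimal sketch of how it is computed, the Python snippet below applies the trapezoidal rule to a postprandial time course; the sampling times and PYY concentrations are invented for illustration, not taken from any of the cited trials.

import numpy as np

# Hypothetical postprandial PYY time course.
t = np.array([0, 15, 30, 60, 90, 120, 180])      # minutes after the test meal
pyy = np.array([12, 18, 25, 30, 27, 22, 16])     # pmol/L (invented values)

total_auc = np.trapz(pyy, t)                     # total AUC, pmol/L x min
incremental_auc = np.trapz(pyy - pyy[0], t)      # baseline-corrected AUC
print(total_auc, incremental_auc)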
Our research group recently investigated the role of propionate in appetite regulation. We demonstrated that propionate significantly stimulates the release of PYY and GLP-1 from human colonic cells. 86 Next, we produced a novel system, inulin-propionate ester, whereby propionate is conjugated by an ester linkage to inulin, a carrier molecule. The ester linkage is broken down by bacterial fermentation, which results in the delivery of propionate directly to the colon. When administered acutely, we found that inulin-propionate ester significantly increased postprandial PYY and GLP-1, and reduced energy intake by ~14% at a buffet meal. Furthermore, after a 24-week supplementation period we demonstrated that inulin-propionate ester significantly reduced weight gain in overweight adults.
Beneficial effects associated with SCFA production independent of food intake

The importance of SCFAs to energy metabolism has been further emphasised in recent studies in which germ-free mice have received gut microbiota transplants. These investigations highlight that the transfer of gut microbiota compositions that produce different levels of SCFAs in the colon influences body weight gain and adiposity. 87,88 For example, it has been shown that the transplantation of the faecal microbiota of twins discordant for obesity to germ-free mice results in a similar phenotype in the recipient mice. 87 It was noted that the lean mice demonstrated significantly increased caecal propionate and butyrate contents when compared with their obese counterparts. These data suggest that the increased weight gain observed in the obese mice was not caused by an increased energy harvest by the gut microbiota and that, instead, SCFAs inhibit the fat accumulation associated with obesity. Similarly, the faecal transplantation of mice that have undergone Roux-en-Y gastric bypass (RYGB) surgery to germ-free mice has been shown to result in weight loss and reduced fat mass in the RYGB-recipient mice. 88 In addition, the RYGB-recipient mice exhibited a relatively greater production of propionate and lower production of acetate when compared with mice that received the faecal microbiota of those that had undergone sham surgery. The beneficial change in body composition observed in RYGB-recipient mice may be owing to the SCFA production profile of these mice. The authors suggest that the reduced levels of acetate would result in decreased lipogenesis and that the increased levels of propionate would assist in the inhibition of acetate conversion into lipid in the liver and adipose tissue. 43,89,90 The metabolic effects noted in these studies were not associated with any significant change in energy intake, 87,88 suggesting that the positive effects on energy balance observed may be a result of a change in energy utilisation and expenditure.
A recent study carried out by Remely et al. 91 demonstrated a lower methylation status in the promoter region of the FFA3 gene in the blood of both obese and type 2 diabetic individuals, when compared with lean individuals. The researchers hypothesise that this is owing to compositional differences in the gut microbiota and therefore different SCFA profiles.

SCFAs, energy expenditure and substrate metabolism

Although the consumption of FCs and SCFAs has been associated with a reduction in energy intake, there is also evidence that SCFAs may increase energy expenditure. SCFAs have been shown to increase the rates of oxygen consumption, enhance both adaptive thermogenesis and fat oxidation, and increase mitochondrial function in rodents. 73,92 Marsan and McBurney 93 also demonstrated that the oxidation of all three principal SCFAs was significantly higher in colonocytes isolated from rodents that had consumed a high-fibre diet for 14 days. Gao et al. 73 also investigated the expression of two thermogenesis-related genes, PGC-1α and UCP-1, and discovered that the mRNA and protein expression of both genes was upregulated in mice whose diets were supplemented with butyrate. Furthermore, consuming a diet high in whole-grain foods has been shown to decrease urinary excretion of markers of protein catabolism, which was associated with an increase in SCFA production. 94 In addition, it has been demonstrated that SCFAs can be used as an energy source for protein gain when pigs are fed below their energy requirements. 95 Evidence from several research studies indicates that the SCFA receptors FFA2 and FFA3 may have a critical role in energy homeostasis. It has been demonstrated that FFA3 KO mice exhibit a reduced energy expenditure, compared with WT mice, despite having matching physical activity levels. 92,96 In addition, Kimura et al. 92 reported that treatment with propionate increases the rate of oxygen consumption in WT mice, a result that was not present in FFA3 KO mice. SCFAs were subsequently shown to stimulate sympathetic nervous system activity directly through FFA3 at the sympathetic ganglion, thereby controlling energy expenditure. 92 In addition, recent evidence suggests that propionate binds FFA3 in the periportal afferent system to induce intestinal gluconeogenesis (IGN) via a gut-brain neural circuit. 97 Similarly, it has been reported that FFA2 KO mice exhibit a reduction in energy expenditure when fed a high-fat diet, compared with WT mice, and are obese despite a similar physical activity level. 98 In contrast, mice with adipose-specific overexpression of FFA2 exhibited an increase in energy expenditure. Interestingly, the FFA2 KO mice had a higher RER than WT mice, which suggests a reduced capacity to oxidise fat, whereas mice with adipose-specific overexpression of FFA2 had a lower RER than WT mice. The authors note that their results indicate that FFA2 activation increases energy expenditure and the capacity to oxidise fats via the suppression of fat accumulation and adipose tissue insulin signalling.
Research suggests that SCFAs and their receptors, FFA2 and FFA3, may have a critical role in maintaining energy homeostasis. However, there is currently no known study that has specifically investigated the effect of FC consumption on energy expenditure in humans; this would make for an interesting line of investigation.
SCFAs and hepatic metabolism

There is evidence that feeding rodents a diet supplemented with FCs or SCFAs results in reduced intrahepatocellular lipid levels, liver triglyceride and cholesterol content, hepatic cholesterol synthesis and hepatic glucose production. 52,53,97,[99][100][101] SCFAs are absorbed from the intestinal lumen into the portal vein and subsequently enter the hepatic blood flow. As butyrate is the preferred fuel for colonocytes, the majority of butyrate produced in the gut is rapidly utilised at the epithelium. 18,102 In contrast, the majority of propionate and acetate produced in the gut is absorbed and drains into the portal vein. 103 Cummings et al. 18 investigated SCFA distribution in sudden death victims and demonstrated that the majority of butyrate and propionate present in the portal vein is extracted by the liver and subsequently metabolised (86 and 94%, respectively), with a small amount of remaining propionate and butyrate entering venous blood. Bloemen et al. 102 reported that liver uptake of acetate was not significant. These results may suggest that any hepatic changes associated with FC consumption or SCFA administration are largely because of the metabolism of propionate by the liver. It is known that propionate is a gluconeogenic substrate and inhibits the utilisation of acetate for lipid and cholesterol synthesis. 89,104 Therefore, potential upregulation of this pathway after FC consumption is likely to be responsible for any observed changes in hepatic structure or function.
A number of studies have reported that FC consumption in humans beneficially affects serum cholesterol and triglyceride concentrations, and reduces hepatic lipogenesis. 31,35,[105][106][107] In addition, our research group recently demonstrated a significant reduction in the intrahepatocellular lipid levels of overweight adults meeting the criteria for non-alcoholic fatty liver disease after a 24-week increase in colonic propionate concentrations. 86 Furthermore, SCFAs may have an indirect benefit on hepatic metabolism through their effect on gut hormone secretion. In particular, GLP-1 has been shown to modulate physiological mechanisms responsible for free fatty acid accumulation in the liver and to reduce hepatic steatosis. 108

SCFAs, glucose uptake and gluconeogenesis

The consumption of FCs has been associated with an improvement in glucose homeostasis, although the evidence in humans is inconsistent. Propionate is gluconeogenic and has been shown to produce a dose-dependent increase in blood glucose concentrations in humans. 89,104,109 Again, the notion that FC consumption may have beneficial effects on glucose homeostasis appears paradoxical. However, it has been demonstrated that SCFAs have no significant effect on glucose metabolism in healthy men. 110 In addition, it has been shown that propionate supplementation induces a reduction in fasting blood glucose in rats. 111 The intestine has recently been established as a gluconeogenic organ and it has been reported that IGN promotes metabolic benefits and regulates energy and glucose homeostasis. 112,113 Delaere et al. 114 later demonstrated that a portal vein glucose sensor is activated by IGN and transmits signals to the brain via the peripheral nervous system, which initiates these beneficial effects. It has been reported that both propionate and butyrate stimulate IGN. 97 Although butyrate was found to directly activate the expression of IGN genes in enterocytes, propionate itself was shown to act as a substrate for IGN. In addition, rats fed a SCFA- or FC-supplemented diet displayed significantly lower weight gain, reduced adiposity, improved glucose control and reduced hepatic glucose production when compared with the control group. It was noted that this improvement in glucose tolerance involved both gut-brain communication and IGN, and that none of the reported metabolic benefits were present in mice lacking the catalytic subunit of a key enzyme involved in IGN, intestinal glucose-6-phosphatase, despite a similar shift in gut microbiota composition. These data suggest that IGN has a major role in mediating the beneficial effects associated with the consumption of FCs.
SCFAs and adipocytes
It has been observed that FC consumption protects against fat mass development. [26][27][28] All three principal SCFAs have also been shown to protect against diet-induced obesity. 55 A number of studies have reported that treatment with the SCFAs propionate and acetate increases the expression of leptin, a potent anorectic hormone, in adipocytes in vitro, whereas butyrate has been shown to have no effect. 43,45,46 In addition, propionate has been shown to increase plasma leptin concentrations in mice in vivo 45 and to stimulate leptin mRNA expression in human adipose tissue. 115 Xiong et al. 45 reported that this SCFA-stimulated increase in leptin expression in adipocytes is mediated by FFA3. However, not all reports regarding the body composition of FFA3 KO mice have been consistent. 55,68,96 Furthermore, a number of contradictory reports suggesting that the expression of FFA3 cannot be detected in adipose tissue have been published, indicating that the SCFA-stimulated increase in leptin expression is not mediated by FFA3. 43,46 Zaibi et al. 46 suggest that the SCFA-stimulated increase in leptin expression is mediated by FFA2 and that the downregulation of FFA2 in FFA3 KO mice is responsible for the reduction in SCFA-stimulated leptin secretion observed in FFA3 KO mice. However, Frost et al. 47 failed to demonstrate a significant effect of SCFAs on leptin secretion in adipocytes. Kimura et al. 98 recently demonstrated that SCFA-mediated activation of FFA2 suppresses insulin signalling within adipocytes, which results in the inhibition of fat accumulation within adipose tissue and the promotion of the metabolism of unincorporated glucose and lipids in other tissues. In addition, it was reported that FFA2 KO mice were obese on a normal diet, an effect further enhanced by a high-fat diet; that adipose-specific FFA2 transgenic mice had a significantly lower body weight than WT mice; and that mice overexpressing FFA2 in their adipose tissue remained lean even when consuming a high-fat diet. The researchers suggest that FFA2 may act as a sensor for excessive dietary energy, controlling energy utilisation and maintaining metabolic homeostasis. However, these observations are not supported by Bjursell et al., 116 who reported that FFA2-deficient mice consuming a high-fat diet exhibit a reduction in body-fat mass and an increase in lean body mass. In addition, it has been shown that all three principal SCFAs enhance the degree of adipocyte differentiation 43,117 and that propionate and acetate inhibit lipolysis. 43 Hong et al. 43 also demonstrated that propionate increases the expression of FFA2 during adipocyte differentiation and causes an upregulation of PPAR-γ2. The authors suggest that these results indicate the involvement of FFA2 in the lipid accumulation pathway. This is further supported by Ge et al., 118 who reported that acetate and propionate inhibit adipose tissue lipolysis in a mouse model via FFA2, resulting in a reduction in plasma free fatty acid concentrations. Hosseini et al. 119 demonstrated that propionate increased the gene expression of adiponectin receptors 1 and 2.
It has also been suggested that FFA3 may have a role in insulin-stimulated glucose uptake. Han et al. 120 reported that propionate and valerate enhance insulin-stimulated glucose uptake in adipocytes, an effect that appeared to be mediated via FFA3.
Although data from animal studies suggest that SCFAs and the activity of their receptors, FFA2 and FFA3, may have an inhibitory effect against weight gain, there is currently a lack of evidence to support this hypothesis in humans. As acetate and propionate are the most potent activators of FFA2, it seems likely that these SCFAs are responsible for any adipocyte-related changes observed after FC consumption or SCFA administration. However, as acetate circulates at a higher concentration than both butyrate and propionate, it seems the most likely SCFA to directly influence adipose tissue. 18

CONCLUSION

A significant body of evidence suggests that SCFAs have a beneficial role in appetite and energy homeostasis. However, as the majority of research comes from animal models, caution is necessary when translating this evidence to humans. Thus, there is an urgent need for human data to support the mechanistic data being reported. One major issue is that large amounts of FCs are generally not well tolerated by humans, which results in a relatively small amount of FCs being used in human experimental diets when compared with animal studies. Therefore, effective strategies that replicate the changes in SCFA profiles seen in animal studies, either via dietary or pharmacological means, may have the potential to translate the beneficial effects observed in animal studies to man.
In conclusion, it is evident that the administration of FCs and their breakdown products, SCFAs, has positive effects on host physiology. However, the majority of recent publications have investigated the effect of SCFAs on one particular tissue or metabolic process and have failed to look at the body system as a whole. Here, we propose that SCFAs act through a number of metabolic processes, activated in parallel, that affect energy homeostasis and appetite regulation (summarised in Figure 2). Furthermore, the site-specific uptake of SCFA across the gut-liver-peripheral tissue axis suggests selectivity in the effect of individual SCFAs. It is only by bringing these effects together that the true impact of SCFAs on host energy homeostasis can be seen. | 2017-11-08T18:57:31.471Z | 2015-05-14T00:00:00.000 | {
"year": 2015,
"sha1": "55842e0217642afa8d543eab0df7e899b3d846c6",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/ijo201584.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "55842e0217642afa8d543eab0df7e899b3d846c6",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine",
"Chemistry"
]
} |
71533694 | pes2o/s2orc | v3-fos-license | The factor of immunization in the rat. The effect of allogeneic immunization on graft-versus-host activity.
Using a popliteal lymph node weight assay the graft-versus-host activity of lymphocytes from donors immunized with allogeneic tissue has been assayed by comparison with that of lymphocytes from nonimmune donors. When the donors were immunized against weak histocompatibility antigens (non-AgB) the specific GVH activity of their lymphocytes was increased. This increase was greater if spleen cells rather than thoracic duct lymphocytes were the source of the donor cells used for assay. The increase in GVH activity was also greater if the standard immunization procedure of two successive skin allografts was followed by three boosting injections of allogeneic lymphoid cells. When donors were immunized against strong histocompatibility antigens the specific GVH activity of the donors' lymphocytes was slightly increased, was unchanged, or was actually decreased depending on the experimental situation. In donors rendered incapable of a humoral alloantibody response by whole body X-irradiation, immunization across a strong barrier was followed by little or no increase in the specific GVH activity of TDL. In the rat, as in other species, the increase in GVH activity after immunization is inversely proportional to the strength of the antigenic barrier involved.
Graft-versus-host (GVH) 1 assay is one of many methods of measuring the immunological activity of lymphoid cells against histocompatibility antigens. It is the most suitable in vivo method when a quantitative comparison of cell populations is required. After immunization with allogeneic antigen the GVH activity of an animal's lymphocytes is usually increased. However, in strain combinations of mice involving a difference at the H-2 locus, the augmentation of GVH activity after immunization (the "factor of immunization") is very slight; the augmentation increases with the weakness of the antigenic barrier (1,2).
In the rat, GVH activity can be satisfactorily measured by a lymph node weight assay (3,4). Using this assay it has been found that to produce an arbitrarily chosen lymph node enlargement of 10.0 mg requires 50-100 times as many cells in weak as compared with strong strain combinations. In the present experiments the effect of specific allogeneic immunization on the GVH activity of lymphoid cells has been investigated using several weak and strong combinations of the inbred rat. The paradoxical inverse relationship between antigenic strength and the factor of immunization has been confirmed.
In further experiments the possible effects of other variables on the factor of immunization have been tested, namely: (a) the schedule of immunization used; (b) the interval elapsing between immunization and assay; and (c) the population of cells used for assay. Lastly, an attempt has been made to test a suggestion (5) that the specific inhibitory effect of humoral alloantibody might be responsible for the low factor of immunization in strong strain combinations.
Methods
Rats.--The GVH activities of lymphocytes from immune and nonimmune donors matched for age and sex were compared. The following strain combinations were used (where X → Y denotes the assay of lymphocytes from inbred strain X in (X × Y) F1 hybrids): AS → AS2, AS2 → AS, AS → BN, and F → BN (all AgB different); AS → F and F → AS (AgB identical). In further experiments, cells from (AS × BN) F2 hybrids selected to be AgB heterozygotes after serological screening were assayed in (AS × BN) F1 hybrids. In this case the GVH reaction was against half the minor loci, on average, at which the AS and BN strains differ.
In the present experiments at least four footpads were injected with each of three graded doses of lymphocytes. Thus each estimate of GVH activity is based on the reaction of at least 12 lymph nodes. In the AgB-identical strain combinations, doubling doses were used, usually starting at 20 × 10^6; in the AgB-different combinations, tripling doses were used, usually starting at 0.33 × 10^6.
Preparation of thoracic duct lymphocytes (TDL) and spleen cells was as previously described (4). Lymph node cells were prepared in the same way as spleen cells.
X-Irradiation.--In some experiments donors were subjected to 300 R of whole body X-irradiation 24 hr before immunization. Rats were irradiated singly in Perspex cages from a 225 kV source delivering 17 R/min.

Alloantibody Titration and AgB Classification.--Serum was obtained from the irradiated donors before immunization, 6 days after immunization, and at the time of assay of their lymphocytes. Nonirradiated, immunized control rats were bled at the same time. Hemagglutinating alloantibody was titrated using 3-fold serum dilutions in phosphate-buffered saline (PBS). To 2 drops of serum dilution was added 1 drop of a 2% erythrocyte suspension in 9 parts of 6% dextran (Glaxo Laboratories, Ltd., Greenford, England) and 1 part of normal F1 rat serum. Macroscopic reading was after a 2 hr incubation at 37°C. The screening of (AS × BN) F2 rats for the identification of AgB heterozygotes was performed using a similar technique. Erythrocytes from the unknown rats were titrated against an AS anti-BN serum and a BN anti-AS serum. With the aid of appropriate control erythrocytes (BN, AS, and F1), double-reacting erythrocytes could be easily identified.
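The 3-fold dilution series used here is what produces the endpoint titers quoted in the Results (1/27, 1/81). A minimal Python sketch of the bookkeeping, assuming the first tube holds a 1/3 dilution (a convention, not stated in the text):

# Hypothetical helper: endpoint titre from the last tube showing agglutination.
def endpoint_titre(last_positive_tube: int, fold: int = 3) -> str:
    # Tube 1 = 1/3, tube 2 = 1/9, ..., tube n = 1/3^n.
    return f"1/{fold ** last_positive_tube}"

assert endpoint_titre(3) == "1/27"   # agglutination up to the third tube
assert endpoint_titre(4) == "1/81"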
Immunization.--Two skin allografts with an interval of at least 2 wk between them was the standard immunization procedure. The second-set tempo of rejection of the second graft showed that some degree of immunization had already been produced by the first graft. Some groups were "boosted" after skin grafting by the subcutaneous injection of 100 × 10^6 allogeneic spleen and lymph node cells. Three injections were given at weekly intervals and each dose was distributed between the four footpads and the angles of the mouth. Further groups of rats were immunized solely by one injection of 100 × 10^6 spleen and lymph node cells distributed equally between the four footpads. Unless otherwise stated, the lymphoid cells for GVH assay were removed from the immune donors between 10 days and 3 wk after the last immunization.

Validity of Comparing Immune and Nonimmune Cells by GVH Assay.--Quantitative comparison of the GVH activity of immune and nonimmune populations depends (a) on the slopes of the dose-response lines being similar and (b) on the tempos of the lymph node responses being similar. Condition (a) could be seen to be fulfilled with each acceptable assay. The time course of the popliteal lymph node enlargement was studied in preliminary experiments in both weak and strong combinations. Graded doses of TDL were given to a large number of recipients. Groups of four at each dose level were killed at days 3, 5, and 7 after injection. In the strong combination there was no difference in the tempo of the responses produced by the immune and nonimmune populations. In weak strain combinations immune cells did not produce a more rapid response, but with high doses the lymph nodes ceased enlarging by day 5. In fact, with these immune populations dose saturation corresponded to a lower lymph node response than in nonimmune populations or in strong strain combinations. However, acceptable assays could be performed by using smaller doses of immune cells, which gave the same gradation of response as did nonimmune cells.
In another local GVH system (that after the injection of lymphocytes into guinea pig skin) immune and nonimmune cells have also been found to produce reactions which develop at a similar tempo (6).
RESULTS
The GVH activity of lymphoid cells from putatively immune donors was assayed in comparison with cells from matched, nonimmune rats. The result of such an assay is expressed as a potency ratio: the ratio of the number of nonimmune cells to the number of immune cells required to produce equal lymph node enlargement. In the case of immune vs. nonimmune assays this potency ratio is called the factor of immunization (F.I.).
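The potency-ratio arithmetic can be made concrete. The Python sketch below follows the parallel-line construction described for Fig. 1 (a line through each group's mean log-dose/mean response point, with a common mean slope); the doses and node weights are invented, and only the procedure reflects the paper.

import numpy as np

def factor_of_immunization(doses_non, resp_non, doses_imm, resp_imm):
    """Potency ratio: nonimmune cells / immune cells giving equal response."""
    x_non, x_imm = np.log10(doses_non), np.log10(doses_imm)
    # Mean of the two groups' best-fit slopes (parallel-line assumption).
    b = (np.polyfit(x_non, resp_non, 1)[0] + np.polyfit(x_imm, resp_imm, 1)[0]) / 2
    # Each assay line passes through its group's mean dose/mean response point.
    a_non = np.mean(resp_non) - b * np.mean(x_non)
    a_imm = np.mean(resp_imm) - b * np.mean(x_imm)
    # Horizontal displacement between the parallel lines is the potency ratio.
    return 10 ** ((a_imm - a_non) / b)

# Hypothetical assay: tripling doses (units of 10^6 cells) and geometric-mean
# lymph node weights (mg).
fi = factor_of_immunization([0.33, 1.0, 3.0], [4.0, 8.5, 13.0],
                            [0.11, 0.33, 1.0], [5.5, 10.0, 14.5])
print(round(fi, 1))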
Series A: AgB-Identical Strain Combinations (See Table I)
Group 1: Immunization with Two Skin Grafts (Assay of TDL).--Four assays were performed in each of the reciprocal strain combinations AS → F and F → AS. In all eight assays the GVH activity of immune TDL was increased above that of nonimmune, but the mean activity was only doubled by immunization.
Group 2: Immunization with Two Skin Grafts plus Boosting with Allogeneic Lymphoid Cells (Assay of TDL).--Four assays were performed in the combination AS → F. The mean F.I. was higher than without boosting and there was no overlap in the results of individual assays.
Group 3: Immunization with Two Skin Grafts (Assay of Spleen Cells).--Five assays were performed in the combination F → AS. The consequence of assaying spleen cells rather than TDL was a higher F.I., which again did not overlap with group 1.
Group 4: Immunization with Two Skin Grafts plus Boosting with Allogeneic Lymphoid Cells (Assay of Spleen Cells).--In this situation the factors by which group 2 and group 3 differed from group 1 were imposed together. Four assays in the combination AS → F gave very high F.I. with a mean of 52. These F.I. were clearly much greater than in any of the other groups.

Taken as a whole the results of groups 1-4 indicate that the F.I. in weak strain combinations depends on the immunization procedure (even two skin grafts are not optimal) and also on the population of cells used for assay. When the results of groups 2 and 4 are analyzed, it can be estimated that before immunization spleen cells have slightly less GVH activity than TDL (0.5-1.0 times the activity) but after immunization they are 7-10 times more active.
In this situation the GVH activity is against the antigens present in the F1 recipients but not in the F2 donor, i.e., antigens determined by the minor loci for which the donor happens to be homozygous. Although the donors will on the average be homozygous at half the minor loci, the proportion will vary from rat to rat. For this reason it might have been misleading to work out the F.I. on each (arbitrary) immune and nonimmune pair. The results of four assays were combined (Fig. 1) and the overall mean dose-response lines were drawn. By this method the overall F.I. was 7.7, which was twice as high as the F.I. obtained by assay of TDL in the "ordinary" weak combinations (group 2) and also the F.I. in strong strain combinations (vide infra).

FIG. 1. TDL from AgB heterozygous (AS × BN) F2 donors injected into feet of (AS × BN) F1 recipients. Each point is geometric mean response of four lymph node weights. O--TDL from four nonimmune donors. X--TDL from four immune donors. Responses not significantly greater than those produced by same dose of syngeneic F1 TDL have been omitted. Assay lines have been drawn through mean dose/mean response point of each group with a slope equal to the mean slope of the lines of best fit for each group. Immune cells were 7-8 times more potent than nonimmune cells.
Series B: AgB-Different Strain Combinations

Group 1: Immunization with Two Skin Grafts (Assay of TDL).--Three assays were performed in each of three strain combinations (AS → AS2, AS2 → AS, AS → BN). In all assays the GVH activity of immune TDL was greater than or the same as that of nonimmune TDL (Table II). The mean F.I. of 1.4 was very low and was probably lower than the F.I. of corresponding assays in the weak combinations (P < 0.01).
Group 2: Immunization with Two Skin Grafts plus Boosting with Allogeneic Lymphoid Cells (Assay of Spleen Cells).--Four assays were performed in the combination AS2 → AS. In every case "immune" cells had less GVH activity than nonimmune, giving a mean F.I. of 0.60 (Table II). The F.I. was significantly lower than in the results of the assays in group 1. The finding that assaying spleen cells instead of TDL and boosting with lymphoid cells after grafting decreases the F.I. in strong combinations is in striking contrast to the situation in weak strain combinations, where imposition of these two factors increases the F.I. by a factor of about 25.
Group 3: Assay of Lymphoid Cells at Varying Intervals after a Single Immunizing Injection (Table III).--(a) In three experiments F donors were immunized by a single intraperitoneal injection of 100 × 10^6 BN spleen and lymph node cells. When spleen cells were taken 4 days after immunization and assayed against normal F spleen cells the F.I. were variable but certainly did not indicate any exceptional increase in GVH activity at this stage. Comparison of methyl green pyronin-stained films of the immune and nonimmune spleen cells showed an obvious increase in the number of large pyroninophilic cells in the immune suspensions. (b) In three experiments donors were immunized by the injection into the four footpads of a total of 100 × 10^6 BN spleen and lymph node cells. After 4 days the draining lymph nodes (popliteal and brachial) were removed and cell suspensions assayed against cells from the same lymph nodes of nonimmune rats. As in the previous subgroup, little or no increase in GVH activity was found in these lymph node cells despite the fact that the lymph nodes were enlarged in reaction to the immunization and contained a high proportion of large pyroninophilic cells.
(c) In four experiments F donors were immunized by injection into the four footpads of a total of 100 × 10^6 (F × BN) F1 hybrid spleen and lymph node cells. After between 3 and 4 wk, when the reaction in the draining nodes had subsided, cells prepared from the draining nodes were assayed against pooled lymph node cells from nonimmune donors. At the same time nondraining lymph node cells from the immune donors were assayed against nonimmune pooled lymph node cells. There was no significant difference between the GVH activity of the draining and nondraining lymph node cells, and although the F.I. was very variable, the mean value of less than unity suggested a tendency for the GVH activity of lymph node cells to be depressed below normal, as in group 2.
The experiments of series B fully achieved their object, which was to show that the very low F.I. found in the standard experimental situation (group 1) was not increased by altering the time between immunization and assay, the schedule of immunization, or the population of lymphoid cells assayed.
Series C: Factor of Immunization in Rats Incapable of an Alloantibody Response
The low F.I. in strong strain combinations may be a consequence of the high frequency of antigen-sensitive cells in nonimmune animals specifically directed against a particular complement of strong histocompatibility antigens (1,2). An alternative explanation is that the humoral alloantibody response, which is readily detectable against strong histocompatibility antigens, is responsible for partly inhibiting the development of the cellular immune response, a process which might be described as partial immunological enhancement. This latter possibility is not, of course, at all inconsistent with the first; both mechanisms may operate. However, if inhibition by alloantibody were the sole factor responsible for the low F.I. in strong strain combinations, then it would be predicted that immunization of an animal which had been manipulated so as to be capable of a cellular but not a humoral response would result in a raised F.I.
A critical dose of whole body X-irradiation inhibits the humoral alloantibody response of mice without detectably prolonging skin allograft survival (7). It was found empirically that a suitable dose of X-irradiation for this purpose in AS and F rats was 300 rads. The survival of skin allografts applied the day after irradiation was not detectably prolonged, and the alloantibody response to (BN × AS) F1 hybrid lymphoid cells injected into AS rats was completely suppressed. Seven nonirradiated, immunized control rats all produced hemagglutinating alloantibody at titers of 1/27 or 1/81 on day 6 and at 3 wk after immunization, whereas none of seven irradiated, immunized rats had detectable antibody on either day.
In the first three experiments (Table IV, C-1) F rats were immunized with a BN skin graft 1 day after 300 rads of whole body X-irradiation and their TDL were assayed between 10 days and 3 wk later. This assay was done by comparison with irradiated, nonimmune F donors and also untreated F donors. The thoracic duct outputs of the two irradiated groups were depressed to about 10% of normal and the residual TDL had lower GVH activity than normal. The F.I. was variable, but since the highest value was 1.6 there was no suggestion that irradiation before immunization had produced a higher F.I. than in the experiments of series B.
In other experiments (Table IV, C-2) AS rats were immunized 1 day after 300 rads of whole body X-irradiation by injection into the footpads of 100 × 10^6 (AS × BN) F1 spleen and lymph node cells. The GVH activity of their TDL obtained 3 wk after irradiation was compared to that of TDL from irradiated, nonimmune donors. TDL from two other control groups were included, i.e. (a) nonirradiated and immunized and (b) untreated. The results of these assays were very similar to those of the first group. Whole body X-irradiation is followed by a low output of TDL of diminished GVH activity. However, the F.I. remains low as in the experiments of series B. It was remarkable that although irradiated, immunized rats rejected skin allografts at the usual tempo, the GVH activity of their TDL was in all cases less than that of TDL from normal (nonimmune) donors.
DISCUSSION
The object of the present experiments was to measure the change in GVH activity after specific alloimmunization (expressed as the factor of immunization) in a number of experimental situations. In the mouse (1) and in the chicken (8) the F.I. is greater when the antigenic disparity is less; when a strong histocompatibility difference is involved it is near unity. The present results fully confirm earlier results in less satisfactory assay systems (9,10) which hinted that the dependence of F.I. on antigenic strength is valid in the rat also.
The most dramatic difference in F.I. between weak and strong strain combinations was found when spleen cells were assayed after immunization with two skin allografts followed by boosting injections of allogeneic lymphoid cells. In the weak combination the F.I. of about 50 is very similar to the values from comparable experiments in the mouse (1). In the strong combination the F.I. of less than unity also has a precedent in mice (11,12). These workers attributed the depression of GVH activity to immunological enhancement. In the present experiments there are no grounds for favoring enhancement over the alternative explanation of partial tolerance. However, preliminary experiments (Simonsen, M., unpublished data) designed to test for the enhancing effect of AgB-hemagglutinating antisera in both directions of the AS-AS2 combination have been performed. Normal spleen cells were mixed with equal volumes of undiluted antisera against the recipient antigen or, for control, with normal sera before injection into the hind feet. No difference in the popliteal lymph node enlargement was observed between test and control groups.
After two skin allografts had been rejected, the F.I. in a weak combination could be further increased by boosting with allogeneic lymphoid cells, especially if spleen cells were assayed. The particular effectiveness of living peripheral lymphoid cells in allogeneic immunization as compared with skin grafts may reflect their high content of histocompatibility antigen (13,14) and their widespread distribution in the spleen and lymph nodes before their destruction by a host-vs.-graft reaction (15).
Spleen cells, as a source of donor cells for the GVH assay, gave a higher F.I. than did TDL in weak combinations. About half of the small lymphocytes in the spleen belong to the same recirculating pool as do the great majority of TDL (15). Under the conditions of these experiments the greatly increased GVH activity of spleen cells after immunization can be attributed mainly to the nonrecirculating population in the spleen. This conclusion can be compared to recent data on immunological memory to conventional antigens (17,18). Although it was confirmed that immunological memory is carried by recirculating cells (19), these studies showed that the antigenically stimulated lymph node responds to a second injection of antigen more quickly and more strongly than does the contralateral node. Apparently immunological memory is not confined to recirculating lymphocytes but is also strongly represented in the nonrecirculating population of a lymph node.
In AgB-different (strong) strain combinations the several factors which influenced the F.I. in weak combinations were completely ineffective in raising the F.I. It was particularly remarkable that when draining lymph nodes were assayed 4 days after subcutaneous immunization, at the height of the large pyroninophilic cell response, the F.I. was near unity. By contrast, a large increment of stimulation in mixed-lymphocyte culture was found when spleen cells were removed 4 days after intraperitoneal immunization with allogeneic spleen cells (20), but in a very similar experiment (B-3[a]) no increment was found in GVH assay.
The failure of irradiation applied before immunization to bring about a higher F.I. suggested that the inhibitory effect of alloantibody is not solely responsible for the low F.I. in strong strain combinations. Of course it cannot be certain that partial inhibition of the cellular response did not result from concentrations of enhancing antibody which were too low to be detected. However, this result is especially convincing of the intrinsic limitation of the F.I. in strong combinations, since irradiated animals provide a superior environment for immune responses mediated by transferred cells (21); it is reasonable to suppose that the surviving cells reacting to the antigen would be favored in the same way by the extra space available in the lymphoid tissue.
An alternative way to immunize animals incapable of a humoral alloantibody response is to use bursectomized chickens. Experiments are at present being performed in this laboratory to measure the F.I. of such animals. So far the F.I. has not shown any tendency to increase in bursectomized chickens in spite of the absence of a detectable antibody response.
The conclusion that the limitation of the F.I. in strong combinations is a consequence of a very high proportion of antigen-sensitive cells in the nonimmune animal (1,2) has been reinforced by the present experiments. This is quite consistent with the possibility that the cells reactive to strong antigens generate predominantly effector cells (which may not count in GVH assay), whereas the cells reactive to weak antigens generate both effector cells and more of themselves, thus producing high F.I. (22).

SUMMARY

Using a popliteal lymph node weight assay the graft-versus-host activity of lymphocytes from donors immunized with allogeneic tissue has been assayed by comparison with that of lymphocytes from nonimmune donors. When the donors were immunized against weak histocompatibility antigens (non-AgB) the specific GVH activity of their lymphocytes was increased. This increase was greater if spleen cells rather than thoracic duct lymphocytes were the source of the donor cells used for assay. The increase in GVH activity was also greater if the standard immunization procedure of two successive skin allografts was followed by three boosting injections of allogeneic lymphoid cells.
When donors were immunized against strong histocompatibility antigens the specific GVH activity of the donors' lymphocytes was slightly increased, was unchanged, or was actually decreased depending on the experimental situation. In donors rendered incapable of a humoral alloantibody response by whole body X-irradiation, immunization across a strong barrier was followed by little or no increase in the specific GVH activity of TDL. In the rat, as in other species, the increase in GVH activity after immunization is inversely proportional to the strength of the antigenic barrier involved.
THE FACTOR OF IMMUNIZATION IN THE RAT: THE EFFECT OF ALLOGENEIC IMMUNIZATION ON GRAFT-VERSUS-HOST ACTIVITY

By WILLIAM L. FORD, D. PHIL., and MORTEN SIMONSEN, M.D.

(From the Institute for Experimental Immunology, University of Copenhagen, Nørre Allé 71, Copenhagen Ø)

(Received for publication 17 November 1970)

Table footnotes: Single injection of 100 × 10^6 spleen and lymph node cells. * Draining lymph node cells assayed against nonimmune lymph node cells. ‡ Nondraining lymph node cells assayed against nonimmune lymph node cells. F.I. after 300 rads of whole body X-irradiation 24 hr before immunization. * No. of TDL required to produce lymph node enlargement to 10.0 mg; inversely related to strength of combination.
TABLE II
Factor of Immunization in AgB-Different Strain Combinations. * No. of cells required to produce lymph node enlargement to 10.0 mg; inversely related to strength of combination. Three injections of 100 × 10^6 allogeneic lymph node and spleen cells given subcutaneously at weekly intervals. | 2014-10-01T00:00:00.000Z | 1971-03-31T00:00:00.000 | {
"year": 1971,
"sha1": "c9db4c81b4118b7149fc9b7e9d3cd58bdacff9a9",
"oa_license": "CCBYNCSA",
"oa_url": "https://rupress.org/jem/article-pdf/133/4/938/1084146/938.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "c9db4c81b4118b7149fc9b7e9d3cd58bdacff9a9",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
16841443 | pes2o/s2orc | v3-fos-license | Kaon photoproduction on the nucleon with constrained parameters
The new experimental data on kaon photoproduction on the nucleon, γp → K+Λ, have been analyzed by means of a multipoles model. Different from the previous models, in this analysis the resonance decay widths are constrained to the values given by the Particle Data Group (PDG). The result indicates that constraining these parameters to the PDG values could dramatically change the conclusions about the important resonances in this reaction reached in previous studies.
One of the most intensively studied topics in the realm of hadronic physics is kaon photoproduction. In the last decades a large number of attempts have been devoted to modeling this reaction. Since this process is not dominated by any single resonant state, the main difference among these models lies chiefly in the choice of nucleon, hyperon, and kaon resonances. Recently, a large number of experimental data of good quality have been provided by the SAPHIR 1, CLAS 2, LEPS 3, and GRAAL 4 collaborations. However, the lack of mutual consistency between the SAPHIR and other data found by recent phenomenological studies has made the extraction of the "missing resonance" properties more difficult. In our previous work we investigated the physics consequences of using each data set 5. It was found that the use of SAPHIR and CLAS data, individually or simultaneously, leads to quite different resonance parameters which, therefore, could lead to different conclusions on "missing resonances". In this paper we extend this investigation by constraining the resonance decay widths to the values given by the Particle Data Group 6. This is intended to approximately account for unitarity corrections at tree level, i.e., constraining the model by including some information from the leading π and η channels.
The background amplitudes of the model are constructed from a series of tree-level Feynman diagrams, consisting of the standard s-, u-, and t-channel Born terms along with the K*(892) and K1(1270) vector meson exchanges. The resonance amplitudes with physical mass M_R and width Γ_R are assumed to have the Breit-Wigner form, 7,8 where W represents the total c.m. energy, ℓ indicates the kaon angular momentum, and ℓ± ≡ ℓ ± 1/2 = j gives the total angular momentum. The isospin factor c_KY is 3/2 and −1/√3 for the isospin 3/2 and isospin 1/2 cases, 7,8 respectively. The factor f_KR is the usual Breit-Wigner factor describing the decay of a resonance R with a total width Γ_tot(W) and physical mass M_R, whereas f_γR indicates the γNR vertex.
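The multipole expression itself did not survive extraction. In MAID-style isobar analyses of this kind the resonance multipole is commonly written in the Breit-Wigner form below; this is a reconstruction from the surrounding definitions, with the fitted strength Ā_{ℓ±} and the phase φ being assumptions rather than a verbatim restoration:

% Reconstructed MAID-style Breit-Wigner multipole (not a verbatim restoration)
A_{\ell\pm}(W) \;=\; \bar{A}_{\ell\pm}\, c_{KY}\,
  \frac{f_{\gamma R}(W)\, \Gamma_{\mathrm{tot}}(W)\, M_R\, f_{KR}(W)}
       {M_R^{2} - W^{2} - i\, M_R\, \Gamma_{\mathrm{tot}}(W)}\; e^{i\phi}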
The results of fitting to the SAPHIR or CLAS data, compared with these data, are displayed in Fig. 1. Obviously, the model fitted to the SAPHIR data cannot perfectly explain the CLAS data, and vice versa. It also appears from this figure that the two data sets show the largest discrepancy at W ≈ 1.9 GeV in the forward and backward directions. Consequently, at this energy the total cross section data show a bump (see the upper panels of Fig. 2), which corresponds to the "missing resonance" D13(2080) found in Ref. 9. As shown in the upper panels of Fig. 2, the CLAS data show a relatively larger bump compared to the SAPHIR data. Clearly, this implies that the extracted information on the resonances responsible for this peak could be different if we used different data sets.
Contributions of the background terms and the most important resonances are also shown in Fig. 2. Near the production threshold the two well-established resonances S11(1650) and P13(1720) play important roles in both fits. This finding corroborates the results of our previous studies 5,10. Reference 11 has also arrived at the same result, except for the P13(1720). Although not too significant, Ref. 11 found that this state is still required in the case of SAPHIR data. Comparison between the two dash-dotted curves in the lower panels of Fig. 2 also confirms that the role of this state is more substantial in the case of SAPHIR data.
Another important state is the P13(1900), a two-star resonance with a total width Γ ≈ 500 MeV. Although we found that the extracted mass is much larger than 1900 MeV, its significant role found in all recent studies 5,10,11 is also observed in the present analysis. On the other hand, the resonance with the same quantum numbers but lower mass, P11(1710), plays an insignificant role in this process, which is consistent with the finding of our previous work 5.
Compared to the previous studies, the only different result exhibited in Fig. 2 is the origin of the second peak in the total cross section. As clearly shown, this peak originates from the contributions of the S11(2090) and P11(2100) states. Most of the recent investigations found that this peak indicates a "missing" D13 resonance with a total width Γ varying from 165 to 570 MeV 5. Clearly, the effect of constraining the fitted parameters is quite significant in this case. Therefore, it is quite important to address this issue in future single-channel analyses of kaon photoproduction.
It has been found that the inclusion of the new CLAS Cx and Cz data reveals the role of the S11(1650), P11(1710), P13(1720), and P13(1900) resonances for the description of these data 10. In this study we also investigate the effects of these data on our model. The importance of an individual resonance for the fits with and without these data is represented in Fig. 3 by Δχ² = |χ²_All − χ²_All−N*|/χ²_All × 100%, where χ²_All is the χ² obtained by using all resonances and χ²_All−N* is the χ² obtained by using all but a specific resonance. Obviously, constraining the free parameters in the fits changes the conclusion of the previous analyses. Only the P13(1720) seems to be still important in both cases, whereas the P13(1900) is only important in the fit without Cx and Cz data. The near-threshold resonance S11(1650) is found to be relatively important in both cases. We also note that in this study the S11(2090) is found to be quite important in both cases. The importance of this state was actually found by the former study 5, although with a smaller Δχ² (≈ 6%). In conclusion we would like to say that the use of different data sets could lead to different conclusions on the important resonances required in kaon photoproduction. Furthermore, constraining the free parameters in the multipoles model for this process results in a significant effect and could dramatically change the conclusions found in the previous analyses. A more detailed study is currently underway and the result will be published elsewhere in the near future.
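The Δχ² measure defined above is straightforward to evaluate once the fits are done. A minimal Python sketch, using invented χ² values (the actual values are only displayed graphically in Fig. 3):

def delta_chi2(chi2_all: float, chi2_without: float) -> float:
    # Delta chi^2 = |chi2_All - chi2_All-N*| / chi2_All x 100%, as in the text.
    return abs(chi2_all - chi2_without) / chi2_all * 100.0

chi2_all = 2.10  # hypothetical chi^2 of the fit with all resonances
for name, chi2_wo in [("S11(1650)", 2.35), ("P13(1720)", 2.60), ("P11(1710)", 2.12)]:
    print(f"{name}: Delta chi^2 = {delta_chi2(chi2_all, chi2_wo):.1f}%")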
The authors acknowledge the support from the University of Indonesia. | 2009-04-23T04:41:03.000Z | 2009-04-23T00:00:00.000 | {
"year": 2009,
"sha1": "f4f114329127ba19f322ed6f30284a8a0094ae71",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0904.3598",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f4f114329127ba19f322ed6f30284a8a0094ae71",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
100067117 | pes2o/s2orc | v3-fos-license | Rapid purification and crystallization of Neurospora crassa tyrosinase
A rapid method for the purification of Neurospora crassa tyrosinase (EC 1.14.18.1) has been developed using an FPLC system that yields sufficient pure protein for protein crystals to be grown.
Rapid purification and crystallization of Neurospora crassa tyrosinase
A rapid method for the purification of Neurospora crassa tyrosinase (EC 1.14.18.1) has been developed using an FPLC system that yields sufficient pure protein for protein crystals to be grown.
A. M. Fuentes, M. A. J. Taylor, J. Jenkins and I. Connerton - AFRC, Institute of Food Research, Earley Gate, Whiteknights Road, Reading, RG6 2EF, UK
Neurospora crassa wild type (74A) was grown at 30 C in shake culture in full-strength Vogel's medium, 2% sucrose, from a large conidial inoculum according to the procedure of Lerch (1987 Meth. Enzymol. 142:165-169). Tyrosinase was induced by transferring the mycelia into fresh half-strength Vogel's medium, 0.5% sucrose, containing cycloheximide to a final concentration of 1.5 uM (Horowitz et al. 1970 J. Biol. Chem. 245:2784-2788). The mycelia were harvested by filtration under low vacuum, dried with paper towels, and ground in sand with a pestle and mortar in the presence of 0.1 M Tris-HCl buffer, pH 8.0, and 2 mM sodium benzoate (buffer A).
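As a quick aid for the induction step, the volume of inhibitor stock needed to reach the 1.5 uM target follows from C1*V1 = C2*V2. A minimal Python sketch; the 10 mM stock concentration and 1 litre culture volume are illustrative assumptions, not part of the published protocol:

```python
def stock_volume_ml(final_conc_um, culture_vol_ml, stock_conc_um):
    """Volume of stock (ml) to add so the culture reaches final_conc_um,
    via C1*V1 = C2*V2 (dilution by the small added volume is neglected)."""
    return final_conc_um * culture_vol_ml / stock_conc_um

# Protocol target: cycloheximide at 1.5 uM final in the induction medium.
# Assumed for illustration: 1 litre of medium, 10 mM (10,000 uM) stock.
print(stock_volume_ml(1.5, 1000.0, 10_000.0))  # -> 0.15 ml per litre
```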
After stirring for one hour, the crude extract was centrifuged and the supernatant collected. The supernatant was then applied to a Q-Sepharose column (2.6 x 28 cm) pre-equilibrated with buffer A. After washing the column with two column volumes of buffer A, tyrosinase activity was eluted with a linear gradient (0-0.75 M) of NaCl over 600 ml in buffer A. A wide peak of protein showing tyrosinase activity was eluted between 0.18 and 0.54 M NaCl. The pooled activities from the tyrosinase peak were desalted on a Sephadex G-15 column (5.0 x 16 cm), pre-equilibrated in 50 mM sodium citrate buffer, pH 5.0, containing 2 mM sodium benzoate (buffer B). Active fractions were loaded onto a Mono-S HR5/5 column pre-equilibrated in buffer B. Activity was eluted with a linear gradient (0-0.3 M) of NaCl over 12 ml in buffer B. Protein containing tyrosinase activity eluted as a single peak at 0.2 M NaCl. The peak fractions appear as a single band on SDS-PAGE. Samples of each of the steps during the purification are presented in Figure 1.
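For anyone reproducing these chromatography steps with a programmable fraction collector, the NaCl concentration at any point in a linear gradient is a straight-line interpolation between the gradient limits. A small sketch using the values quoted above; the function names are ours, not from the original note:

```python
def gradient_conc(v_ml, v_total_ml, c_start_m=0.0, c_end_m=0.75):
    """NaCl concentration (M) after v_ml of a linear c_start -> c_end
    gradient run over v_total_ml."""
    return c_start_m + (c_end_m - c_start_m) * v_ml / v_total_ml

def elution_volume(c_m, v_total_ml, c_start_m=0.0, c_end_m=0.3):
    """Inverse: gradient volume (ml) at which concentration c_m is reached."""
    return v_total_ml * (c_m - c_start_m) / (c_end_m - c_start_m)

# Q-Sepharose: 0-0.75 M NaCl over 600 ml; the 0.18-0.54 M activity window
# corresponds to roughly 144-432 ml into the gradient.
print(elution_volume(0.18, 600, c_end_m=0.75), elution_volume(0.54, 600, c_end_m=0.75))
# Mono-S: 0-0.3 M NaCl over 12 ml; the 0.2 M peak sits about 8 ml in.
print(elution_volume(0.2, 12))
```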
The purified protein was concentrated to 15 mg/ml using a Centricon 10 spun at 5,000 x g for 1 h. Protein was crystallized by the hanging drop method (Ducroix and Giege, Methods of Crystallization, pp. 73-98 in Crystallization of Nucleic Acids and Proteins: A Practical Approach, A. Ducroix and R. Giege (eds.), IRL Press, Oxford, UK). The reservoir contained 4-6% w/v polyethylene glycol MW 12,000 in 0.3 M MOPS/ethanolamine buffer, pH 7.2, at 18 C, in the presence of 50 mM thiourea as an inhibitor of tyrosinase activity. After three days tetragonal needles of 0.03 x 0.03 x 1.5 mm were formed, as shown in Figure 2. Improving the size of the crystals to enable X-ray structure analysis of tyrosinase is the subject of current work.
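Since the authors note the needles are still too small for X-ray analysis, the usual next step is a fine screen around the reported condition, varying precipitant and pH in small steps. A minimal sketch of such a grid; the ranges and step sizes are our assumptions, not part of the published protocol:

```python
from itertools import product

# Reported condition: 4-6% w/v PEG 12,000 in 0.3 M MOPS/ethanolamine,
# pH 7.2, 18 C, 50 mM thiourea. The screen ranges below are assumed.
peg_percent = [3.0, 4.0, 5.0, 6.0, 7.0]   # % w/v PEG 12,000
ph_values = [6.8, 7.0, 7.2, 7.4, 7.6]     # buffer pH

for well, (peg, ph) in enumerate(product(peg_percent, ph_values), start=1):
    print(f"well {well:2d}: {peg:.1f}% PEG 12000, "
          f"0.3 M MOPS/ethanolamine pH {ph:.1f}, 50 mM thiourea, 18 C")
```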
Figure 1. SDS-PAGE gel stained with Coomassie brilliant blue R250 showing the purification of tyrosinase. (A) Prestained protein molecular weight standards, (B) Cell culture supernatant, (C) Crude extract of Neurospora, (D) Post-cell debris supernatant, (E) Tyrosinase from Q-Sepharose, (F) Tyrosinase from G-15, (G) Tyrosinase from Mono S HR5/5.
Figure 2. Long tetragonal needles of Neurospora tyrosinase. The square cross-section of the needle can be clearly seen. | 2018-12-07T22:44:16.698Z | 1993-01-01T00:00:00.000 | {
"year": 1993,
"sha1": "e62be439b0de7101cefbe2c7a1faf35f28d43c6e",
"oa_license": "CCBYSA",
"oa_url": "https://newprairiepress.org/cgi/viewcontent.cgi?article=1401&context=fgr",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "e62be439b0de7101cefbe2c7a1faf35f28d43c6e",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |